| column | type | min length | max length |
|---|---|---|---|
| markdown | string | 0 | 1.02M |
| code | string | 0 | 832k |
| output | string | 0 | 1.02M |
| license | string | 3 | 36 |
| path | string | 6 | 265 |
| repo_name | string | 6 | 127 |
Run the following cell to get the first data set from the list. This will return a DataFrame and assign it to the variable d2:
d2 = pixiedust.sampleData(1)
_____no_output_____
Apache-2.0
notebook/Intro to PixieDust.ipynb
jordangeorge/pixiedust
Pass the sample data set (d2) into the display() API:
display(d2)
_____no_output_____
Apache-2.0
notebook/Intro to PixieDust.ipynb
jordangeorge/pixiedust
You can also download data from a CSV file into a DataFrame which you can use with the display() API:
d3 = pixiedust.sampleData("https://openobjectstore.mybluemix.net/misc/milliondollarhomes.csv")
_____no_output_____
Apache-2.0
notebook/Intro to PixieDust.ipynb
jordangeorge/pixiedust
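As an aside (not part of the original notebook), the same CSV can also be loaded with plain pandas if you just want to inspect it locally; `csv_url` below is simply the URL from the cell above, and whether `display()` accepts a pandas DataFrame depends on your PixieDust setup:

```python
import pandas as pd

csv_url = "https://openobjectstore.mybluemix.net/misc/milliondollarhomes.csv"
homes_df = pd.read_csv(csv_url)  # requires the URL to still be reachable
homes_df.head()
```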
PixieDust Log

PixieDust comes complete with logging to help you troubleshoot issues. You can find more info at https://ibm-watson-data-lab.github.io/pixiedust/logging.html. To access the log, run the following cell:
%pixiedustLog -l debug
_____no_output_____
Apache-2.0
notebook/Intro to PixieDust.ipynb
jordangeorge/pixiedust
Environment Info

The following cells will print out information related to your notebook environment.
%%scala
val __scala_version = util.Properties.versionNumberString

import platform
print('PYTHON VERSION = ' + platform.python_version())
print('SPARK VERSION = ' + sc.version)
print('SCALA VERSION = ' + __scala_version)
_____no_output_____
Apache-2.0
notebook/Intro to PixieDust.ipynb
jordangeorge/pixiedust
twoDim

> Code for a 2-D problem.
#hide
from nbdev.showdoc import *
_____no_output_____
Apache-2.0
01_twoDim.ipynb
YanniPapandreou/statFEM
2-dimensional case (PDE)

We consider the following 2-D problem:

$$-\nabla\cdot\left(\kappa(x)\nabla u(x)\right)=f(x) \quad\forall x\in D=[0,1]^{2}$$

$$u(x)=0\quad\forall x\in\partial D$$

where $f$ is again a random forcing term, assumed to be a GP in this work.

Variational formulation

The variational formulation is given by:

$$a(u,v)=L(v)$$

where:

$$a(u,v)=\int_{D}\nabla v\cdot\left(\kappa\nabla u\right)dx$$

and

$$L(v)=\int_{D}fv\,dx$$

We will make the following choices for $\kappa,f$:

$$\kappa(x)=1$$

$$f\sim\mathcal{G}\mathcal{P}(\bar{f},k_{f})$$

$$\bar{f}(x)=1$$

$$k_{f}(x,y) = \sigma_f^{2}\exp\left(-\frac{\|x-y\|^2}{2l_f^2}\right)$$

$$\sigma_{f} = 0.1$$

$$l_f = 0.4$$

where $\|\cdot\|$ is the usual Euclidean norm.

Since we do not have access to a suitable Green's function for this problem, we will have to estimate the rate of convergence of the statFEM prior and posterior by comparing them on a sequence of refined meshes. More details on this will follow later. Thus, we need code similar to that for the 1-D problem.

statFEM prior mean

We will again utilise FEniCS to obtain the statFEM prior mean. For this purpose, we create a function `mean_assembler` which will assemble the mean for the statFEM prior.
#export from dolfin import * import numpy as np from scipy import integrate from scipy.spatial.distance import cdist from scipy.linalg import sqrtm from scipy.sparse import csr_matrix from scipy.sparse.linalg import spsolve from scipy.interpolate import interp1d from joblib import Parallel, delayed import multiprocessing # code to assemble the mean for a given mesh size def mean_assembler(h,f_bar): "This function assembles the mean for the statFEM prior for our 2-D problem" # get size of the grid J = int(np.round(1/h)) # set up the mesh and function space for FEM mesh = UnitSquareMesh(J,J) V = FunctionSpace(mesh,'Lagrange',1) # set up boundary condition def boundary(x, on_boundary): return on_boundary bc = DirichletBC(V, 0.0, boundary) # set up the functions κ and f κ = Constant(1.0) f = f_bar # set up the bilinear form for the variational problem u = TrialFunction(V) v = TestFunction(V) a = inner(κ*grad(u),grad(v))*dx # set up the linear form L = f*v*dx # solve the variational problem μ = Function(V) solve(a == L, μ, bc) return μ
_____no_output_____
Apache-2.0
01_twoDim.ipynb
YanniPapandreou/statFEM
`mean_assembler` takes in the mesh size `h` and the mean function `f_bar` for the forcing and computes the mean of the approximate statFEM prior, returning this as a FEniCS function. > Important: `mean_assembler` requires `f_bar` to be represented as a FEniCS function/expression/constant. Let's check that this is working:
h = 0.1
f_bar = Constant(1.0)
μ = mean_assembler(h,f_bar)
μ

# check the type of μ
assert type(μ) == function.function.Function
_____no_output_____
Apache-2.0
01_twoDim.ipynb
YanniPapandreou/statFEM
Let's plot $\mu$:
#hide_input
# use FEniCS to plot μ (matplotlib is needed for the axis labels below)
import matplotlib.pyplot as plt

plot(μ)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title(r'Plot of statFEM mean for $h=%.2f$'%h)
plt.show()
_____no_output_____
Apache-2.0
01_twoDim.ipynb
YanniPapandreou/statFEM
statFEM prior covariance

We will also utilise FEniCS to obtain an approximation of our statFEM covariance function. The statFEM covariance can be approximated as follows:

$$c_u^{\text{FEM}}(x,y)\approx\sum_{i,j=1}^{J}\varphi_{i}(x)Q_{ij}\varphi_{j}(y)$$

where $Q=A^{-1}MC_{f}M^{T}A^{-T}$, the $\{\varphi_{i}\}_{i=1}^{J}$ are the FE basis functions corresponding to the interior nodes of our domain, and $C_f$ is the kernel matrix of $f$ (evaluated on the FEM grid).

As we will be comparing the statFEM covariance functions for finer and finer FE mesh sizes, we will need to be able to assemble the statFEM covariance function on a grid. As discussed in oneDim, we can assemble such covariance matrices in a very efficient manner. The code remains largely the same as in the 1-D case and so we do not go into as much detail here. We start by creating a function `kernMat` which assembles the covariance matrix corresponding to a covariance function `k` on a grid `grid`.
#export def kernMat(k,grid,parallel=True,translation_inv=False): "Function to compute the covariance matrix $K$ corresponding to the covariance kernel $k$ on a grid. This matrix has $ij$-th entry $K_{ij}=k(x_i,x_j)$ where $x_i$ is the $i$-th point of the grid." # get the length of the grid n = grid.shape[0] # preallocate an n x n array of zeros to hold the cov matrix K = np.zeros((n,n)) # check if the cov matrix should be computed in parallel if parallel: # compute the cov matrix in parallel by computing the upper triangular part column by column # set up function to compute the ith column of the upper triangular part: def processInput(i): return np.array([k(grid[i,:],grid[j,:]) for j in range(i,n)]) # get the number of cpu cores present and compute the upper triangular columns in parallel num_cores = multiprocessing.cpu_count() results = Parallel(n_jobs=num_cores)(delayed(processInput)(i) for i in range(n)) # store the results in the appropriate positions in K #for (i,v) in enumerate(results[0:n-1]): for (i,v) in enumerate(results): # is this correct??? K[i,i:] = v # only the upper triangular part has been formed, so use the symmetry of the cov mat to get full K: K = K + K.T - np.diag(K.diagonal()) return K elif translation_inv: # reshape grid so that it has correct dimensions grid = grid.reshape(n,-1) # compute the distance matrix D D = cdist(grid,grid) # evaluate the kernel function using D K = k(D) return K else: # compute the cov mat using a nested for loop for i in range(n): for j in range(i,n): K[i,j] = k(grid[i,:],grid[j,:]) K = K + K.T - np.diag(K.diagonal()) return K
_____no_output_____
Apache-2.0
01_twoDim.ipynb
YanniPapandreou/statFEM
> Note: This function takes in two optional boolean arguments `parallel` and `translation_inv`. The first of these specifies whether or not the cov matrix should be computed in parallel and the second specifies whether or not the cov kernel is translation invariant. If it is, the covariance matrix is computed more efficiently using the `cdist` function from scipy. Let's quickly test if this function is working, by computing the cov matrix for white noise, which has kernel function $k(x,y)=\delta(x-y)$. For a grid of length $N$ this should be the $N\times N$ identity matrix.
# set up the kernel function # set up tolerance for comparison tol = 1e-16 def k(x,y): if (np.abs(x-y) < tol).all(): # x == y within the tolerance return 1.0 else: # x != y within the tolerance return 0.0 # set up grid n = 21 x_range = np.linspace(0,1,n) grid = np.array([[x,y] for x in x_range for y in x_range]) N = len(grid) # get length of grid (N=n^2) K = kernMat(k,grid,True,False) # parallel mode # check that this is the N x N identity matrix assert (K == np.eye(N)).all()
_____no_output_____
Apache-2.0
01_twoDim.ipynb
YanniPapandreou/statFEM
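To make the formula $Q=A^{-1}MC_{f}M^{T}A^{-T}$ from above concrete, here is a small dense NumPy sketch (not from the original notebook) with toy random matrices standing in for the stiffness matrix $A$, mass matrix $M$, forcing covariance $C_{f}$ and basis-evaluation matrix $\boldsymbol{\Phi}$; the FEniCS-based functions below build the real, sparse versions restricted to the interior dofs.

```python
# toy illustration of Q = A^{-1} M C_f M^T A^{-T} and of the covariance on a grid
import numpy as np

np.random.seed(0)
n_dof, n_grid = 6, 4
A = np.eye(n_dof) + 0.1*np.random.randn(n_dof, n_dof)  # stand-in stiffness matrix
M = np.eye(n_dof) + 0.1*np.random.randn(n_dof, n_dof)  # stand-in mass matrix
L_chol = np.random.randn(n_dof, n_dof)
C_f = L_chol @ L_chol.T                                # stand-in forcing covariance (PSD)
Q = np.linalg.solve(A, M @ C_f @ M.T) @ np.linalg.inv(A).T
Phi = np.random.rand(n_dof, n_grid)                    # stand-in for φ_i(x_j)
cov_on_grid = Phi.T @ Q @ Phi                          # c_u^FEM evaluated on the toy grid
print(cov_on_grid.shape)                               # (4, 4)
```

The next function, `BigPhiMat`, assembles the real (sparse) counterpart of the `Phi` stand-in used here.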
We now create a function `BigPhiMat` which uses FEniCS to efficiently compute the matrix $\boldsymbol{\Phi}$ whose entries are the FE basis functions evaluated at the grid points, $\Phi_{ij}=\varphi_{i}(x_{j})$.
#export def BigPhiMat(J,grid): "Function to compute the $\Phi$ matrix." # create the FE mesh and function space mesh = UnitSquareMesh(J,J) V = FunctionSpace(mesh,'Lagrange',1) # get the tree for the mesh tree = mesh.bounding_box_tree() # set up a function to compute the ith column of Phi corresponding to the ith grid point def Φ(i): x = grid[i] cell_index = tree.compute_first_entity_collision(Point(*x)) cell = Cell(mesh,cell_index) cell_global_dofs = V.dofmap().cell_dofs(cell_index) vertex_coordinates = cell.get_vertex_coordinates() cell_orientation = cell.orientation() data = V.element().evaluate_basis_all(x,vertex_coordinates,cell_orientation) return (data,cell_global_dofs,i*np.ones_like(cell_global_dofs)) # compute all the columns of Phi using the function above res = [Φ(i) for i in range(len(grid))] # assemble the sparse matrix Phi using the results data = np.hstack([res[i][0] for i in range(len(grid))]) row = np.hstack([res[i][1] for i in range(len(grid))]) col = np.hstack([res[i][2] for i in range(len(grid))]) return csr_matrix((data,(row,col)),shape=(V.dim(),len(grid)))
_____no_output_____
Apache-2.0
01_twoDim.ipynb
YanniPapandreou/statFEM
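A quick sanity check of `BigPhiMat` (not in the original notebook): piecewise-linear Lagrange basis functions form a partition of unity, so each column of $\boldsymbol{\Phi}$, which stacks all basis functions evaluated at one grid point, should sum to 1 for any point in the domain.

```python
# partition-of-unity check on a few arbitrary points
check_grid = np.array([[0.3, 0.3], [0.51, 0.22], [0.75, 0.9]])
Phi_check = BigPhiMat(10, check_grid)
col_sums = np.asarray(Phi_check.sum(axis=0)).flatten()
assert np.allclose(col_sums, 1.0)
```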
`BigPhiMat` takes in two arguments: `J`, which controls the FE mesh size ($h=1/J$), and `grid`, which is the grid in the definition of $\boldsymbol{\Phi}$. `BigPhiMat` returns $\boldsymbol{\Phi}$ as a sparse `csr_matrix` for memory efficiency.

> Note: Since FEniCS works with the FE functions corresponding to all the FE dofs, while our statFEM cov matrix only uses the FE functions corresponding to non-boundary dofs, we need to account for this in the code. See the source code for `BigPhiMat` to see how this is done.

We now create a function `cov_assembler` which assembles the approximate FEM covariance matrix on the grid.
#export # function to assemble the fem covariance def cov_assembler(J,k_f,grid,parallel,translation_inv): "Function to assemble the approximate FEM covariance matrix on the reference grid." # set up mesh and function space mesh = UnitSquareMesh(J,J) V = FunctionSpace(mesh,'Lagrange',1) # set up FE grid x_grid = V.tabulate_dof_coordinates() # set up boundary condition def boundary(x, on_boundary): return on_boundary bc = DirichletBC(V, 0.0, boundary) # get the boundary and interior dofs bc_dofs = bc.get_boundary_values().keys() first, last = V.dofmap().ownership_range() all_dofs = range(last - first) interior_dofs = list(set(all_dofs) - set(bc_dofs)) bc_dofs = list(set(bc_dofs)) # set up the function κ κ = Constant(1.0) # get the mass and stiffness matrices as sparse csr_matrices u = TrialFunction(V) v = TestFunction(V) mass_form = u*v*dx a = inner(κ*grad(u),grad(v))*dx M = assemble(mass_form) A = assemble(a) M = as_backend_type(M).mat() A = as_backend_type(A).mat() M = csr_matrix(M.getValuesCSR()[::-1],shape=M.size) A = csr_matrix(A.getValuesCSR()[::-1],shape=A.size) # extract the submatrices corresponding to the interior dofs M = M[interior_dofs,:][:,interior_dofs] A = A[interior_dofs,:][:,interior_dofs] # get the forcing cov matrix on the interior nodes of the grid Σ_int = kernMat(k_f,x_grid[interior_dofs],parallel,translation_inv) # form the matrix Q in the defintion of the approximate FEM cov mat # Note: overwrite Σ_int for memory efficiency. # Σ_int = M @ Σ_int @ M.T Σ_int = Σ_int @ M.T Σ_int = M @ Σ_int Σ_int = spsolve(A,Σ_int) Σ_int = spsolve(A,Σ_int.T).T # ensure Σ_int is symmetric Σ_int = 0.5*(Σ_int + Σ_int.T) # get big phi matrix on the grid (extracting only the rows corresponding to the # interior dofs) Phi = BigPhiMat(J,grid)[interior_dofs,:] #print("Computed Phi") # assemble cov mat on grid using Phi and Σ_int Σ = Phi.T @ Σ_int @ Phi # ensure Σ is symmetric and return Σ = 0.5*(Σ + Σ.T) return Σ
_____no_output_____
Apache-2.0
01_twoDim.ipynb
YanniPapandreou/statFEM
`cov_assembler` takes in several arguments which are explained below:

- `J`: controls the FE mesh size ($h=1/J$)
- `k_f`: the covariance function for the forcing $f$
- `grid`: the reference grid on which the FEM cov matrix should be computed
- `parallel`: boolean argument indicating whether the intermediate computation of $C_f$ should be done in parallel
- `translation_inv`: boolean argument indicating whether the intermediate computation of $C_f$ should be carried out assuming `k_f` is translation invariant or not

As a quick demonstration that the code is working, we will assemble the statFEM cov matrix for a relatively coarse grid.
# set up kernel function for forcing
f_bar = Constant(1.0)
l_f = 0.4
σ_f = 0.1
def k_f(x):
    return (σ_f**2)*np.exp(-(x**2)/(2*(l_f**2)))

# set up grid
n = 21
x_range = np.linspace(0,1,n)
grid = np.array([[x,y] for x in x_range for y in x_range])

# get the statFEM cov matrix for a particular choice of J
J = 10
Σ = cov_assembler(J,k_f,grid,False,True)
_____no_output_____
Apache-2.0
01_twoDim.ipynb
YanniPapandreou/statFEM
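Before plotting, a quick hedged sanity check (not in the original notebook) on the matrix `Σ` just assembled: it is symmetrised inside `cov_assembler`, and it should be positive semi-definite up to round-off.

```python
assert np.allclose(Σ, Σ.T)
eigvals = np.linalg.eigvalsh(Σ)
print('smallest eigenvalue: %e' % eigvals.min())  # may be slightly negative due to round-off
```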
Let's plot a heatmap of the statFEM cov matrix:
#hide_input
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib import cm

sns.heatmap(Σ, cbar=True, annot=False, xticklabels=False, yticklabels=False, cmap=cm.viridis)
plt.title('Heat map of statFEM covariance matrix')
plt.show()
_____no_output_____
Apache-2.0
01_twoDim.ipynb
YanniPapandreou/statFEM
> Note: The banded structure in the above statFEM covariance matrix is due to the internal ordering of the FE grid in FEniCS.

statFEM posterior mean

The statFEM posterior from incorporating sensor readings has the same form as given in oneDim. We will thus require code very similar to that of the 1-D case. We start by creating a function `m_post` which evaluates the posterior mean at a given point.
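For reference, the conditioning formula implemented by `m_post` below (the same form stated in oneDim) is

$$m_{\text{post}}(x) = m(x) - \mathbf{c}(x)^{T}B^{-1}\left(\mathbf{m}_{Y}-\mathbf{v}\right), \qquad B = \epsilon^{2}I + C_{Y},$$

where $\mathbf{m}_{Y}$ is the prior mean evaluated at the sensor locations $Y$, $\mathbf{v}$ is the vector of noisy sensor readings, and $\mathbf{c}(x)$ has entries $c(x,y_{i})$ for the sensor locations $y_{i}$.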
#export def m_post(x,m,c,v,Y,B): "This function evaluates the posterior mean at the point $x$." m_vect = np.array([m(y_i) for y_i in Y]).flatten() c_vect = c(x).flatten() # compute the update term update = c_vect @ np.linalg.solve(B,m_vect-v) # return m_post return (m(x) - update)
_____no_output_____
Apache-2.0
01_twoDim.ipynb
YanniPapandreou/statFEM
`m_post` takes in several arguments which are explained below:

- `x`: point where the posterior mean will be evaluated
- `m`: function which computes the prior mean at a given point y
- `c`: function which returns the vector (c(x,y)) for y in Y (note: c is the prior covariance function)
- `v`: vector of noisy sensor readings
- `Y`: vector of sensor locations
- `B`: the matrix $\epsilon^{2}I+C_Y$ to be inverted in order to obtain the posterior

We now require code to generate samples from a GP with mean $m$ and cov function $k$ on a grid. We write the function `sample_gp` for this purpose.
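The sampling scheme used by `sample_gp` below is the standard Cholesky construction, with a small jitter $\tau$ (the `tol` argument) added for numerical stability:

$$\Sigma + \tau I = GG^{T}, \qquad \mathbf{u} = \boldsymbol{\mu} + G\mathbf{z}, \quad \mathbf{z}\sim\mathcal{N}(0,I),$$

so that $\mathbb{E}[\mathbf{u}]=\boldsymbol{\mu}$ and $\operatorname{Cov}(\mathbf{u})=GG^{T}=\Sigma+\tau I\approx\Sigma$.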
#export def sample_gp(n_sim,m,k,grid,par,trans,tol=1e-9): "Function to sample a GP with mean $m$ and cov $k$ on a grid." # get length of grid d = len(grid) # construct mean vector μ = np.array([m(x) for x in grid]).reshape(d,1) # construct covariance matrix Σ = kernMat(k,grid,parallel=par,translation_inv=trans) # construct the cholesky decomposition Σ = GG^T # we add a small diagonal perturbation to Σ to ensure it # strictly positive definite G = np.linalg.cholesky(Σ + tol * np.eye(d)) # draw iid standard normal random vectors Z = np.random.normal(size=(d,n_sim)) # construct samples from GP(m,k) Y = G@Z + np.tile(μ,n_sim) # return the sampled fields return Y
_____no_output_____
Apache-2.0
01_twoDim.ipynb
YanniPapandreou/statFEM
`sample_gp` takes in several arguments which are explained below:

- `n_sim`: number of trajectories to be sampled
- `m`: mean function for the GP
- `k`: cov function for the GP
- `grid`: grid of points on which to sample the GP
- `par`: boolean argument indicating whether the computation of the cov matrix should be done in parallel
- `trans`: boolean argument indicating whether the computation of the cov matrix should be carried out assuming `k` is translation invariant or not
- `tol`: controls the size of the tiny diagonal perturbation added to the cov matrix to ensure it is strictly positive definite (defaults to `1e-9`)

As a quick demonstration that the code is working, let's generate 2 realisations of white noise using the kernel `k` from one of the previous tests and plot these random fields side by side.
#hide_input n = 41 x_range = np.linspace(0,1,n) grid = np.array([[x,y] for x in x_range for y in x_range]) # set up mean def m(x): return 0.0 np.random.seed(23534) samples = sample_gp(2,m,k,grid,True,False) sample_1 = samples[:,0].flatten() sample_2 = samples[:,1].flatten() vmin = min(sample_1.min(),sample_2.min()) vmax = max(sample_1.max(),sample_2.max()) cmap = cm.jet norm = colors.Normalize(vmin=vmin,vmax=vmax) x = grid[:,0].flatten() y = grid[:,1].flatten() triang = tri.Triangulation(x,y) plt.rcParams['figure.figsize'] = (12,6) fig, axs = plt.subplots(ncols=3, gridspec_kw=dict(width_ratios=[4,4,0.2])) axs[0].tricontourf(triang,sample_1.flatten(),cmap=cmap) axs[1].tricontourf(triang,sample_2.flatten(),cmap=cmap) cb = colorbar.ColorbarBase(axs[2],cmap=cmap,norm=norm) fig.suptitle('Realisations of white-noise fields') plt.show()
_____no_output_____
Apache-2.0
01_twoDim.ipynb
YanniPapandreou/statFEM
Let's also quickly generate 2 realisations for the kernel `k_f` above:
#hide_input np.random.seed(534) samples = sample_gp(2,m,k_f,grid,False,True) sample_1 = samples[:,0].flatten() sample_2 = samples[:,1].flatten() vmin = min(sample_1.min(),sample_2.min()) vmax = max(sample_1.max(),sample_2.max()) cmap = cm.jet norm = colors.Normalize(vmin=vmin,vmax=vmax) x = grid[:,0].flatten() y = grid[:,1].flatten() triang = tri.Triangulation(x,y) plt.rcParams['figure.figsize'] = (12,6) fig, axs = plt.subplots(ncols=3, gridspec_kw=dict(width_ratios=[4,4,0.2])) axs[0].tricontourf(triang,sample_1.flatten(),cmap=cmap) axs[1].tricontourf(triang,sample_2.flatten(),cmap=cmap) cb = colorbar.ColorbarBase(axs[2],cmap=cmap,norm=norm) fig.suptitle(r'Realisations of random fields with covariance $k_f$') plt.show()
_____no_output_____
Apache-2.0
01_twoDim.ipynb
YanniPapandreou/statFEM
We next need code to generate noisy sensor readings from our system. We write the function `gen_sensor` for this purpose.
#export def gen_sensor(ϵ,m,k,Y,J,par,trans,tol=1e-9,require=False): "Function to generate noisy sensor readings of the solution u on a sensor grid Y." # get number of sensors from the sensor grid Y s = len(Y) # create FEM space and grid mesh = UnitSquareMesh(J,J) V = FunctionSpace(mesh,'Lagrange',1) grid = V.tabulate_dof_coordinates() # sample a single f on the grid f_sim = sample_gp(1,m,k,grid,par=par,trans=trans,tol=tol) # set up a FEM function for this realisation f = Function(V) f.vector().set_local(f_sim.flatten()) # use FENICS to find the corresponding solution u # set up boundary condition def boundary(x, on_boundary): return on_boundary bc = DirichletBC(V, 0.0, boundary) # set up the function κ κ = Constant(1.0) # set up the bilinear form for the variational problem u = TrialFunction(V) v = TestFunction(V) a = inner(κ*grad(u),grad(v))*dx # set up the linear form L = f*v*dx # solve the variational problem u_sol = Function(V) solve(a == L, u_sol, bc) # get solution on grid Y: u_Y = np.array([u_sol(y_i) for y_i in Y]) # add N(0,ϵ^2) to each evaluation point u_S = u_Y + ϵ*np.random.normal(size=s) if require: return u_S, f_sim, u_sol else: return u_S
_____no_output_____
Apache-2.0
01_twoDim.ipynb
YanniPapandreou/statFEM
`gen_sensor` takes in several arguments which are explained below:

- `ϵ`: controls the amount of sensor noise
- `m`: mean function for the forcing f
- `k`: cov function for the forcing f
- `Y`: vector of sensor locations
- `J`: controls the FE mesh size ($h=1/J$)
- `par`: boolean argument indicating whether the computation of the forcing cov matrix should be done in parallel
- `trans`: boolean argument indicating whether the computation of the forcing cov matrix should be carried out assuming `k` is translation invariant or not
- `tol`: controls the size of the tiny diagonal perturbation added to the forcing cov matrix to ensure it is strictly positive definite (defaults to `1e-9`)
- `require`: boolean argument indicating whether or not to also return the realisation of the forcing `f_sim` and the FEniCS solution `u_sol` (defaults to `False`)

> Warning: Since we do not have access to the true solution, we must use FEniCS to get the solution for our system. Thus, one must choose a fine enough mesh (i.e. a large enough `J`) in `gen_sensor` above to ensure we get realistic noisy sensor readings.

Let's demonstrate that this code is working by generating $s=25$ sensor observations with the sensors equally spaced in the domain $D$.
# set up mean function for forcing
def m_f(x):
    return 1.0

# set up sensor grid and sensor noise level
ϵ = 0.2
s = 25
s_sqrt = int(np.round(np.sqrt(s)))
Y_range = np.linspace(0.01,0.99,s_sqrt)
Y = np.array([[x,y] for x in Y_range for y in Y_range])
J_fine = 100 # FE mesh size to compute the solution on

# generate the sensor observations
np.random.seed(235)
v_dat = gen_sensor(ϵ,m_f,k_f,Y,J_fine,False,True)

#export
class MyExpression(UserExpression):
    "Class to allow users to use their own functions to create a FEniCS UserExpression."
    def eval(self, value, x):
        value[0] = self.f(x)
    def value_shape(self):
        return ()

show_doc(MyExpression,title_level=4)
_____no_output_____
Apache-2.0
01_twoDim.ipynb
YanniPapandreou/statFEM
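A hedged usage sketch of `MyExpression` (not in the original notebook): the function `g` below is purely illustrative, and the `degree` keyword is the usual `UserExpression` argument controlling the assumed interpolation degree.

```python
# wrap an arbitrary Python callable as a FEniCS expression and interpolate it
g = lambda x: 1.0 + x[0]*x[1]             # hypothetical function of a 2-D point
expr = MyExpression(degree=2)
expr.f = g                                # MyExpression.eval calls self.f(x)
V_demo = FunctionSpace(UnitSquareMesh(8, 8), 'Lagrange', 1)
g_h = interpolate(expr, V_demo)           # FE interpolant of g
print(g_h(np.array([0.5, 0.5])))          # ≈ 1.25
```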
We now require code to create the matrix $C_{Y,h}$ and the function $\mathbf{c}^{(h)}$ needed for the statFEM posterior mean. We will create the function `fem_cov_assembler_post` for this purpose.
#export def fem_cov_assembler_post(J,k_f,Y,parallel,translation_inv): "Function to create the matrix $C_{Y,h}$ and the vector function $c^{(h)}$ required for the statFEM posterior mean." # set up mesh and function space mesh = UnitSquareMesh(J,J) V = FunctionSpace(mesh,'Lagrange',1) tree = mesh.bounding_box_tree() # set up grid x_grid = V.tabulate_dof_coordinates() # set up boundary condition def boundary(x, on_boundary): return on_boundary bc = DirichletBC(V, 0.0, boundary) # get the boundary and interior dofs bc_dofs = bc.get_boundary_values().keys() first, last = V.dofmap().ownership_range() all_dofs = range(last - first) interior_dofs = list(set(all_dofs) - set(bc_dofs)) bc_dofs = list(set(bc_dofs)) # set up the function κ κ = Constant(1.0) # get the mass and stiffness matrices u = TrialFunction(V) v = TestFunction(V) mass_form = u*v*dx a = inner(κ*grad(u),grad(v))*dx M = assemble(mass_form) A = assemble(a) M = as_backend_type(M).mat() A = as_backend_type(A).mat() M = csr_matrix(M.getValuesCSR()[::-1],shape=M.size) A = csr_matrix(A.getValuesCSR()[::-1],shape=A.size) # extract the submatrices corresponding to the interior dofs M = M[interior_dofs,:][:,interior_dofs] A = A[interior_dofs,:][:,interior_dofs] # get the forcing cov matrix on the interior nodes of the grid Σ_int = kernMat(k_f,x_grid[interior_dofs],parallel,translation_inv) # form the matrix Q in the defintion of the approximate FEM cov mat # Note: overwrite Σ_int for memory efficiency Σ_int = M @ Σ_int @ M.T Σ_int = spsolve(A,Σ_int) Σ_int = spsolve(A,Σ_int.T).T # ensure Σ_int is symmetric Σ_int = 0.5*(Σ_int + Σ_int.T) # get big phi matrix on the sensor grid (only need the interior dofs) Phi = BigPhiMat(J,Y)[interior_dofs,:] # assemble the FEM cov mat on the sensor grid and ensure it is symmetric Σ_s = Phi.T @ Σ_int @ Phi Σ_s = 0.5*(Σ_s + Σ_s.T) # set up function to yield the vector (c(x,y)) for y in Y def Φ(x): cell_index = tree.compute_first_entity_collision(Point(*x)) cell_global_dofs = V.dofmap().cell_dofs(cell_index) cell = Cell(mesh, cell_index) vertex_coordinates = cell.get_vertex_coordinates() cell_orientation = cell.orientation() data = V.element().evaluate_basis_all(x,vertex_coordinates,cell_orientation) col = np.zeros_like(cell_global_dofs) res = csr_matrix((data,(cell_global_dofs,col)),shape=(V.dim(),1))[interior_dofs,:] return res def c_fem(x): return Φ(x).T @ Σ_int @ Phi #return Σ_s and c_fem return Σ_s, c_fem
_____no_output_____
Apache-2.0
01_twoDim.ipynb
YanniPapandreou/statFEM
`fem_cov_assembler_post` takes in several arguments which are explained below:

- `J`: controls the FE mesh size ($h=1/J$)
- `k_f`: the covariance function for the forcing $f$
- `Y`: vector of sensor locations
- `parallel`: boolean argument indicating whether the computation of the forcing cov mat should be done in parallel
- `translation_inv`: boolean argument indicating whether the computation of the forcing cov mat should be carried out assuming `k_f` is translation invariant or not

With all of this code in place, we can now finally write the function `m_post_fem_assembler` which will assemble the statFEM posterior mean function.
#export def m_post_fem_assembler(J,f_bar,k_f,ϵ,Y,v_dat,par=False,trans=True): "Function to assemble the statFEM posterior mean function." # get number of sensors s = len(Y) # set up mesh and function space mesh = UnitSquareMesh(J,J) V = FunctionSpace(mesh,'Lagrange',1) # set up boundary condition def boundary(x, on_boundary): return on_boundary bc = DirichletBC(V, 0.0, boundary) # set up the functions κ and f κ = Constant(1.0) f = f_bar # set up the bilinear form for the variational problem u = TrialFunction(V) v = TestFunction(V) a = inner(κ*grad(u),grad(v))*dx # set up linear form L = f*v*dx # solve the variational problem μ_fem = Function(V) solve(a == L, μ_fem, bc) # use fem_cov_assembler_post to obtain cov mat on sensor grid and function to compute vector # (c(x,y)) for y in Y C_fem_s, c_fem = fem_cov_assembler_post(J,k_f,Y,parallel=par,translation_inv=trans) # form B_fem_s by adding noise contribution C_fem_s += (ϵ**2)*np.eye(s) # assemble function to compute posterior mean and return def m_post_fem(x): return m_post(x,μ_fem,c_fem,v_dat,Y,C_fem_s) return m_post_fem
_____no_output_____
Apache-2.0
01_twoDim.ipynb
YanniPapandreou/statFEM
`m_post_fem_assembler` takes in several arguments which are explained below:

- `J`: controls the FE mesh size ($h=1/J$)
- `f_bar`: the mean function for the forcing $f$
- `k_f`: the covariance function for the forcing $f$
- `ϵ`: controls the amount of sensor noise
- `Y`: vector of sensor locations
- `v_dat`: vector of noisy sensor observations
- `par`: boolean argument passed to `fem_cov_assembler_post`'s argument `parallel` (defaults to `False`)
- `trans`: boolean argument passed to `fem_cov_assembler_post`'s argument `translation_inv` (defaults to `True`)

Let's quickly check that this function is working.
J = 20 f_bar = Constant(1.0) m_post_fem = m_post_fem_assembler(J,f_bar,k_f,ϵ,Y,v_dat) # compute posterior mean at a location x in D x = np.array([0.3,0.1]) m_post_fem(x)
_____no_output_____
Apache-2.0
01_twoDim.ipynb
YanniPapandreou/statFEM
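A slightly larger hedged usage sketch (not in the original notebook): since `m_post_fem` can be evaluated at any point of $D$, the posterior mean can be tabulated on a coarse grid of interior points, e.g. for later plotting.

```python
# tabulate the statFEM posterior mean on a coarse grid of interior points
xs = np.linspace(0.1, 0.9, 9)
post_mean_vals = np.array([[float(m_post_fem(np.array([xx, yy]))) for xx in xs]
                           for yy in xs])
print(post_mean_vals.shape)  # (9, 9)
```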
statFEM posterior covariance

The form of the statFEM posterior covariance remains the same as given in oneDim. Thus, we require code very similar to that of the 1-D case. We start by creating a function `c_post` which evaluates the posterior covariance at a given pair of points.
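For reference, the conditioning formula implemented by `c_post` below is

$$c_{\text{post}}(x,y) = c(x,y) - \mathbf{c}(x)^{T}B^{-1}\mathbf{c}(y), \qquad B = \epsilon^{2}I + C_{Y},$$

where $\mathbf{c}(x)$ has entries $c(x,y_{i})$ for the sensor locations $y_{i}$.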
#export def c_post(x,y,c,Y,B): "This function evaluates the posterior covariance at $(x,y)$" # compute vectors c_x and c_y: c_x = np.array([c(x,y_i) for y_i in Y]) c_y = np.array([c(y_i,y) for y_i in Y]) # compute update term update = c_x @ np.linalg.solve(B,c_y) # return c_post return (c(x,y) - update)
_____no_output_____
Apache-2.0
01_twoDim.ipynb
YanniPapandreou/statFEM
`c_post` takes in several arguments which are explained below:

- `x`,`y`: points to evaluate the covariance at
- `c`: function which returns the prior covariance at any given pair $(x,y)$
- `Y`: vector of sensor locations
- `B`: the matrix $\epsilon^{2}I+C_{Y}$ to be inverted in order to obtain the posterior

To compare the statFEM covariance matrices for finer and finer FE mesh sizes, we will require some more code. First we create a function `post_fem_cov_assembler` which helps us to quickly assemble the statFEM posterior covariance matrix as explained in oneDim.
#export def post_fem_cov_assembler(J,k_f,grid,Y,parallel,translation_inv): "Function which assembles the matrices $Σ_X$ ,$Σ_{XY}$, and $Σ_Y$ required for the statFEM posterior covariance." # set up mesh and function space mesh = UnitSquareMesh(J,J) V = FunctionSpace(mesh,'Lagrange',1) # set up grid x_grid = V.tabulate_dof_coordinates() # set up boundary condition def boundary(x, on_boundary): return on_boundary bc = DirichletBC(V, 0.0, boundary) # get the boundary and interior dofs bc_dofs = bc.get_boundary_values().keys() first, last = V.dofmap().ownership_range() all_dofs = range(last - first) interior_dofs = list(set(all_dofs) - set(bc_dofs)) bc_dofs = list(set(bc_dofs)) # set up the function κ κ = Constant(1.0) # get the mass and stiffness matrices u = TrialFunction(V) v = TestFunction(V) mass_form = u*v*dx a = inner(κ*grad(u),grad(v))*dx M = assemble(mass_form) A = assemble(a) M = as_backend_type(M).mat() A = as_backend_type(A).mat() M = csr_matrix(M.getValuesCSR()[::-1],shape=M.size) A = csr_matrix(A.getValuesCSR()[::-1],shape=A.size) # extract the submatrices corresponding to the interior dofs M = M[interior_dofs,:][:,interior_dofs] A = A[interior_dofs,:][:,interior_dofs] # get the forcing cov matrix on the interior nodes of the grid Σ_int = kernMat(k_f,x_grid[interior_dofs],parallel,translation_inv) # form the matrix Q in the defintion of the approximate FEM cov mat # Note: overwrite Σ_int for memory efficiency Σ_int = M @ Σ_int @ M.T Σ_int = spsolve(A,Σ_int) Σ_int = spsolve(A,Σ_int.T).T # ensure Σ_int is symmetric Σ_int = 0.5*(Σ_int + Σ_int.T) # get big phi matrix on the grid (only need the interior nodes) Phi_grid = BigPhiMat(J,grid)[interior_dofs,:] # get big phi matrix on the sensor grid (only need the interior nodes) Phi_Y = BigPhiMat(J,Y)[interior_dofs,:] # assemble the FEM cov mat on the sensor grid using Σ_int and Phi_Y Σ_Y = Phi_Y.T @ Σ_int @ Phi_Y # assemble the FEM cov mat on the grid using Σ_int and Phi_grid Σ_X = Phi_grid.T @ Σ_int @ Phi_grid # assemble cross term matrix (with ijth entry c(x_i,y_j)) Σ_XY = Phi_grid.T @ Σ_int @ Phi_Y # return these sigma matrices return Σ_Y, Σ_X, Σ_XY
_____no_output_____
Apache-2.0
01_twoDim.ipynb
YanniPapandreou/statFEM
`post_fem_cov_assembler` takes in several arguments which are explained below:

- `J`: controls the FE mesh size ($h=1/J$)
- `k_f`: the covariance function for the forcing $f$
- `grid`: the fixed reference grid $\{x_{i}\}_{i=1}^{N}$ on which to assemble the posterior cov mat
- `Y`: vector of sensor locations
- `parallel`: boolean argument indicating whether the computation of the forcing cov mat should be done in parallel
- `translation_inv`: boolean argument indicating whether the computation of the forcing cov mat should be carried out assuming `k_f` is translation invariant or not

Finally, we create the function `c_post_fem_assembler` which assembles the statFEM posterior cov mat on the reference grid using the matrices `post_fem_cov_assembler` returns.
#export def c_post_fem_assembler(J,k_f,grid,Y,ϵ,par,trans): "Function to assemble the statFEM posterior cov mat on a reference grid specified by grid." # use post_fem_cov_assembler to get the sigma matrices needed for posterior cov mat Σ_Y, Σ_X, Σ_XY = post_fem_cov_assembler(J,k_f,grid,Y,parallel=par,translation_inv=trans) # create the matrix B (store in Σ_Y for memory efficiency) s = len(Y) # number of sensor points Σ_Y += (ϵ**2)*np.eye(s) #form the posterior cov matrix (store in Σ_X for memory efficiency) Σ_X -= Σ_XY @ np.linalg.solve(Σ_Y,Σ_XY.T) return Σ_X #hide from nbdev.export import notebook2script; notebook2script()
Converted 00_oneDim.ipynb. Converted 01_twoDim.ipynb. Converted index.ipynb. Converted oneDim_prior_results.ipynb.
Apache-2.0
01_twoDim.ipynb
YanniPapandreou/statFEM
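To close this notebook off, a hedged usage sketch (not in the original) that ties the pieces together, reusing `J`, `k_f`, `grid`, `Y` and `ϵ` from the cells above:

```python
# assemble the statFEM posterior covariance on the reference grid
Σ_post = c_post_fem_assembler(J, k_f, grid, Y, ϵ, par=False, trans=True)
print(Σ_post.shape)                     # (len(grid), len(grid))
print(np.abs(Σ_post - Σ_post.T).max())  # should be ~0 (symmetric up to round-off)
```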
Linear regression

**Temas Selectos de Modelación Numérica**, Facultad de Ciencias, UNAM, Semestre 2021-2

In this notebook we will learn how to perform a linear regression using the least-squares method and using the matrix method. Don't forget to solve the homework exercises at the end of the notebook. Submit your solution as a notebook in the Classroom folder, named `apellido_nombre_tarea05.ipynb`.
import numpy as np import matplotlib.pyplot as plt %matplotlib inline
_____no_output_____
Apache-2.0
otros/06_reg_lineal.ipynb
anakarinarm/TallerModNum
1. Least squares

Fitting straight lines of the form $y=mx+b$ by least squares. The idea behind this method is that we want to find the slope $m$ and the intercept $b$ that give the line minimizing the sum of the squared distances between the points (say, the data) and the fitted line:

![min_cuadrados](min_cuadrados.png)

That is, when we add up the square of all the distances from the points to the line (blue lines), the value we obtain should be as small as possible (this kind of problem is called an optimization problem). All sorts of functions can be fitted with this method, not just straight lines, but for the particular case of a line the slope and intercept of the line that minimizes the squared distances are computed as:

\begin{align}\tag{1}m =\frac{N \sum(x_iy_i) - \sum x_i\sum y_i}{N \sum(x_i^2) - (\sum x_i)^2} \label{eq1}\\\end{align}

\begin{align}b = \frac{\sum y_i - m \sum x_i}{N} \tag{2}\end{align}

where $N$ is the number of measurements or points, $x_i$, $y_i$ are the measurements, and the sums ($\sum$) run over all measurements.

**Careful**: Because it is not necessary to plot the data in order to perform a least-squares fit, it is easy to fall into serious errors, such as trying to fit a straight line to a set of measurements whose relationship is not linear. That is why **it is very important to plot** the data and make sure that the relationship between the variables is linear before applying the least-squares method.

Following the equations above, let us define a function `reg_lineal` that computes the slope and intercept of the line that best fits the "data" using the least-squares method:
def reg_lineal(X,Y):
    '''Compute the slope and intercept of the line y=mx+b by least squares
    from the measurement vectors X and Y.
    Input: X - 1D numpy array
           Y - 1D numpy array of the same size as X.
    Output: m, b : scalars, the slope and the intercept.
    '''
    N = len(X)           # number of values in the vector X
    sum_xy = np.sum(X*Y) # sum of all Xi*Yi
    sum_x = np.sum(X)    # sum of all X
    sum_y = np.sum(Y)    # sum of all Y
    sum_x2 = np.sum(X**2) # sum of all X^2
    m = ((N*sum_xy) - (sum_x*sum_y)) / ((N*sum_x2) - (sum_x**2))
    b = (sum_y - (m*sum_x)) / N
    return(m, b)
_____no_output_____
Apache-2.0
otros/06_reg_lineal.ipynb
anakarinarm/TallerModNum
Let's test our function that computes the linear regression. To do so, we generate a vector X and a vector f(X)=Y as follows:
X = np.linspace(1,10,10) Y = 1 + 2*X + 1*np.random.randn(1) # f(x)=y=1+2x+d plt.plot(X,Y,'o') plt.xlabel('x') plt.ylabel('y=f(x)') plt.show()
_____no_output_____
Apache-2.0
otros/06_reg_lineal.ipynb
anakarinarm/TallerModNum
Now we can test the `reg_lineal` function using X and Y:
m, b = reg_lineal(X,Y)
print('The slope m is %f and the intercept b is %f' %(m,b))

Y2 = m*X+b
plt.plot(X,Y,'o', label='Y')
plt.plot(X,Y2,'-',label='regression')
plt.xlabel('x')
plt.ylabel('y=f(x)')
plt.legend()
plt.show()
The slope m is 2.000000 and the intercept b is 2.756253
Apache-2.0
otros/06_reg_lineal.ipynb
anakarinarm/TallerModNum
2. Matrix method

(Note: the material in this section was taken from the blog [cmdlinetips](https://cmdlinetips.com/2020/03/linear-regression-using-matrix-multiplication-in-python-using-numpy/).)

We can also perform linear regressions using the matrix method. Recall that in a linear regression we want to fit our data, observations, etc. using the linear model

$$y=\beta_0+\beta_1X+\epsilon$$

and estimate the model parameters $\beta_0$ and $\beta_1$, which are the intercept and the slope, respectively.

We can combine the "predictor variables", in this case X, into a matrix that has a column vector full of ones (which multiplies $\beta_0$) and X (which multiplies $\beta_1$):
X_mat = np.vstack((np.ones(len(X)), X)).T # here we use the vstack function
                                          # and T (transpose) to get the
                                          # appropriate dimensions
_____no_output_____
Apache-2.0
otros/06_reg_lineal.ipynb
anakarinarm/TallerModNum
With a bit of linear algebra, and with the goal of minimizing the mean squared error of the system of linear equations, we find that the parameter values $\hat{\beta}=(\beta_0, \beta_1)$ can be computed as:

$$\hat{\beta}=(X^T X)^{-1} X^T Y$$

We can implement this equation using the matrix-inverse and matrix-multiplication functions from numpy's linear algebra module `linalg`:
beta = np.linalg.inv(X_mat.T.dot(X_mat)).dot(X_mat.T).dot(Y)
print('The slope beta_1 is %f and the intercept beta_0 is %f' %(beta[1], beta[0]))
The slope beta_1 is 2.000000 and the intercept beta_0 is 2.756253
Apache-2.0
otros/06_reg_lineal.ipynb
anakarinarm/TallerModNum
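As an aside (not part of the original notebook), the same least-squares problem can be solved without forming the matrix inverse explicitly, which is numerically preferable when $X^T X$ is ill-conditioned; `np.linalg.lstsq` should reproduce the coefficients found above up to floating-point error:

```python
beta_lstsq, *_ = np.linalg.lstsq(X_mat, Y, rcond=None)
print(beta_lstsq)  # should match beta computed above
```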
These are the same values for the slope and intercept that we found using the `reg_lineal` function. Now let's use these parameters to estimate the values of Y:
Y_mat = X_mat.dot(beta)
plt.plot(X,Y,'o', label='Y')
plt.plot(X,Y_mat,'-',label='matrix-method regression')
plt.xlabel('x')
plt.ylabel('y=f(x)')
plt.legend()
plt.show()
_____no_output_____
Apache-2.0
otros/06_reg_lineal.ipynb
anakarinarm/TallerModNum
3. Example using both methods
X = np.linspace(1,10)
Y = 1 + 2*X + 3*X**2 + 20*np.random.randn(1)

# Matrix method - now we have another term, X^2
X_mat = np.vstack((np.ones(len(X)), X, X**2)).T
beta = np.linalg.inv(X_mat.T.dot(X_mat)).dot(X_mat.T).dot(Y)
Y_mat = X_mat.dot(beta)

# reg_lineal function
m, b = reg_lineal(X,Y)
Y2 = m*X+b

plt.plot(X,Y,'o',label='data')
plt.plot(X,Y_mat, label='Matrix method')
plt.plot(X,Y2, label='Least squares - straight line')
plt.xlabel('X')
plt.ylabel('Y')
plt.legend()
plt.show()
_____no_output_____
Apache-2.0
otros/06_reg_lineal.ipynb
anakarinarm/TallerModNum
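As a hedged cross-check of the quadratic example above (not in the original notebook), `np.polyfit` fits the same polynomial by least squares and should return the same coefficients, listed from the highest degree down:

```python
coeffs = np.polyfit(X, Y, 2)
print(coeffs)       # ≈ [3, 2, constant term], i.e. beta in reverse order
print(beta[::-1])
```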
Neuromatch Academy: Week 3, Day 4, Tutorial 1

Deep Learning: Decoding Neural Responses

**Content creators**: Jorge A. Menendez, Carsen Stringer

**Content reviewers**: Roozbeh Farhoodi, Madineh Sarvestani, Kshitij Dwivedi, Spiros Chavlis, Ella Batty, Michael Waskom

--- Tutorial Objectives

In this tutorial, we'll use deep learning to decode stimulus information from the responses of sensory neurons. Specifically, we'll look at the activity of ~20,000 neurons in mouse primary visual cortex responding to oriented gratings recorded in [this study](https://www.biorxiv.org/content/10.1101/679324v2.abstract). Our task will be to decode the orientation of the presented stimulus from the responses of the whole population of neurons. We could do this in a number of ways, but here we'll use deep learning. Deep learning is particularly well-suited to this problem for a number of reasons:

* The data are very high-dimensional: the neural response to a stimulus is a ~20,000 dimensional vector. Many machine learning techniques fail in such high dimensions, but deep learning actually thrives in this regime, as long as you have enough data (which we do here!).
* As you'll be able to see below, different neurons can respond quite differently to stimuli. This complex pattern of responses will, therefore, require non-linear methods to be decoded, which we can easily do with non-linear activation functions in deep networks.
* Deep learning architectures are highly flexible, meaning we can easily adapt the architecture of our decoding model to optimize decoding. Here, we'll focus on a single architecture, but you'll see that it can easily be modified with few changes to the code.

More concretely, our goal will be to learn how to:

* Build a deep feed-forward network using PyTorch
* Evaluate the network's outputs using PyTorch built-in loss functions
* Compute gradients of the loss with respect to each parameter of the network using automatic differentiation
* Implement gradient descent to optimize the network's parameters

This tutorial will take up the first full session (equivalent to two tutorials on other days).
#@title Video 1: Decoding from neural data using feed-forward networks in pytorch from IPython.display import YouTubeVideo video = YouTubeVideo(id="SlrbMvvBOzM", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video
_____no_output_____
CC-BY-4.0
tutorials/W3D4_DeepLearning1/student/W3D4_Tutorial1.ipynb
florgf88/course-content
--- Setup
import os import numpy as np import torch from torch import nn from torch import optim import matplotlib as mpl from matplotlib import pyplot as plt #@title Data retrieval and loading import hashlib import requests fname = "W3D4_stringer_oribinned1.npz" url = "https://osf.io/683xc/download" expected_md5 = "436599dfd8ebe6019f066c38aed20580" if not os.path.isfile(fname): try: r = requests.get(url) except requests.ConnectionError: print("!!! Failed to download data !!!") else: if r.status_code != requests.codes.ok: print("!!! Failed to download data !!!") elif hashlib.md5(r.content).hexdigest() != expected_md5: print("!!! Data download appears corrupted !!!") else: with open(fname, "wb") as fid: fid.write(r.content) #@title Figure Settings %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") #@title Helper Functions def load_data(data_name=fname, bin_width=1): """Load mouse V1 data from Stringer et al. (2019) Data from study reported in this preprint: https://www.biorxiv.org/content/10.1101/679324v2.abstract These data comprise time-averaged responses of ~20,000 neurons to ~4,000 stimulus gratings of different orientations, recorded through Calcium imaginge. The responses have been normalized by spontanous levels of activity and then z-scored over stimuli, so expect negative numbers. They have also been binned and averaged to each degree of orientation. This function returns the relevant data (neural responses and stimulus orientations) in a torch.Tensor of data type torch.float32 in order to match the default data type for nn.Parameters in Google Colab. This function will actually average responses to stimuli with orientations falling within bins specified by the bin_width argument. This helps produce individual neural "responses" with smoother and more interpretable tuning curves. Args: bin_width (float): size of stimulus bins over which to average neural responses Returns: resp (torch.Tensor): n_stimuli x n_neurons matrix of neural responses, each row contains the responses of each neuron to a given stimulus. As mentioned above, neural "response" is actually an average over responses to stimuli with similar angles falling within specified bins. stimuli: (torch.Tensor): n_stimuli x 1 column vector with orientation of each stimulus, in degrees. This is actually the mean orientation of all stimuli in each bin. 
""" with np.load(data_name) as dobj: data = dict(**dobj) resp = data['resp'] stimuli = data['stimuli'] if bin_width > 1: # Bin neural responses and stimuli bins = np.digitize(stimuli, np.arange(0, 360 + bin_width, bin_width)) stimuli_binned = np.array([stimuli[bins == i].mean() for i in np.unique(bins)]) resp_binned = np.array([resp[bins == i, :].mean(0) for i in np.unique(bins)]) else: resp_binned = resp stimuli_binned = stimuli # Return as torch.Tensor resp_tensor = torch.tensor(resp_binned, dtype=torch.float32) stimuli_tensor = torch.tensor(stimuli_binned, dtype=torch.float32).unsqueeze(1) # add singleton dimension to make a column vector return resp_tensor, stimuli_tensor def plot_data_matrix(X, ax): """Visualize data matrix of neural responses using a heatmap Args: X (torch.Tensor or np.ndarray): matrix of neural responses to visualize with a heatmap ax (matplotlib axes): where to plot """ cax = ax.imshow(X, cmap=mpl.cm.pink, vmin=np.percentile(X, 1), vmax=np.percentile(X, 99)) cbar = plt.colorbar(cax, ax=ax, label='normalized neural response') ax.set_aspect('auto') ax.set_xticks([]) ax.set_yticks([]) def identityLine(): """ Plot the identity line y=x """ ax = plt.gca() lims = np.array([ax.get_xlim(), ax.get_ylim()]) minval = lims[:, 0].min() maxval = lims[:, 1].max() equal_lims = [minval, maxval] ax.set_xlim(equal_lims) ax.set_ylim(equal_lims) line = ax.plot([minval, maxval], [minval, maxval], color="0.7") line[0].set_zorder(-1) def get_data(n_stim, train_data, train_labels): """ Return n_stim randomly drawn stimuli/resp pairs Args: n_stim (scalar): number of stimuli to draw resp (torch.Tensor): train_data (torch.Tensor): n_train x n_neurons tensor with neural responses to train on train_labels (torch.Tensor): n_train x 1 tensor with orientations of the stimuli corresponding to each row of train_data, in radians Returns: (torch.Tensor, torch.Tensor): n_stim x n_neurons tensor of neural responses and n_stim x 1 of orientations respectively """ n_stimuli = train_labels.shape[0] istim = np.random.choice(n_stimuli, n_stim) r = train_data[istim] # neural responses to this stimulus ori = train_labels[istim] # true stimulus orientation return r, ori def stimulus_class(ori, n_classes): """Get stimulus class from stimulus orientation Args: ori (torch.Tensor): orientations of stimuli to return classes for n_classes (int): total number of classes Returns: torch.Tensor: 1D tensor with the classes for each stimulus """ bins = np.linspace(0, 360, n_classes + 1) return torch.tensor(np.digitize(ori.squeeze(), bins)) - 1 # minus 1 to accomodate Python indexing def plot_decoded_results(train_loss, test_labels, predicted_test_labels): """ Plot decoding results in the form of network training loss and test predictions Args: train_loss (list): training error over iterations test_labels (torch.Tensor): n_test x 1 tensor with orientations of the stimuli corresponding to each row of train_data, in radians predicted_test_labels (torch.Tensor): n_test x 1 tensor with predicted orientations of the stimuli from decoding neural network """ # Plot results fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 6)) # Plot the training loss over iterations of GD ax1.plot(train_loss) # Plot true stimulus orientation vs. 
predicted class ax2.plot(stimuli_test.squeeze(), predicted_test_labels, '.') ax1.set_xlim([0, None]) ax1.set_ylim([0, None]) ax1.set_xlabel('iterations of gradient descent') ax1.set_ylabel('negative log likelihood') ax2.set_xlabel('true stimulus orientation ($^o$)') ax2.set_ylabel('decoded orientation bin') ax2.set_xticks(np.linspace(0, 360, n_classes + 1)) ax2.set_yticks(np.arange(n_classes)) class_bins = [f'{i * 360 / n_classes: .0f}$^o$ - {(i + 1) * 360 / n_classes: .0f}$^o$' for i in range(n_classes)] ax2.set_yticklabels(class_bins); # Draw bin edges as vertical lines ax2.set_ylim(ax2.get_ylim()) # fix y-axis limits for i in range(n_classes): lower = i * 360 / n_classes upper = (i + 1) * 360 / n_classes ax2.plot([lower, lower], ax2.get_ylim(), '-', color="0.7", linewidth=1, zorder=-1) ax2.plot([upper, upper], ax2.get_ylim(), '-', color="0.7", linewidth=1, zorder=-1) plt.tight_layout()
_____no_output_____
CC-BY-4.0
tutorials/W3D4_DeepLearning1/student/W3D4_Tutorial1.ipynb
florgf88/course-content
--- Section 1: Load and visualize data

In the next cell, we have provided code to load the data and plot the matrix of neural responses. Next to it, we plot the tuning curves of three randomly selected neurons.
#@title #@markdown Execute this cell to load and visualize data # Load data resp_all, stimuli_all = load_data() # argument to this function specifies bin width n_stimuli, n_neurons = resp_all.shape print(f'{n_neurons} neurons in response to {n_stimuli} stimuli') fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(2 * 6, 5)) # Visualize data matrix plot_data_matrix(resp_all[:100, :].T, ax1) # plot responses of first 100 neurons ax1.set_xlabel('stimulus') ax1.set_ylabel('neuron') # Plot tuning curves of three random neurons ineurons = np.random.choice(n_neurons, 3, replace=False) # pick three random neurons ax2.plot(stimuli_all, resp_all[:, ineurons]) ax2.set_xlabel('stimulus orientation ($^o$)') ax2.set_ylabel('neural response') ax2.set_xticks(np.linspace(0, 360, 5)) plt.tight_layout()
_____no_output_____
CC-BY-4.0
tutorials/W3D4_DeepLearning1/student/W3D4_Tutorial1.ipynb
florgf88/course-content
We will split our data into a training set and test set. In particular, we will have a training set of orientations (`stimuli_train`) and the corresponding responses (`resp_train`). Our testing set will have held-out orientations (`stimuli_test`) and the corresponding responses (`resp_test`).
#@title #@markdown Execute this cell to split into training and test sets # Set random seeds for reproducibility np.random.seed(4) torch.manual_seed(4) # Split data into training set and testing set n_train = int(0.6 * n_stimuli) # use 60% of all data for training set ishuffle = torch.randperm(n_stimuli) itrain = ishuffle[:n_train] # indices of data samples to include in training set itest = ishuffle[n_train:] # indices of data samples to include in testing set stimuli_test = stimuli_all[itest] resp_test = resp_all[itest] stimuli_train = stimuli_all[itrain] resp_train = resp_all[itrain]
_____no_output_____
CC-BY-4.0
tutorials/W3D4_DeepLearning1/student/W3D4_Tutorial1.ipynb
florgf88/course-content
--- Section 2: Deep feed-forward networks in *PyTorch*

We'll now build a simple deep neural network that takes as input a vector of neural responses and outputs a single number representing the decoded stimulus orientation. To keep things simple, we'll build a deep network with **one** hidden layer. See the appendix for a deeper discussion of what this choice entails, and when one might want to use deeper/shallower and wider/narrower architectures.

Let $\mathbf{r}^{(n)} = \begin{bmatrix} r_1^{(n)} & r_2^{(n)} & \ldots & r_N^{(n)} \end{bmatrix}^T$ denote the vector of neural responses (of neurons $1, \ldots, N$) to the $n$th stimulus. The network we will use is described by the following set of equations:

\begin{align} \mathbf{h}^{(n)} &= \mathbf{W}^{in} \mathbf{r}^{(n)} + \mathbf{b}^{in}, && [\mathbf{W}^{in}: M \times N], \\ y^{(n)} &= \mathbf{W}^{out} \mathbf{h}^{(n)} + \mathbf{b}^{out}, && [\mathbf{W}^{out}: 1 \times M],\end{align}

where $y^{(n)}$ denotes the scalar output of the network: the decoded orientation of the $n$th stimulus. The $M$-dimensional vector $\mathbf{h}^{(n)}$ denotes the activations of the **hidden layer** of the network. The blue components of this diagram denote the **parameters** of the network, which we will later optimize with gradient descent. These include all the weights and biases $\mathbf{W}^{in}, \mathbf{b}^{in}, \mathbf{W}^{out}, \mathbf{b}^{out}$.

Section 2.1: Introduction to PyTorch

Here, we'll use the **PyTorch** package to build, run, and train deep networks of this form in Python. There are two core components to the PyTorch package:

1. The first is the `torch.Tensor` data type used in PyTorch. `torch.Tensor`'s are effectively just like `numpy` arrays, except that they have some important attributes and methods needed for automatic differentiation (to be discussed below). They also come along with infrastructure for easily storing and computing with them on GPUs, a capability we won't touch on here but which can be really useful in practice.

2. The second core ingredient is the PyTorch `nn.Module` class. This is the class we'll use for constructing deep networks, so that we can then easily train them using built-in PyTorch functions. Keep in mind that `nn.Module` classes can actually be used to build, run, and train any model -- not just deep networks!

The next cell contains code for building the deep network we defined above using the `nn.Module` class. It contains three key ingredients:

* `__init__()` method to initialize its parameters, like in any other Python class. In this case, it takes two arguments:
  * `n_inputs`: the number of input units. This should always be set to the number of neurons whose activities are being decoded (i.e. the dimensionality of the input to the network).
  * `n_hidden`: the number of hidden units. This is a parameter that we are free to vary in deciding how to build our network. See the appendix for a discussion of how this architectural choice affects the computations the network can perform.

* `nn.Linear` modules, which are built-in PyTorch classes containing all the weights and biases for a given network layer (documentation [here](https://pytorch.org/docs/master/generated/torch.nn.Linear.html)). This class takes two arguments to initialize:
  * number of inputs to that layer
  * number of outputs from that layer

  For the input layer, for example, we have:
  * number of inputs = number of neurons whose responses are to be decoded ($N$, specified by `n_inputs`)
  * number of outputs = number of hidden layer units ($M$, specified by `n_hidden`)

  PyTorch will initialize all weights and biases randomly.

* `forward()` method, which takes as argument an input to the network and returns the network output. In our case, this comprises computing the output $y$ from a given input $\mathbf{r}$ using the above two equations. See the next cell for code implementing this computation using the built-in PyTorch `nn.Linear` classes.
class DeepNet(nn.Module): """Deep Network with one hidden layer Args: n_inputs (int): number of input units n_hidden (int): number of units in hidden layer Attributes: in_layer (nn.Linear): weights and biases of input layer out_layer (nn.Linear): weights and biases of output layer """ def __init__(self, n_inputs, n_hidden): super().__init__() # needed to invoke the properties of the parent class nn.Module self.in_layer = nn.Linear(n_inputs, n_hidden) # neural activity --> hidden units self.out_layer = nn.Linear(n_hidden, 1) # hidden units --> output def forward(self, r): """Decode stimulus orientation from neural responses Args: r (torch.Tensor): vector of neural responses to decode, must be of length n_inputs. Can also be a tensor of shape n_stimuli x n_inputs, containing n_stimuli vectors of neural responses Returns: torch.Tensor: network outputs for each input provided in r. If r is a vector, then y is a 1D tensor of length 1. If r is a 2D tensor then y is a 2D tensor of shape n_stimuli x 1. """ h = self.in_layer(r) # hidden representation y = self.out_layer(h) return y
_____no_output_____
CC-BY-4.0
tutorials/W3D4_DeepLearning1/student/W3D4_Tutorial1.ipynb
florgf88/course-content
The next cell contains code for initializing and running this network. We use it to decode stimulus orientation from a vector of neural responses to a single stimulus drawn from the dataset. Note that when the initialized network class is called as a function on an input (e.g. `net(r)`), its `.forward()` method is called. This is a special property of the `nn.Module` class.

Note that the decoded orientations at this point will be nonsense, since the network has been initialized with random weights. Below, we'll learn how to optimize these weights for good stimulus decoding.
# Set random seeds for reproducibility np.random.seed(1) torch.manual_seed(1) # Initialize a deep network with M=200 hidden units net = DeepNet(n_neurons, 200) # Get neural responses (r) to and orientation (ori) to one stimulus in dataset r, ori = get_data(1, resp_train, stimuli_train) # using helper function get_data # Decode orientation from these neural responses using initialized network out = net(r) # compute output from network, equivalent to net.forward(r) print('decoded orientation: %.2f degrees' % out) print('true orientation: %.2f degrees' % ori)
_____no_output_____
CC-BY-4.0
tutorials/W3D4_DeepLearning1/student/W3D4_Tutorial1.ipynb
florgf88/course-content
--- Section 2.2: Activation functions
#@title Video 2: Nonlinear activation functions from IPython.display import YouTubeVideo video = YouTubeVideo(id="JAdukDCQALA", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video
_____no_output_____
CC-BY-4.0
tutorials/W3D4_DeepLearning1/student/W3D4_Tutorial1.ipynb
florgf88/course-content
Note that the deep network we constructed above comprises solely **linear** operations on each layer: each layer is just a weighted sum of the elements in the previous layer. It turns out that linear hidden layers like this aren't particularly useful, since a sequence of linear transformations is actually essentially the same as a single linear transformation. We can see this from the above equations by plugging in the first one into the second one to obtain\begin{equation} y^{(n)} = \mathbf{W}^{out} \left( \mathbf{W}^{in} \mathbf{r}^{(n)} + \mathbf{b}^{in} \right) + \mathbf{b}^{out} = \mathbf{W}^{out}\mathbf{W}^{in} \mathbf{r}^{(n)} + \left( \mathbf{W}^{out}\mathbf{b}^{in} + \mathbf{b}^{out} \right)\end{equation}In other words, the output is still just a weighted sum of elements in the input -- the hidden layer has done nothing to change this.To extend the set of computable input/output transformations to more than just weighted sums, we'll incorporate a **non-linear activation function** in the hidden units. This is done by simply modifying the equation for the hidden layer activations to be\begin{equation} \mathbf{h}^{(n)} = \phi(\mathbf{W}^{in} \mathbf{r}^{(n)} + \mathbf{b}^{in})\end{equation}where $\phi$ is referred to as the activation function. Using a non-linear activation function will ensure that the hidden layer performs a non-linear transformation of the input, which will make our network much more powerful (or *expressive*, cf. appendix). In practice, deep networks *always* use non-linear activation functions. Exercise 1: Nonlinear Activations Create a new class `DeepNetReLU` by modifying our above deep network model to use a non-linear activation function. We'll use the linear rectification function:\begin{equation} \phi(x) = \begin{cases} x & \text{if } x > 0 \\ 0 & \text{else} \end{cases}\end{equation}which can be implemented in PyTorch using `torch.relu()`. Hidden layers with this activation function are typically referred to as "**Re**ctified **L**inear **U**nits", or **ReLU**'s.Initialize this network with 20 hidden units and run on an example stimulus.**Hint**: you only need to modify the `forward()` method of the above `DeepNet()` class.
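Before the exercise code below, a small sketch (not part of the original tutorial) that verifies numerically the point made above: stacking two `nn.Linear` layers without a nonlinearity collapses to a single linear map.

```python
import torch
from torch import nn

torch.manual_seed(0)
lin1 = nn.Linear(5, 3)   # r -> h, no activation
lin2 = nn.Linear(3, 1)   # h -> y
with torch.no_grad():
    W_eff = lin2.weight @ lin1.weight            # 1 x 5 effective weight matrix
    b_eff = lin2.weight @ lin1.bias + lin2.bias  # effective bias
    r = torch.randn(5)
    print(torch.allclose(lin2(lin1(r)), W_eff @ r + b_eff, atol=1e-6))  # True
```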
class DeepNetReLU(nn.Module): def __init__(self, n_inputs, n_hidden): super().__init__() # needed to invoke the properties of the parent class nn.Module self.in_layer = nn.Linear(n_inputs, n_hidden) # neural activity --> hidden units self.out_layer = nn.Linear(n_hidden, 1) # hidden units --> output def forward(self, r): ############################################################################ ## TO DO for students: write code for computing network output using a ## rectified linear activation function for the hidden units # Fill out function and remove raise NotImplementedError("Student exercise: complete DeepNetReLU forward") ############################################################################ h = ... y = ... return y # Set random seeds for reproducibility np.random.seed(1) torch.manual_seed(1) # Get neural responses (r) to and orientation (ori) to one stimulus in dataset r, ori = get_data(1, resp_train, stimuli_train) # Uncomment to test your class # Initialize deep network with M=20 hidden units and uncomment lines below # net = DeepNetReLU(...) # Decode orientation from these neural responses using initialized network # net(r) is equivalent to net.forward(r) # out = net(r) # print('decoded orientation: %.2f degrees' % out) # print('true orientation: %.2f degrees' % ori)
_____no_output_____
CC-BY-4.0
tutorials/W3D4_DeepLearning1/student/W3D4_Tutorial1.ipynb
florgf88/course-content
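Before checking the official solution linked below, you may want to compare your attempt against a rough sketch of the same idea. The class name and toy sizes here are ours, and the details may differ from the official solution; the key change is wrapping the hidden-layer computation in `torch.relu()`:
```
import torch
from torch import nn

class DeepNetReLUSketch(nn.Module):
    """Illustrative two-layer network with a ReLU hidden layer."""

    def __init__(self, n_inputs, n_hidden):
        super().__init__()
        self.in_layer = nn.Linear(n_inputs, n_hidden)
        self.out_layer = nn.Linear(n_hidden, 1)

    def forward(self, r):
        h = torch.relu(self.in_layer(r))  # phi(W_in r + b_in), with phi = ReLU
        return self.out_layer(h)          # W_out h + b_out

# Toy usage with made-up sizes
print(DeepNetReLUSketch(n_inputs=30, n_hidden=20)(torch.rand(1, 30)))
```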
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D4_DeepLearning1/solutions/W3D4_Tutorial1_Solution_5bdc2033.py) You should see that the decoded orientation is 0.13 $^{\circ}$ while the true orientation is 139.00 $^{\circ}$. --- Section 3: Loss functions and gradient descent
#@title Video 3: Loss functions & gradient descent from IPython.display import YouTubeVideo video = YouTubeVideo(id="aEtKpzEuviw", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video
_____no_output_____
CC-BY-4.0
tutorials/W3D4_DeepLearning1/student/W3D4_Tutorial1.ipynb
florgf88/course-content
Section 3.1: Loss functionsBecause the weights of the network are currently randomly chosen, the outputs of the network are nonsense: the decoded stimulus orientation is nowhere close to the true stimulus orientation. We'll shortly write some code to change these weights so that the network does a better job of decoding.But to do so, we first need to define what we mean by "better". One simple way of defining this is to use the squared error\begin{equation} L = (y - \tilde{y})^2\end{equation}where $y$ is the network output and $\tilde{y}$ is the true stimulus orientation. When the decoded stimulus orientation is far from the true stimulus orientation, $L$ will be large. We thus refer to $L$ as the **loss function**, as it quantifies how *bad* the network is at decoding stimulus orientation.PyTorch actually carries with it a number of built-in loss functions. The one corresponding to the squared error is called `nn.MSELoss()`. This will take as arguments a **batch** of network outputs $y_1, y_2, \ldots, y_P$ and corresponding target outputs $\tilde{y}_1, \tilde{y}_2, \ldots, \tilde{y}_P$, and compute the **mean squared error (MSE)**\begin{equation} L = \frac{1}{P}\sum_{n=1}^P \left(y^{(n)} - \tilde{y}^{(n)}\right)^2\end{equation} Exercise 2: Computing MSE Evaluate the mean squared error for a deep network with $M=20$ rectified linear units, on the decoded orientations from neural responses to 20 random stimuli.
# Set random seeds for reproducibility np.random.seed(1) torch.manual_seed(1) # Initialize a deep network with M=20 hidden units net = DeepNetReLU(n_neurons, 20) # Get neural responses to first 20 stimuli in the data set r, ori = get_data(20, resp_train, stimuli_train) # Decode orientation from these neural responses out = net(r) ################################################### ## TO DO for students: evaluate mean squared error ################################################### # Initialize PyTorch mean squared error loss function (Hint: look at nn.MSELoss) loss_fn = ... # Evaluate mean squared error loss = ... # Uncomment once above is filled in # print('mean squared error: %.2f' % loss)
_____no_output_____
CC-BY-4.0
tutorials/W3D4_DeepLearning1/student/W3D4_Tutorial1.ipynb
florgf88/course-content
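If the mechanics of `nn.MSELoss` are unclear, here is a toy illustration on made-up numbers that are unrelated to the dataset above:
```
import torch
from torch import nn

loss_fn = nn.MSELoss()
decoded = torch.tensor([[10.0], [200.0], [350.0]])  # pretend network outputs
true = torch.tensor([[30.0], [180.0], [300.0]])     # pretend true orientations
print(loss_fn(decoded, true).item())  # (20**2 + 20**2 + 50**2) / 3 = 1100.0
```
By default the built-in loss averages the squared errors over the batch, which matches the MSE formula above.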
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D4_DeepLearning1/solutions/W3D4_Tutorial1_Solution_0e539ef5.py) You should see a mean squared error of 42943.75. --- Section 3.2: Optimization with gradient descent

Our goal is now to modify the weights to make the mean squared error loss $L$ as small as possible over the whole data set. To do this, we'll use the **gradient descent (GD)** algorithm, which consists of iterating three steps:

1. **Evaluate the loss** on the training data,
```
out = net(train_data)
loss = loss_fn(out, train_labels)
```
where `train_data` are the network inputs in the training data (in our case, neural responses), and `train_labels` are the target outputs for each input (in our case, true stimulus orientations).

2. **Compute the gradient of the loss** with respect to each of the network weights. In PyTorch, we can do this with one line of code:
```
loss.backward()
```
This command tells PyTorch to compute the gradients of the quantity stored in the variable `loss` with respect to each network parameter using [automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation). These gradients are then stored behind the scenes (see appendix for more details).

3. **Update the network weights** by descending the gradient. In PyTorch, we can do this using built-in optimizers. We'll use the `optim.SGD` optimizer (documentation [here](https://pytorch.org/docs/stable/optim.html#torch.optim.SGD)), which updates parameters along the negative gradient, scaled by a learning rate (see appendix for details). To initialize this optimizer, we have to tell it
 * which parameters to update, and
 * what learning rate to use

For example, to optimize *all* the parameters of a network `net` using a learning rate of .001, the optimizer would be initialized as follows
```
optimizer = optim.SGD(net.parameters(), lr=.001)
```
where `.parameters()` is a method of the `nn.Module` class that returns a [Python generator object](https://wiki.python.org/moin/Generators) over all the parameters of that `nn.Module` class (in our case, $\mathbf{W}^{in}, \mathbf{b}^{in}, \mathbf{W}^{out}, \mathbf{b}^{out}$).

After computing all the parameter gradients in step 2, we can then update each of these parameters using the `.step()` method of this optimizer,
```
optimizer.step()
```
This single line of code will extract all the gradients computed with `.backward()` and execute the SGD updates for each parameter given to the optimizer. Note that this is true no matter how big/small the network is, allowing us to use the same two lines of code to perform the gradient descent updates for any deep network model built using PyTorch.

Finally, an important detail to remember is that the gradients of each parameter need to be cleared before calling `.backward()`, or else PyTorch will accumulate gradients across iterations. This can again be done using built-in optimizers via the method `zero_grad()`, as follows:
```
optimizer.zero_grad()
```

Putting all this together, each iteration of the GD algorithm will contain a block of code that looks something like this:
```
# Get outputs from network
# Evaluate loss

# Compute gradients
optimizer.zero_grad()  # clear gradients
loss.backward()

# Update weights
optimizer.step()
```
In the next exercise, we'll give you a code skeleton for implementing the GD algorithm. Your job will be to fill in the blanks. For the mathematical details of the GD algorithm, see the appendix.
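As a self-contained warm-up, entirely separate from the neural data and the `train()` function in the next exercise, the three steps above can be strung together to fit a trivial toy model. All of the data and settings below are made up for illustration:
```
import torch
from torch import nn, optim

# Toy problem: recover y = 2x with a single linear unit using full-batch gradient descent
torch.manual_seed(0)
x = torch.rand(100, 1)
y = 2 * x

model = nn.Linear(1, 1)
loss_fn = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.5)

for _ in range(300):
    out = model(x)            # 1. get outputs and evaluate the loss
    loss = loss_fn(out, y)
    optimizer.zero_grad()     # 2. clear old gradients, then compute new ones
    loss.backward()
    optimizer.step()          # 3. update the weights

print(model.weight.item())  # should end up close to 2.0
```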
Note, in particular, that here we are using the gradient descent algorithm, rather than the more commonly used *stochastic* gradient descent algorithm. See the appendix for a more detailed discussion of how these differ and when one might need to use the stochastic variant. Exercise 3: Gradient descent in PyTorch

Complete the function `train()` that uses the gradient descent algorithm to optimize the weights of a given network. This function takes as input arguments
* `net`: the PyTorch network whose weights to optimize
* `loss_fn`: the PyTorch loss function to use to evaluate the loss
* `train_data`: the training data to evaluate the loss on (i.e. neural responses to decode)
* `train_labels`: the target outputs for each data point in `train_data` (i.e. true stimulus orientations)

We will then train a neural network on our data and plot the loss (mean squared error) over time. When we run this function, behind the scenes PyTorch is actually changing the parameters inside this network to make the network better at decoding, so its weights will now be different than they were at initialization.

**Hint:** all the code you need for doing this is provided in the above description of the GD algorithm.
def train(net, loss_fn, train_data, train_labels, n_iter=50, learning_rate=1e-4): """Run gradient descent to opimize parameters of a given network Args: net (nn.Module): PyTorch network whose parameters to optimize loss_fn: built-in PyTorch loss function to minimize train_data (torch.Tensor): n_train x n_neurons tensor with neural responses to train on train_labels (torch.Tensor): n_train x 1 tensor with orientations of the stimuli corresponding to each row of train_data, in radians n_iter (int): number of iterations of gradient descent to run learning_rate (float): learning rate to use for gradient descent Returns: (list): training loss over iterations """ # Initialize PyTorch SGD optimizer optimizer = optim.SGD(net.parameters(), lr=learning_rate) # Placeholder to save the loss at each iteration track_loss = [] # Loop over epochs (cf. appendix) for i in range(n_iter): ###################################################################### ## TO DO for students: fill in missing code for GD iteration raise NotImplementedError("Student exercise: write code for GD iterations") ###################################################################### # Evaluate loss using loss_fn out = ... # compute network output from inputs in train_data loss = ... # evaluate loss function # Compute gradients ... # Update weights ... # Store current value of loss track_loss.append(loss.item()) # .item() needed to transform the tensor output of loss_fn to a scalar # Track progress if (i + 1) % (n_iter // 5) == 0: print(f'iteration {i + 1}/{n_iter} | loss: {loss.item():.3f}') return track_loss # Set random seeds for reproducibility np.random.seed(1) torch.manual_seed(1) # Initialize network net = DeepNetReLU(n_neurons, 20) # Initialize built-in PyTorch MSE loss function loss_fn = nn.MSELoss() # Run GD on data #train_loss = train(net, loss_fn, resp_train, stimuli_train) # Plot the training loss over iterations of GD #plt.plot(train_loss) plt.xlim([0, None]) plt.ylim([0, None]) plt.xlabel('iterations of gradient descent') plt.ylabel('mean squared error') plt.show()
_____no_output_____
CC-BY-4.0
tutorials/W3D4_DeepLearning1/student/W3D4_Tutorial1.ipynb
florgf88/course-content
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D4_DeepLearning1/solutions/W3D4_Tutorial1_Solution_8f827dbe.py)*Example output:* --- Section 4: Evaluating model performance Section 4.1: Generalization performance with test data

Note that gradient descent is essentially an algorithm for fitting the network's parameters to a given set of training data. Selecting this training data is thus crucial for ensuring that the optimized parameters **generalize** to unseen data they weren't trained on. In our case, for example, we want to make sure that our trained network is good at decoding stimulus orientations from neural responses to any orientation, not just those in our data set. To ensure this, we have split up the full data set into a **training set** and a **testing set**. In Exercise 3, we trained a deep network by optimizing the parameters on a training set. We will now evaluate how good the optimized parameters are by using the trained network to decode stimulus orientations from neural responses in the testing set. Good decoding performance on this testing set should then be indicative of good decoding performance on the neurons' responses to any other stimulus orientation. This procedure is commonly used in machine learning (not just in deep learning) and is typically referred to as **cross-validation**. We will compute the MSE on the test data and plot the decoded stimulus orientations as a function of the true stimulus orientations.
#@title #@markdown Execute this cell to evaluate and plot test error out = net(resp_test) # decode stimulus orientation for neural responses in testing set ori = stimuli_test # true stimulus orientations test_loss = loss_fn(out, ori) # MSE on testing set (Hint: use loss_fn initialized in previous exercise) plt.plot(ori, out.detach(), '.') # N.B. need to use .detach() to pass network output into plt.plot() identityLine() # draw the identity line y=x; deviations from this indicate bad decoding! plt.title('MSE on testing set: %.2f' % test_loss.item()) # N.B. need to use .item() to turn test_loss into a scalar plt.xlabel('true stimulus orientation ($^o$)') plt.ylabel('decoded stimulus orientation ($^o$)') axticks = np.linspace(0, 360, 5) plt.xticks(axticks) plt.yticks(axticks) plt.show()
_____no_output_____
CC-BY-4.0
tutorials/W3D4_DeepLearning1/student/W3D4_Tutorial1.ipynb
florgf88/course-content
**PyTorch Note**: An important thing to note in the code snippet for plotting the decoded orientations is the `.detach()` method. The PyTorch `nn.Module` class is special in that, behind the scenes, the variables inside it are linked to each other in a computational graph, for the purposes of automatic differentiation (the algorithm used in `.backward()` to compute gradients). As a result, if you want to do anything that is not a `torch` operation to the parameters or outputs of an `nn.Module` class, you'll need to first "detach" it from its computational graph. This is what the `.detach()` method does. In the hidden code above, we need to call it on the outputs of the network so that we can plot them with the `plt.plot()` function. --- (Bonus) Section 4.2: Model criticism

Please move to the Summary and visit this section only if you have time after completing all non-bonus material! Let's now take a step back and think about how our model is succeeding/failing and how to improve it.
#@title #@markdown Execute this cell to plot decoding error out = net(resp_test) # decode stimulus orientation for neural responses in testing set ori = stimuli_test # true stimulus orientations error = out - ori # decoding error plt.plot(ori, error.detach(), '.') # plot decoding error as a function of true orientation (make sure all arguments to plt.plot() have been detached from PyTorch network!) # Plotting plt.xlabel('true stimulus orientation ($^o$)') plt.ylabel('decoding error ($^o$)') plt.xticks(np.linspace(0, 360, 5)) plt.yticks(np.linspace(-360, 360, 9)) plt.show()
_____no_output_____
CC-BY-4.0
tutorials/W3D4_DeepLearning1/student/W3D4_Tutorial1.ipynb
florgf88/course-content
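To make the `.detach()` note above concrete, here is a minimal toy example that has nothing to do with the decoding network:
```
import torch

w = torch.tensor([2.0], requires_grad=True)  # a "parameter" tracked by autograd
y = w * 3                                    # y is part of the computational graph
# y.numpy() would raise an error because y still requires grad;
# detaching first gives a plain tensor that can be converted or plotted
print(y.detach().numpy())  # [6.]
```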
Think In the cell below, we will plot the *decoding error* for each neural response in the testing set. The decoding error is defined as the decoded stimulus orientation minus true stimulus orientation\begin{equation} \text{decoding error} = y^{(n)} - \tilde{y}^{(n)}\end{equation}In particular, we plot decoding error as a function of the true stimulus orientation. * Are some stimulus orientations harder to decode than others? * If so, in what sense? Are the decoded orientations for these stimuli more variable and/or are they biased? * Can you explain this variability/bias? What makes these stimulus orientations different from the others? * (Will be addressed in next exercise) Can you think of a way to modify the deep network in order to avoid this? [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D4_DeepLearning1/solutions/W3D4_Tutorial1_Solution_3ccf8501.py) (Advanced Bonus) Exercise 4: Improving the loss function As illustrated in the previous exercise, the squared error is not a good loss function for circular quantities like angles, since two angles that are very close (e.g. $1^o$ and $359^o$) might actually have a very large squared error.Here, we'll avoid this problem by changing our loss function to treat our decoding problem as a **classification problem**. Rather than estimating the *exact* angle of the stimulus, we'll now aim to construct a decoder that classifies the stimulus into one of $C$ classes, corresponding to different bins of angles of width $b = \frac{360}{C}$. The true class $\tilde{y}^{(n)}$ of stimulus $i$ is now given by\begin{equation} \tilde{y}^{(n)} = \begin{cases} 1 &\text{if angle of stimulus $n$ is in the range } [0, b] \\ 2 &\text{if angle of stimulus $n$ is in the range } [b, 2b] \\ 3 &\text{if angle of stimulus $n$ is in the range } [2b, 3b] \\ \vdots \\ C &\text{if angle of stimulus $n$ is in the range } [(C-1)b, 360] \end{cases}\end{equation}We have a helper function `stimulus_class` that will extract `n_classes` stimulus classes for us from the stimulus orientations. To decode the stimulus class from neural responses, we'll use a deep network that outputs a $C$-dimensional vector of probabilities $\mathbf{p} = \begin{bmatrix} p_1, p_2, \ldots, p_C \end{bmatrix}^T$, corresponding to the estimated probabilities of the stimulus belonging to each class $1, 2, \ldots, C$. To ensure the network's outputs are indeed probabilities (i.e. they are positive numbers between 0 and 1, and sum to 1), we'll use a [softmax function](https://en.wikipedia.org/wiki/Softmax_function) to transform the real-valued outputs from the hidden layer into probabilities. Letting $\sigma(\cdot)$ denote this softmax function, the equations describing our network are\begin{align} \mathbf{h}^{(n)} &= \phi(\mathbf{W}^{in} \mathbf{r}^{(n)} + \mathbf{b}^{in}), && [\mathbf{W}^{in}: M \times N], \\ \mathbf{p}^{(n)} &= \sigma(\mathbf{W}^{out} \mathbf{h}^{(n)} + \mathbf{b}^{out}), && [\mathbf{W}^{out}: C \times M],\end{align}The decoded stimulus class is then given by that assigned the highest probability by the network:\begin{equation} y^{(n)} = \underset{i}{\arg\max} \,\, p_i\end{equation}The softmax function can be implemented in PyTorch simply using `torch.softmax()`.Often *log* probabilities are easier to work with than actual probabilities, because probabilities tend to be very small numbers that computers have trouble representing. 
We'll therefore use the logarithm of the softmax as the output of our network,\begin{equation} \mathbf{l}^{(n)} = \log \left( \mathbf{p}^{(n)} \right)\end{equation}which can be implemented in PyTorch together with the softmax via an `nn.LogSoftmax` layer. The nice thing about the logarithmic function is that it's *monotonic*, so if one probability is larger/smaller than another, then its logarithm is also larger/smaller than the other's. We therefore have that\begin{equation} y^{(n)} = \underset{i}{\arg\max} \,\, p_i^{(n)} = \underset{i}{\arg\max} \, \log p_i^{(n)} = \underset{i}{\arg\max} \,\, l_i^{(n)}\end{equation}See the next cell for code constructing a deep network with one hidden layer of ReLUs that outputs a vector of log probabilities.
# Deep network for classification class DeepNetSoftmax(nn.Module): """Deep Network with one hidden layer, for classification Args: n_inputs (int): number of input units n_hidden (int): number of units in hidden layer n_classes (int): number of outputs, i.e. number of classes to output probabilities for Attributes: in_layer (nn.Linear): weights and biases of input layer out_layer (nn.Linear): weights and biases of output layer """ def __init__(self, n_inputs, n_hidden, n_classes): super().__init__() # needed to invoke the properties of the parent class nn.Module self.in_layer = nn.Linear(n_inputs, n_hidden) # neural activity --> hidden units self.out_layer = nn.Linear(n_hidden, n_classes) # hidden units --> outputs self.logprob = nn.LogSoftmax(dim=1) # probabilities across columns should sum to 1 (each output row corresponds to a different input) def forward(self, r): """Predict stimulus orientation bin from neural responses Args: r (torch.Tensor): n_stimuli x n_inputs tensor with neural responses to n_stimuli Returns: torch.Tensor: n_stimuli x n_classes tensor with predicted class probabilities """ h = torch.relu(self.in_layer(r)) logp = self.logprob(self.out_layer(h)) return logp
_____no_output_____
CC-BY-4.0
tutorials/W3D4_DeepLearning1/student/W3D4_Tutorial1.ipynb
florgf88/course-content
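The helper `stimulus_class` used in the next exercise comes from the tutorial's setup code and is not shown in this excerpt. Conceptually it just bins angles (in degrees) into class labels, roughly along the lines of this sketch; note that the real helper may differ in details such as using 1-based class labels, as in the equations above:
```
import torch

def stimulus_class_sketch(ori, n_classes):
    """Bin orientations in [0, 360) degrees into integer labels 0, ..., n_classes - 1."""
    bin_width = 360.0 / n_classes
    labels = (ori / bin_width).floor().clamp(max=n_classes - 1)  # guard the 360-degree edge
    return labels.long()

ori = torch.tensor([[5.0], [95.0], [359.0]])
print(stimulus_class_sketch(ori, 12))  # tensor([[ 0], [ 3], [11]])
```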
What should our loss function now be? Ideally, we want the probabilities outputted by our network to be such that the probability of the true stimulus class is high. One way to formalize this is to say that we want to maximize the *log* probability of the true stimulus class $\tilde{y}^{(n)}$ under the class probabilities predicted by the network,\begin{equation} \log \left( \text{predicted probability of stimulus } n \text{ being of class } \tilde{y}^{(n)} \right) = \log p^{(n)}_{\tilde{y}^{(n)}} = l^{(n)}_{\tilde{y}^{(n)}}\end{equation}To turn this into a loss function to be *minimized*, we can then simply multiply it by -1: maximizing the log probability is the same as minimizing the *negative* log probability. Summing over a batch of $P$ inputs, our loss function is then given by\begin{equation} L = -\sum_{n=1}^P \log p^{(n)}_{\tilde{y}^{(n)}} = -\sum_{n=1}^P l^{(n)}_{\tilde{y}^{(n)}}\end{equation}In the deep learning community, this loss function is typically referred to as the **cross-entropy**, or **negative log likelihood**. The corresponding built-in loss function in PyTorch is `nn.NLLLoss()` (documentation [here](https://pytorch.org/docs/master/generated/torch.nn.CrossEntropyLoss.html)).In the next cell, we've provided most of the code to train and test a network to decode stimulus orientations via classification, by minimizing the negative log likelihood. Fill in the missing pieces.Once you've done this, have a look at the plotted results. Does changing the loss function from mean squared error to a classification loss solve our problems? Note that errors may still occur -- but are these errors as bad as the ones that our network above was making?
def decode_orientation(n_classes, train_data, train_labels, test_data, test_labels): """ Initialize, train, and test deep network to decode binned orientation from neural responses Args: n_classes (scalar): number of classes in which to bin orientation train_data (torch.Tensor): n_train x n_neurons tensor with neural responses to train on train_labels (torch.Tensor): n_train x 1 tensor with orientations of the stimuli corresponding to each row of train_data, in radians test_data (torch.Tensor): n_test x n_neurons tensor with neural responses to train on test_labels (torch.Tensor): n_test x 1 tensor with orientations of the stimuli corresponding to each row of train_data, in radians Returns: (list, torch.Tensor): training loss over iterations, n_test x 1 tensor with predicted orientations of the stimuli from decoding neural network """ # Bin stimulus orientations in training set train_binned_labels = stimulus_class(train_labels, n_classes) ############################################################################## ## TODO for students: fill out missing pieces below to initialize, train, and # test network # Fill out function and remove raise NotImplementedError("Student exercise: complete decode_orientation function") ############################################################################## # Initialize network net = ... # use M=20 hidden units # Initialize built-in PyTorch MSE loss function loss_fn = nn.NLLLoss() # Run GD on training set data, using learning rate of 0.1 train_loss = ... # Decode neural responses in testing set data out = ... out_labels = np.argmax(out.detach(), axis=1) # predicted classes return train_loss, out_labels # Set random seeds for reproducibility np.random.seed(1) torch.manual_seed(1) n_classes = 12 # start with 12, then (bonus) try making this as big as possible! does decoding get worse? # Uncomment below to test your function # Initialize, train, and test network #train_loss, predicted_test_labels = decode_orientation(n_classes, resp_train, stimuli_train, resp_test, stimuli_test) # Plot results #plot_decoded_results(train_loss, stimuli_test, predicted_test_labels)
_____no_output_____
CC-BY-4.0
tutorials/W3D4_DeepLearning1/student/W3D4_Tutorial1.ipynb
florgf88/course-content
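As with the MSE exercise earlier, it can help to see the loss function in isolation before wiring it into the training loop. Below is a toy use of `nn.NLLLoss` on made-up log-probabilities, unrelated to the neural data; note that by default the built-in loss averages over the batch, which differs from the sum in the equation above only by a constant factor:
```
import torch
from torch import nn

logits = torch.tensor([[2.0, 0.5, 0.1],
                       [0.1, 0.2, 3.0]])  # 2 "stimuli", 3 classes
logp = nn.LogSoftmax(dim=1)(logits)       # log class probabilities
targets = torch.tensor([0, 2])            # true class index for each stimulus
loss_fn = nn.NLLLoss()
print(loss_fn(logp, targets).item())  # mean negative log-probability of the true classes
```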
Load data
from sensor import create_raw_data_file create_raw_data_file() # Read all data from parquet file data = pd.read_parquet("raw_data_all.parquet") # For simplicity, select sensor 3 data = data[data["sensor"] == "node_03"] # Replace 0-measurements with missing data.loc[data["Leq"] == 0, "Leq"] = None # For simplicity, downsample to 1 minute data = data.resample("1min").median() # Forward-fill missing values data = data.fillna(method="ffill") # Add some extra columns data["hour"] = data.index.hour data["dow"] = data.index.dayofweek data["workday"] = (data.index.dayofweek < 5).astype(int) data["doy"] = data.index.dayofyear data["week"] = data.index.week data["workhour"] = data["hour"].isin(range(6,21))*data["hour"] data.head() fig = px.line(data, y="Leq", title=f"Raw data resampled to {data.index.freq.n} minutes", color="week") fig.show() decomposed = sm.tsa.seasonal_decompose(data["Leq"], period=pd.Timedelta("24hours") // data.index.freq) fig = decomposed.plot() fig.set_size_inches(10,10) decomposed = sm.tsa.seasonal_decompose(data["Leq"], period=pd.Timedelta("1W") // data.index.freq) fig = decomposed.plot() fig.set_size_inches(10,10) # Split in training and test train = data[data["week"].isin([7, 8, 9, 10, 11])] test = data[data["week"].isin([12, 13, 14, 15])] train_test = pd.concat([train, test]) train_test.loc[train.index, "dataset"] = "train" train_test.loc[test.index, "dataset"] = "test"
_____no_output_____
CC-BY-4.0
martin/TimeSeriesModelling.ipynb
marhoy/Koopen
Linear model
model_formula = "C(dow) + C(hour):C(dow)" # week" # model_formula = "C(workday) + C(workhour):C(workday)" linmodel = smf.ols(formula=f"Leq ~ {model_formula}", data=train).fit() linmodel_resid = train_test["Leq"] - linmodel.predict(train_test) linmodel.summary() fig = make_subplots(rows=2, cols=1, shared_xaxes=True) fig.add_trace(go.Scatter(x=train.index, y=train["Leq"], name="Train"), row=1, col=1) fig.add_trace(go.Scatter(x=test.index, y=test["Leq"], name="Test"), row=1, col=1) fig.add_trace(go.Scatter(x=train_test.index, y=linmodel.predict(train_test), name="Model"), row=1, col=1) fig.add_trace(go.Scatter(x=train_test.index, y=linmodel_resid, name="Residual"), row=2, col=1) fig.update_layout( title="Linear model results", width=1000, height=800, hovermode="x" ) linmodel_resid.groupby(train_test["dataset"]).apply(rms)
_____no_output_____
CC-BY-4.0
martin/TimeSeriesModelling.ipynb
marhoy/Koopen
ARX model
fig, ax = plt.subplots(2, 1, figsize=(10, 10)) fig = sm.graphics.tsa.plot_acf(linmodel.resid, lags=30, ax=ax[0]) fig = sm.graphics.tsa.plot_pacf(linmodel.resid, lags=30, ax=ax[1]) exog_train = patsy.dmatrix(model_formula, train) exog_test = patsy.dmatrix(model_formula, test) lags = math.ceil(pd.Timedelta("4min") / train.index.freq) lags arxmodel = AutoReg(endog=train["Leq"], lags=lags, exog=exog_train).fit() arxmodel_pred = pd.concat([ arxmodel.predict(), arxmodel.predict(start=test.index[0], end=test.index[-1], exog_oos=exog_test) ]) arxmodel_resid = train_test["Leq"] - arxmodel_pred fig = make_subplots(rows=2, cols=1, shared_xaxes=True) fig.add_trace(go.Scatter(x=train.index, y=train["Leq"], name="Train"), row=1, col=1) fig.add_trace(go.Scatter(x=test.index, y=test["Leq"], name="Test"), row=1, col=1) fig.add_trace(go.Scatter(x=train_test.index, y=arxmodel_pred, name="Model"), row=1, col=1) fig.add_trace(go.Scatter(x=train_test.index, y=arxmodel_resid, name="Residual"), row=2, col=1) fig.update_layout( title="ARX model results", width=1000, height=800, hovermode="x" ) arxmodel_resid.groupby(train_test["dataset"]).apply(rms)
_____no_output_____
CC-BY-4.0
martin/TimeSeriesModelling.ipynb
marhoy/Koopen
Model comparison
fig = make_subplots(rows=1, cols=1, shared_xaxes=True) fig.add_trace(go.Scatter(x=train_test.index, y=train_test["Leq"], name="Measured"), row=1, col=1) fig.add_trace(go.Scatter(x=train_test.index, y=linmodel.predict(train_test), name="LinModel"), row=1, col=1) fig.add_trace(go.Scatter(x=train_test.index, y=arxmodel_pred, name="ARXModel"), row=1, col=1) fig.update_layout( title="Model comparison", hovermode="x" )
_____no_output_____
CC-BY-4.0
martin/TimeSeriesModelling.ipynb
marhoy/Koopen
ARX with dynamic forecasting
exog = patsy.dmatrix(model_formula, train_test) model = AutoReg(endog=train_test["Leq"], lags=lags, exog=exog).fit() forecast_period = pd.Timedelta("3hour") t = train_test.index[0] + forecast_period preds = pd.Series(dtype=float) while t < train_test.index[-1] - forecast_period: preds = preds.append(model.predict(start=t, end=t + forecast_period, dynamic=t)) t += forecast_period fig = make_subplots(rows=1, cols=1, shared_xaxes=True) fig.add_trace(go.Scatter(x=train_test.index, y=train_test["Leq"], name="Measured"), row=1, col=1) fig.add_trace(go.Scatter(x=preds.index, y=preds, name="ARX Forecast"), row=1, col=1) fig.update_layout( title="Dynamic ARX forecasting", hovermode="x" ) resid = train_test["Leq"] - preds rms(resid)
_____no_output_____
CC-BY-4.0
martin/TimeSeriesModelling.ipynb
marhoy/Koopen
Python - Writing Your First Python Code! Welcome! This notebook will teach you the basics of the Python programming language. Although the information presented here is quite basic, it is an important foundation that will help you read and write Python code. By the end of this notebook, you'll know the basics of Python, including how to write basic commands, understand some basic types, and how to perform simple operations on them. Table of Contents Say "Hello" to the world in Python What version of Python are we using? Writing comments in Python Errors in Python Does Python know about your error before it runs your code? Exercise: Your First Program Types of objects in Python Integers Floats Converting from one object type to a different object type Boolean data type Exercise: Types Expressions and Variables Expressions Exercise: Expressions Variables Exercise: Expression and Variables in Python Estimated time needed: 25 min Say "Hello" to the world in Python When learning a new programming language, it is customary to start with an "hello world" example. As simple as it is, this one line of code will ensure that we know how to print a string in output and how to execute code within cells in a notebook. [Tip]: To execute the Python code in the code cell below, click on the cell to select it and press Shift + Enter.
# Try your first Python output print('Hello, Python!')
Hello, Python!
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
After executing the cell above, you should see that Python prints Hello, Python!. Congratulations on running your first Python code! [Tip:] print() is a function. You passed the string 'Hello, Python!' as an argument to instruct Python on what to print. What version of Python are we using? There are two popular versions of the Python programming language in use today: Python 2 and Python 3. The Python community has decided to move on from Python 2 to Python 3, and many popular libraries have announced that they will no longer support Python 2. Since Python 3 is the future, in this course we will be using it exclusively. How do we know that our notebook is executed by a Python 3 runtime? We can look in the top-right hand corner of this notebook and see "Python 3". We can also ask directly Python and obtain a detailed answer. Try executing the following code:
# Check the Python Version import sys print(sys.version)
3.6.6 | packaged by conda-forge | (default, Oct 12 2018, 14:43:46) [GCC 7.3.0]
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
[Tip:] sys is a built-in module that contains many system-specific parameters and functions, including the Python version in use. Before using it, we must explicitly import it. Writing comments in Python In addition to writing code, note that it's always a good idea to add comments to your code. It will help others understand what you were trying to accomplish (the reason why you wrote a given snippet of code). Not only does this help other people understand your code, it can also serve as a reminder to you when you come back to it weeks or months later. To write comments in Python, use the number symbol (#) before writing your comment. When you run your code, Python will ignore everything past the # on a given line.
# Practice on writing comments print('Hello, Python!') # This line prints a string # print('Hi')
Hello, Python!
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
After executing the cell above, you should notice that `This line prints a string` did not appear in the output, because it was a comment (and thus ignored by Python). The second line was also not executed because print('Hi') was preceded by the number sign (#) as well! Since this isn't an explanatory comment from the programmer, but an actual line of code, we might say that the programmer commented out that second line of code.
# Print string as error message frint("Hello, Python!")
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
The error message tells you: where the error occurred (more useful in large notebook cells or scripts), and what kind of error it was (NameError) Here, Python attempted to run the function frint, but could not determine what frint is since it's not a built-in function and it has not been previously defined by us either. You'll notice that if we make a different type of mistake, by forgetting to close the string, we'll obtain a different error (i.e., a SyntaxError). Try it below:
# Try to see build in error message print("Hello, Python!)
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
Does Python know about your error before it runs your code? Python is what is called an interpreted language. Compiled languages examine your entire program at compile time, and are able to warn you about a whole class of errors prior to execution. In contrast, Python interprets your script line by line as it executes it. Python will stop executing the entire program when it encounters an error (unless the error is expected and handled by the programmer, a more advanced subject that we'll cover later on in this course). Try to run the code in the cell below and see what happens:
# Print string and error to see the running order print("This will be printed") frint("This will cause an error") print("This will NOT be printed")
This will be printed
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
Exercise: Your First Program Generations of programmers have started their coding careers by simply printing "Hello, world!". You will be following in their footsteps.In the code cell below, use the print() function to print out the phrase: Hello, world!
# Write your code below and press Shift+Enter to execute print("Hello, world!")
Hello, world!
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
Double-click __here__ for the solution.<!-- Your answer is below:print("Hello, world!")--> Now, let's enhance your code with a comment. In the code cell below, print out the phrase: Hello, world! and comment it with the phrase Print the traditional hello world all in one line of code.
# Write your code below and press Shift+Enter to execute print("Hello, world!") # Print the traditional hello world
Hello, world!
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
Double-click __here__ for the solution.<!-- Your answer is below:print("Hello, world!") Print the traditional hello world--> Types of objects in Python Python is an object-oriented language. There are many different types of objects in Python. Let's start with the most common object types: strings, integers and floats. Anytime you write words (text) in Python, you're using character strings (strings for short). The most common numbers, on the other hand, are integers (e.g. -1, 0, 100) and floats, which represent real numbers (e.g. 3.14, -42.0). The following code cells contain some examples.
# Integer 11 # Float 2.14 # String "Hello, Python 101!"
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
You can get Python to tell you the type of an expression by using the built-in type() function. You'll notice that Python refers to integers as int, floats as float, and character strings as str.
# Type of 12 type(12) # Type of 2.14 type(2.14) # Type of "Hello, Python 101!" type("Hello, Python 101!")
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
In the code cell below, use the type() function to check the object type of 12.0.
# Write your code below. Don't forget to press Shift+Enter to execute the cell type(12.0)
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
Double-click __here__ for the solution.<!-- Your answer is below:type(12.0)--> Integers Here are some examples of integers. Integers can be negative or positive numbers: We can verify this is the case by using, you guessed it, the type() function:
# Print the type of -1 type(-1) # Print the type of 4 type(4) # Print the type of 0 type(0)
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
Floats Floats represent real numbers; they are a superset of integer numbers but also include "numbers with decimals". There are some limitations when it comes to machines representing real numbers, but floating point numbers are a good representation in most cases. You can learn more about the specifics of floats for your runtime environment, by checking the value of sys.float_info. This will also tell you what's the largest and smallest number that can be represented with them.Once again, can test some examples with the type() function:
# Print the type of 1.0 type(1.0) # Notice that 1 is an int, and 1.0 is a float # Print the type of 0.5 type(0.5) # Print the type of 0.56 type(0.56) # System settings about float type sys.float_info
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
Converting from one object type to a different object type You can change the type of the object in Python; this is called typecasting. For example, you can convert an integer into a float (e.g. 2 to 2.0).Let's try it:
# Verify that this is an integer type(2)
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
Converting integers to floatsLet's cast integer 2 to float:
# Convert 2 to a float float(2) # Convert integer 2 to a float and check its type type(float(2))
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
When we convert an integer into a float, we don't really change the value (i.e., the significand) of the number. However, if we cast a float into an integer, we could potentially lose some information. For example, if we cast the float 1.1 to integer we will get 1 and lose the decimal information (i.e., 0.1):
# Casting 1.1 to integer will result in loss of information int(1.1)
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
Converting from strings to integers or floats Sometimes, we can have a string that contains a number within it. If this is the case, we can cast that string that represents a number into an integer using int():
# Convert a string into an integer int('1')
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
But if you try to do so with a string that is not a perfect match for a number, you'll get an error. Try the following:
# Convert a string into an integer with error int('1 or 2 people')
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
You can also convert strings containing floating point numbers into float objects:
# Convert the string "1.2" into a float float('1.2')
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
[Tip:] Note that strings can be represented with single quotes ('1.2') or double quotes ("1.2"), but you can't mix both (e.g., "1.2'). Converting numbers to strings If we can convert strings to numbers, it is only natural to assume that we can convert numbers to strings, right?
# Convert an integer to a string str(1)
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
And there is no reason why we shouldn't be able to make floats into strings as well:
# Convert a float to a string str(1.2)
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
Boolean data type Boolean is another important type in Python. An object of type Boolean can take on one of two values: True or False:
# Value true True
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
Notice that the value True has an uppercase "T". The same is true for False (i.e. you must use the uppercase "F").
# Value false False
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
When you ask Python to display the type of a boolean object it will show bool which stands for boolean:
# Type of True type(True) # Type of False type(False)
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
We can cast boolean objects to other data types. If we cast a boolean with a value of True to an integer or float we will get a one. If we cast a boolean with a value of False to an integer or float we will get a zero. Similarly, if we cast a 1 to a Boolean, you get a True. And if we cast a 0 to a Boolean we will get a False. Let's give it a try:
# Convert True to int int(True) # Convert 1 to boolean bool(1) # Convert 0 to boolean bool(0) # Convert True to float float(True)
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
Exercise: Types What is the data type of the result of: 6 / 2?
# Write your code below. Don't forget to press Shift+Enter to execute the cell type(6/2)
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
Double-click __here__ for the solution.<!-- Your answer is below:type(6/2) float--> What is the type of the result of: 6 // 2? (Note the double slash //.)
# Write your code below. Don't forget to press Shift+Enter to execute the cell type(6//2)
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
Double-click __here__ for the solution.<!-- Your answer is below:type(6//2) int, as the double slashes stand for integer division --> Expression and Variables Expressions Expressions in Python can include operations among compatible types (e.g., integers and floats). For example, basic arithmetic operations like adding multiple numbers:
# Addition operation expression 43 + 60 + 16 + 41
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
We can perform subtraction operations using the minus operator. In this case the result is a negative number:
# Subtraction operation expression 50 - 60
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
We can do multiplication using an asterisk:
# Multiplication operation expression 5 * 5
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
We can also perform division with the forward slash:
# Division operation expression 25 / 5 # Division operation expression 25 / 6
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
As seen in the quiz above, we can use the double slash for integer division, where the result is rounded down to the nearest integer (floor division):
# Integer division operation expression 25 // 5 # Integer division operation expression 25 // 6
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
Exercise: Expression Let's write an expression that calculates how many hours there are in 160 minutes:
# Write your code below. Don't forget to press Shift+Enter to execute the cell hours = 160 / 60 hours
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
Double-click __here__ for the solution.<!-- Your answer is below:160/60 Or 160//60--> Python follows well accepted mathematical conventions when evaluating mathematical expressions. In the following example, Python adds 30 to the result of the multiplication (i.e., 120).
# Mathematical expression 30 + 2 * 60
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
And just like mathematics, expressions enclosed in parentheses have priority. So the following multiplies 32 by 60.
# Mathematical expression (30 + 2) * 60
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
Variables Just like with most programming languages, we can store values in variables, so we can use them later on. For example:
# Store value into variable x = 43 + 60 + 16 + 41
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
To see the value of x in a Notebook, we can simply place it on the last line of a cell:
x # Print out the value in variable x
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
We can also perform operations on x and save the result to a new variable:
# Use another variable to store the result of the operation between variable and value y = x / 60 y
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
If we save a value to an existing variable, the new value will overwrite the previous value:
# Overwrite variable with new value x = x / 60 x
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
It's a good practice to use meaningful variable names, so you and others can read the code and understand it more easily:
# Name the variables meaningfully total_min = 43 + 42 + 57 # Total length of albums in minutes total_min # Name the variables meaningfully total_hours = total_min / 60 # Total length of albums in hours total_hours
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
In the cells above we added the length of three albums in minutes and stored it in total_min. We then divided it by 60 to calculate total length total_hours in hours. You can also do it all at once in a single expression, as long as you use parenthesis to add the albums length before you divide, as shown below.
# Complicated expression total_hours = (43 + 42 + 57) / 60 # Total hours in a single expression total_hours
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
If you'd rather have total hours as an integer, you can of course replace the floating point division with integer division (i.e., //). Exercise: Expression and Variables in Python What is the value of x where x = 3 + 2 * 2
# Write your code below. Don't forget to press Shift+Enter to execute the cell x = 3 + 2 * 2 print(x) print(x)
7 7
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
Double-click __here__ for the solution.<!-- Your answer is below:7--> What is the value of y where y = (3 + 2) * 2?
# Write your code below. Don't forget to press Shift+Enter to execute the cell y = (3 + 2) * 2 y
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
Double-click __here__ for the solution.<!-- Your answer is below:10--> What is the value of z where z = x + y?
# Write your code below. Don't forget to press Shift+Enter to execute the cell z = x + y z
_____no_output_____
MIT
Notebooks/PY0101EN-1-1-Types.ipynb
tibonobo/Coursera_IBM_DataScience
Lambda School Data Science, Unit 2: Predictive Modeling Applied Modeling, Module 2You will use your portfolio project dataset for all assignments this sprint. AssignmentComplete these tasks for your project, and document your work.- [ ] Plot the distribution of your target. - Regression problem: Is your target skewed? Then, log-transform it. - Classification: Are your classes imbalanced? Then, don't use just accuracy. And try `class_balance` parameter in scikit-learn.- [ ] Continue to clean and explore your data. Make exploratory visualizations.- [ ] Fit a model. Does it beat your baseline?- [ ] Share at least 1 visualization on Slack.You need to complete an initial model today, because the rest of the week, we're making model interpretation visualizations. Reading Today- [imbalance-learn](https://github.com/scikit-learn-contrib/imbalanced-learn)- [Learning from Imbalanced Classes](https://www.svds.com/tbt-learning-imbalanced-classes/)- [Machine Learning Meets Economics](http://blog.mldb.ai/blog/posts/2016/01/ml-meets-economics/)- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/) Yesterday- [Attacking discrimination with smarter machine learning](https://research.google.com/bigpicture/attacking-discrimination-in-ml/), by Google Research, with interactive visualizations. _"A threshold classifier essentially makes a yes/no decision, putting things in one category or another. We look at how these classifiers work, ways they can potentially be unfair, and how you might turn an unfair classifier into a fairer one. As an illustrative example, we focus on loan granting scenarios where a bank may grant or deny a loan based on a single, automatically computed number such as a credit score."_- [How Shopify Capital Uses Quantile Regression To Help Merchants Succeed](https://engineering.shopify.com/blogs/engineering/how-shopify-uses-machine-learning-to-help-our-merchants-grow-their-business)- [Maximizing Scarce Maintenance Resources with Data: Applying predictive modeling, precision at k, and clustering to optimize impact](https://towardsdatascience.com/maximizing-scarce-maintenance-resources-with-data-8f3491133050), **by Lambda DS3 student** Michael Brady. His blog post extends the Tanzania Waterpumps scenario, far beyond what's in the lecture notebook.- [Notebook about how to calculate expected value from a confusion matrix by treating it as a cost-benefit matrix](https://github.com/podopie/DAT18NYC/blob/master/classes/13-expected_value_cost_benefit_analysis.ipynb)- [Simple guide to confusion matrix terminology](https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/) by Kevin Markham, with video- [Visualizing Machine Learning Thresholds to Make Better Business Decisions](https://blog.insightdatascience.com/visualizing-machine-learning-thresholds-to-make-better-business-decisions-4ab07f823415)
import pandas as pd import seaborn as sns #import plotly.express as px %matplotlib inline import matplotlib.pyplot as plt from sklearn.metrics import accuracy_score from sklearn.metrics import mean_absolute_error,r2_score,mean_squared_error from sklearn.model_selection import train_test_split #import category_encoders as ce from sklearn.linear_model import LinearRegression from sklearn.impute import SimpleImputer from sklearn.ensemble import RandomForestClassifier from sklearn.pipeline import make_pipeline df = pd.read_csv('/content/openpowerlifting.csv') drops = ['Squat4Kg', 'Bench4Kg', 'Deadlift4Kg','Country','Place','Squat1Kg', 'Squat2Kg','Squat3Kg','Bench1Kg','Bench2Kg','Bench3Kg','Deadlift1Kg', 'Deadlift2Kg','Deadlift3Kg'] df = df.drop(columns=drops) df.dropna(inplace=True) df.shape X = df.drop(columns='Best3SquatKg') y = df['Best3SquatKg'] Xtrain, X_test, ytrain,y_test = train_test_split(X,y, test_size=0.40, random_state=42) X_train, X_val, y_train,y_val = train_test_split(Xtrain,ytrain, test_size=0.25, random_state=42) X_train.shape, X_test.shape, X_val.shape model = LinearRegression() features = ['Sex','Equipment','Age','BodyweightKg','Best3BenchKg','Best3DeadliftKg'] X = X_train[features].replace({'M':0,'F':1,'Raw':2,'Single-ply':3, 'Wraps':4,'Multi-ply':5}) y = y_train model.fit(X,y) y_pred = model.predict(X_val[features].replace({'M':0,'F':1,'Raw':2,'Single-ply':3, 'Wraps':4,'Multi-ply':5})) print('Validation R^2:', r2_score(y_val, y_pred)) print('Mean Absolute Error:', mean_absolute_error(y_val, y_pred)) plt.scatter(y_val, y_pred) plt.show()
_____no_output_____
MIT
assignment_applied_modeling_2.ipynb
justin-hsieh/DS-Unit-2-Applied-Modeling
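The first task in the assignment above (plotting the target distribution and log-transforming it if it is skewed) is not covered by the cell above. A minimal sketch reusing `y_train` from that cell could look like the following; it assumes that failed lifts can show up as non-positive values in this dataset (as is common for openpowerlifting exports), which you should check against your own copy before transforming:
```
import numpy as np
import matplotlib.pyplot as plt

# Distribution of the raw regression target on the training split
plt.hist(y_train, bins=50)
plt.title('Best3SquatKg (train)')
plt.show()

# A log transform only makes sense for positive values, so filter those out first
positive = y_train[y_train > 0]
plt.hist(np.log1p(positive), bins=50)
plt.title('log1p(Best3SquatKg), positive lifts only')
plt.show()
```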