papi/papi.ipynb
###Markdown
My personal notes on PAPI and hardware: http://icl.cs.utk.edu/papi/
###Code
! hostnamectl
! lspci | grep NVIDIA
! lscpu | egrep 'Model name|Socket|Thread|NUMA|CPU\(s\)'
! echo "CPU threads: $(grep -c processor /proc/cpuinfo)"
! nproc --all
! cat /proc/meminfo
! lsblk
! module avail papi
%%bash
module load papi papi-devel
module display papi papi-devel
%%bash
module load papi papi-devel
papi_avail
###Output
Available PAPI preset and user defined events plus hardware information.
--------------------------------------------------------------------------------
PAPI Version : 5.5.1.0
Vendor string and code : GenuineIntel (1)
Model string and code : Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz (85)
CPU Revision : 4.000000
CPUID Info : Family: 6 Model: 85 Stepping: 4
CPU Max Megahertz : 2101
CPU Min Megahertz : 1000
Hdw Threads per core : 2
Cores per Socket : 22
Sockets : 2
NUMA Nodes : 2
CPUs per Node : 44
Total CPUs : 88
Running in a VM : no
Number Hardware Counters : 11
Max Multiplex Counters : 384
--------------------------------------------------------------------------------
================================================================================
PAPI Preset Events
================================================================================
Name Code Avail Deriv Description (Note)
PAPI_L1_DCM 0x80000000 Yes No Level 1 data cache misses
PAPI_L1_ICM 0x80000001 Yes No Level 1 instruction cache misses
PAPI_L2_DCM 0x80000002 Yes Yes Level 2 data cache misses
PAPI_L2_ICM 0x80000003 Yes No Level 2 instruction cache misses
PAPI_L3_DCM 0x80000004 No No Level 3 data cache misses
PAPI_L3_ICM 0x80000005 No No Level 3 instruction cache misses
PAPI_L1_TCM 0x80000006 Yes Yes Level 1 cache misses
PAPI_L2_TCM 0x80000007 Yes No Level 2 cache misses
PAPI_L3_TCM 0x80000008 Yes No Level 3 cache misses
PAPI_CA_SNP 0x80000009 Yes No Requests for a snoop
PAPI_CA_SHR 0x8000000a Yes No Requests for exclusive access to shared cache line
PAPI_CA_CLN 0x8000000b Yes No Requests for exclusive access to clean cache line
PAPI_CA_INV 0x8000000c No No Requests for cache line invalidation
PAPI_CA_ITV 0x8000000d Yes No Requests for cache line intervention
PAPI_L3_LDM 0x8000000e Yes No Level 3 load misses
PAPI_L3_STM 0x8000000f No No Level 3 store misses
PAPI_BRU_IDL 0x80000010 No No Cycles branch units are idle
PAPI_FXU_IDL 0x80000011 No No Cycles integer units are idle
PAPI_FPU_IDL 0x80000012 No No Cycles floating point units are idle
PAPI_LSU_IDL 0x80000013 No No Cycles load/store units are idle
PAPI_TLB_DM 0x80000014 Yes Yes Data translation lookaside buffer misses
PAPI_TLB_IM 0x80000015 Yes No Instruction translation lookaside buffer misses
PAPI_TLB_TL 0x80000016 No No Total translation lookaside buffer misses
PAPI_L1_LDM 0x80000017 Yes No Level 1 load misses
PAPI_L1_STM 0x80000018 Yes No Level 1 store misses
PAPI_L2_LDM 0x80000019 Yes No Level 2 load misses
PAPI_L2_STM 0x8000001a Yes No Level 2 store misses
PAPI_BTAC_M 0x8000001b No No Branch target address cache misses
PAPI_PRF_DM 0x8000001c Yes No Data prefetch cache misses
PAPI_L3_DCH 0x8000001d No No Level 3 data cache hits
PAPI_TLB_SD 0x8000001e No No Translation lookaside buffer shootdowns
PAPI_CSR_FAL 0x8000001f No No Failed store conditional instructions
PAPI_CSR_SUC 0x80000020 No No Successful store conditional instructions
PAPI_CSR_TOT 0x80000021 No No Total store conditional instructions
PAPI_MEM_SCY 0x80000022 No No Cycles Stalled Waiting for memory accesses
PAPI_MEM_RCY 0x80000023 No No Cycles Stalled Waiting for memory Reads
PAPI_MEM_WCY 0x80000024 Yes No Cycles Stalled Waiting for memory writes
PAPI_STL_ICY 0x80000025 Yes No Cycles with no instruction issue
PAPI_FUL_ICY 0x80000026 Yes Yes Cycles with maximum instruction issue
PAPI_STL_CCY 0x80000027 Yes No Cycles with no instructions completed
PAPI_FUL_CCY 0x80000028 Yes No Cycles with maximum instructions completed
PAPI_HW_INT 0x80000029 No No Hardware interrupts
PAPI_BR_UCN 0x8000002a Yes Yes Unconditional branch instructions
PAPI_BR_CN 0x8000002b Yes No Conditional branch instructions
PAPI_BR_TKN 0x8000002c Yes Yes Conditional branch instructions taken
PAPI_BR_NTK 0x8000002d Yes No Conditional branch instructions not taken
PAPI_BR_MSP 0x8000002e Yes No Conditional branch instructions mispredicted
PAPI_BR_PRC 0x8000002f Yes Yes Conditional branch instructions correctly predicted
PAPI_FMA_INS 0x80000030 No No FMA instructions completed
PAPI_TOT_IIS 0x80000031 No No Instructions issued
PAPI_TOT_INS 0x80000032 Yes No Instructions completed
PAPI_INT_INS 0x80000033 No No Integer instructions
PAPI_FP_INS 0x80000034 No No Floating point instructions
PAPI_LD_INS 0x80000035 Yes No Load instructions
PAPI_SR_INS 0x80000036 Yes No Store instructions
PAPI_BR_INS 0x80000037 Yes No Branch instructions
PAPI_VEC_INS 0x80000038 No No Vector/SIMD instructions (could include integer)
PAPI_RES_STL 0x80000039 Yes No Cycles stalled on any resource
PAPI_FP_STAL 0x8000003a No No Cycles the FP unit(s) are stalled
PAPI_TOT_CYC 0x8000003b Yes No Total cycles
PAPI_LST_INS 0x8000003c Yes Yes Load/store instructions completed
PAPI_SYC_INS 0x8000003d No No Synchronization instructions completed
PAPI_L1_DCH 0x8000003e No No Level 1 data cache hits
PAPI_L2_DCH 0x8000003f No No Level 2 data cache hits
PAPI_L1_DCA 0x80000040 No No Level 1 data cache accesses
PAPI_L2_DCA 0x80000041 Yes No Level 2 data cache accesses
PAPI_L3_DCA 0x80000042 Yes Yes Level 3 data cache accesses
PAPI_L1_DCR 0x80000043 No No Level 1 data cache reads
PAPI_L2_DCR 0x80000044 Yes No Level 2 data cache reads
PAPI_L3_DCR 0x80000045 Yes No Level 3 data cache reads
PAPI_L1_DCW 0x80000046 No No Level 1 data cache writes
PAPI_L2_DCW 0x80000047 Yes Yes Level 2 data cache writes
PAPI_L3_DCW 0x80000048 Yes No Level 3 data cache writes
PAPI_L1_ICH 0x80000049 No No Level 1 instruction cache hits
PAPI_L2_ICH 0x8000004a Yes No Level 2 instruction cache hits
PAPI_L3_ICH 0x8000004b No No Level 3 instruction cache hits
PAPI_L1_ICA 0x8000004c No No Level 1 instruction cache accesses
PAPI_L2_ICA 0x8000004d Yes No Level 2 instruction cache accesses
PAPI_L3_ICA 0x8000004e Yes No Level 3 instruction cache accesses
PAPI_L1_ICR 0x8000004f No No Level 1 instruction cache reads
PAPI_L2_ICR 0x80000050 Yes No Level 2 instruction cache reads
PAPI_L3_ICR 0x80000051 Yes No Level 3 instruction cache reads
PAPI_L1_ICW 0x80000052 No No Level 1 instruction cache writes
PAPI_L2_ICW 0x80000053 No No Level 2 instruction cache writes
PAPI_L3_ICW 0x80000054 No No Level 3 instruction cache writes
PAPI_L1_TCH 0x80000055 No No Level 1 total cache hits
PAPI_L2_TCH 0x80000056 No No Level 2 total cache hits
PAPI_L3_TCH 0x80000057 No No Level 3 total cache hits
PAPI_L1_TCA 0x80000058 No No Level 1 total cache accesses
PAPI_L2_TCA 0x80000059 Yes Yes Level 2 total cache accesses
PAPI_L3_TCA 0x8000005a Yes No Level 3 total cache accesses
PAPI_L1_TCR 0x8000005b No No Level 1 total cache reads
PAPI_L2_TCR 0x8000005c Yes Yes Level 2 total cache reads
PAPI_L3_TCR 0x8000005d Yes Yes Level 3 total cache reads
PAPI_L1_TCW 0x8000005e No No Level 1 total cache writes
PAPI_L2_TCW 0x8000005f Yes Yes Level 2 total cache writes
PAPI_L3_TCW 0x80000060 Yes No Level 3 total cache writes
PAPI_FML_INS 0x80000061 No No Floating point multiply instructions
PAPI_FAD_INS 0x80000062 No No Floating point add instructions
PAPI_FDV_INS 0x80000063 No No Floating point divide instructions
PAPI_FSQ_INS 0x80000064 No No Floating point square root instructions
PAPI_FNV_INS 0x80000065 No No Floating point inverse instructions
PAPI_FP_OPS 0x80000066 No No Floating point operations
PAPI_SP_OPS 0x80000067 Yes Yes Floating point operations; optimized to count scaled single precision vector operations
PAPI_DP_OPS 0x80000068 Yes Yes Floating point operations; optimized to count scaled double precision vector operations
PAPI_VEC_SP 0x80000069 Yes Yes Single precision vector/SIMD instructions
PAPI_VEC_DP 0x8000006a Yes Yes Double precision vector/SIMD instructions
PAPI_REF_CYC 0x8000006b Yes No Reference clock cycles
--------------------------------------------------------------------------------
Of 108 possible events, 59 are available, of which 18 are derived.
avail.c PASSED
###Markdown
Show only the available events
###Code
%%bash
module load papi papi-devel
papi_avail | egrep 'Deriv|Yes'
%%bash
module load papi papi-devel
papi_component_avail
%%bash
gfortran --version
%%bash
module load papi papi-devel
echo $CPATH
echo $LIBRARY_PATH
! ls /opt/bullxde/perftools/papi/5.5.1.0/include
! ls /opt/bullxde/perftools/papi/5.5.1.0/lib64
###Output
libpapi.a libpapi.so.5 libpapi.so.5.5.1.0 libpfm.so libpfm.so.4.8.0
libpapi.so libpapi.so.5.5.1 libpfm.a libpfm.so.4 pkgconfig
###Markdown
Test: trying one small program to see if PAPI works.
###Code
%%writefile test01.f90
!-----------------------------------------------------------------------
program test01
implicit none
include 'f90papi.h'
integer, parameter :: N = 512
double precision, dimension(N, N) :: a, b
double precision :: t1, t2, rate
integer :: i, j
integer, parameter :: max_event = 3
integer, dimension(max_event) :: event
integer(kind=8), dimension(max_event) :: values
integer :: retval
event(1) = PAPI_LD_INS
event(2) = PAPI_SR_INS
!event(x) = PAPI_L1_TCM
!event(x) = PAPI_L2_TCM
event(3) = PAPI_L3_TCM
  call init01(a, b, N) ! initialize matrix cells
call PAPIF_start_counters (event, max_event, retval)
if (retval /= PAPI_OK) then
call PAPIF_perror('PAPIF_start_counters')
stop
endif
call cpu_time(t1) ! CPU elapsed time in seconds
do j = 1, N ! transpose the matrix
do i = 1, N
b(i, j) = a(j, i)
enddo
enddo
call cpu_time(t2) ! CPU elapsed time in seconds
! Read out PAPI counters
call PAPIF_read_counters(values, max_event, retval)
if (retval /= PAPI_OK) then
call PAPIF_perror('PAPIF_read_counters')
stop
endif
call check01(a, b, N) ! check the transpose
! Print Timings
print*, 'PAPI_LD_INS', values(1)
print*, 'PAPI_SR_INS', values(2)
!print*, 'PAPI_L1_TCM', values(x)
!print*, 'PAPI_L2_TCM', values(x)
print*, 'PAPI_L3_TCM', values(3)
rate = 2 * N * N / (1024 * 1024 * (t2 - t1))
print '(a, i0, a, f10.6, a, f6.1, a)', &
"N=", N, ", T=", t2 - t1, " s, Rate=", rate, " MB/s"
contains
subroutine init01(a, b, N)
implicit none
integer, intent(in) :: N
double precision, intent(inout) :: a(N, N), b(N, N)
integer :: i, j
do i = 1, N
do j = 1, N
a(i, j) = 1.0
b(i, j) = 0.0
enddo
enddo
end subroutine
subroutine check01(a, b, N)
implicit none
integer, intent(in) :: N
double precision, intent(in) :: a(N, N), b(N, N)
integer :: i, j
do i = 1, N
do j = 1, N
if ( a(i, j) /= b(i, j) ) then
print *, "Error: ", i, j
endif
enddo
enddo
end subroutine
end program
%%bash
module load papi papi-devel
gfortran -o test01 test01.f90 \
  -I /opt/bullxde/perftools/papi/5.5.1.0/include -lpapi
%%bash
module load papi papi-devel
./test01
###Output
PAPI_LD_INS 1838017
PAPI_SR_INS 525961
PAPI_L3_TCM 1379
N=512, T= 0.001251 s, Rate= 399.7 MB/s
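###Markdown
Quick sanity check on the printed rate: the transpose touches 2*N^2 = 2*512^2 = 524,288 matrix elements, and the code computes the rate as 2*N^2 / (1024*1024*(t2 - t1)) = 524288 / (1048576 * 0.001251 s) ≈ 399.7, which matches the output above. Note that this counts matrix elements (in units of 2^20) per second rather than bytes; each double is 8 bytes, so the data actually touched is roughly 8 * 399.7 ≈ 3200 MiB/s.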

tutorials/W1D1_BasicsAndPytorch/student/W1D1_Tutorial1.ipynb
###Markdown
Tutorial 1: PyTorch

**Week 1, Day 1: Basics and PyTorch**

**By Neuromatch Academy**

__Content creators:__ Shubh Pachchigar, Vladimir Haltakov, Matthew Sargent, Konrad Kording

__Content reviewers:__ Deepak Raya, Siwei Bai, Kelson Shilling-Scrivo

__Content editors:__ Anoop Kulkarni, Spiros Chavlis

__Production editors:__ Arush Tagade, Spiros Chavlis

**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**

---

Tutorial Objectives

We have a few specific objectives for this tutorial:

* Learn about PyTorch and tensors
* Tensor Manipulations
* Data Loading
* GPUs and Cuda Tensors
* Train NaiveNet
* Get to know your pod
* Start thinking about the course as a whole
###Code
# @title Tutorial slides
# @markdown These are the slides for the videos in this tutorial today
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/wcjrv/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
---

Setup

Throughout your Neuromatch tutorials, most (probably all!) notebooks contain setup cells. These cells will import the required Python packages (e.g., PyTorch, NumPy), set global or environment variables, and load in helper functions for things like plotting. In some tutorials, you will notice that we install some dependencies even if they are preinstalled on Google Colab or Kaggle. This happens because we have added automation to our repository through [GitHub Actions](https://docs.github.com/en/actions/learn-github-actions/introduction-to-github-actions).

Be sure to run all of the cells in the setup section. Feel free to expand them and have a look at what you are loading in, but you should be able to fulfill the learning objectives of every tutorial without having to look at these cells.

If you start building your own projects on this code base, we highly recommend looking at them in more detail.
###Code
# @title Install dependencies
!pip install pandas --quiet
!pip install git+https://github.com/NeuromatchAcademy/evaltools --quiet
from evaltools.airtable import AirtableForm
# Imports
import time
import torch
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from torch import nn
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
# @title Figure Settings
import ipywidgets as widgets
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/content-creation/main/nma.mplstyle")
# @title Helper Functions
atform = AirtableForm('appn7VdPRseSoMXEG','W1D1_T1','https://portal.neuromatchacademy.org/api/redirect/to/97e94a29-0b3a-4e16-9a8d-f6838a5bd83d')
def checkExercise1(A, B, C, D):
"""
Helper function for checking exercise.
Args:
A: torch.Tensor
B: torch.Tensor
C: torch.Tensor
D: torch.Tensor
Returns:
Nothing.
"""
errors = []
# TODO better errors and error handling
if not torch.equal(A.to(int),torch.ones(20, 21).to(int)):
errors.append(f"Got: {A} \n Expected: {torch.ones(20, 21)} (shape: {torch.ones(20, 21).shape})")
if not np.array_equal( B.numpy(),np.vander([1, 2, 3], 4)):
errors.append("B is not a tensor containing the elements of Z ")
if C.shape != (20, 21):
errors.append("C is not the correct shape ")
if not torch.equal(D, torch.arange(4, 41, step=2)):
errors.append("D does not contain the correct elements")
if errors == []:
print("All correct!")
else:
[print(e) for e in errors]
def timeFun(f, dim, iterations, device='cpu'):
  t_total = 0
  for _ in range(iterations):
    start = time.time()
    f(dim, device)
    end = time.time()
    t_total += end - start
  print(f"time taken for {iterations} iterations of {f.__name__}({dim}, {device}): {t_total:.5f}")
###Output
_____no_output_____
###Markdown
**Important note: Google Colab users**

*Scratch Code Cells*

If you want to quickly try out something or take a look at the data you can use scratch code cells. They allow you to run Python code, but will not mess up the structure of your notebook. To open a new scratch cell go to *Insert* → *Scratch code cell*.

Section 1: Welcome to Neuromatch Deep learning course

*Time estimate: ~25mins*
###Code
# @title Video 1: Welcome and History
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Av411n7oL", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"ca21SNqt78I", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing
atform.add_event('Video 1: Welcome and History')
display(out)
###Output
_____no_output_____
###Markdown
This will be an intensive 3-week adventure. We will all learn Deep Learning. In a group. Groups need standards. Read our [Code of Conduct](https://docs.google.com/document/d/1eHKIkaNbAlbx_92tLQelXnicKXEcvFzlyzzeWjEtifM/edit?usp=sharing).
###Code
# @title Video 2: Why DL is cool
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1gf4y1j7UZ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"l-K6495BN-4", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 2: Why DL is cool')
display(out)
###Output
_____no_output_____
###Markdown
**Describe what you hope to get out of this course in about 100 words.**

---

Section 2: The Basics of PyTorch

*Time estimate: ~2 hours 05 mins*

PyTorch is a Python-based scientific computing package targeted at two sets of audiences:

- A replacement for NumPy to use the power of GPUs
- A deep learning platform that provides significant flexibility and speed

At its core, PyTorch provides a few key features:

- A multidimensional [Tensor](https://pytorch.org/docs/stable/tensors.html) object, similar to [NumPy Array](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html) but with GPU acceleration.
- An optimized **autograd** engine for automatically computing derivatives.
- A clean, modular API for building and deploying **deep learning models**.

You can find more information about PyTorch in the appendix.

Section 2.1: Creating Tensors
###Code
# @title Video 3: Making Tensors
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Rw411d7Uy", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"jGKd_4tPGrw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 3: Making Tensors')
display(out)
###Output
_____no_output_____
###Markdown
There are various ways of creating tensors, and when doing any real deep learning project we will usually have to do so.

**Construct tensors directly:**

---
###Code
# we can construct a tensor directly from some common python iterables,
# such as list and tuple nested iterables can also be handled as long as the
# dimensions make sense
# tensor from a list
a = torch.tensor([0, 1, 2])
#tensor from a tuple of tuples
b = ((1.0, 1.1), (1.2, 1.3))
b = torch.tensor(b)
# tensor from a numpy array
c = np.ones([2, 3])
c = torch.tensor(c)
print(f"Tensor a: {a}")
print(f"Tensor b: {b}")
print(f"Tensor c: {c}")
###Output
_____no_output_____
###Markdown
**Some common tensor constructors:**

---
###Code
# the numerical arguments we pass to these constructors
# determine the shape of the output tensor
x = torch.ones(5, 3)
y = torch.zeros(2)
z = torch.empty(1, 1, 5)
print(f"Tensor x: {x}")
print(f"Tensor y: {y}")
print(f"Tensor z: {z}")
###Output
_____no_output_____
###Markdown
Notice that ```.empty()``` does not return zeros, but seemingly random small numbers. Unlike ```.zeros()```, which initialises the elements of the tensor with zeros, ```.empty()``` just allocates the memory. It is hence a bit faster if you are looking to just create a tensor.

**Creating random tensors and tensors like other tensors:**

---
###Code
# there are also constructors for random numbers
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
# there are also constructors that allow us to construct
# a tensor according to the above constructors, but with
# dimensions equal to another tensor
c = torch.zeros_like(a)
d = torch.rand_like(c)
print(f"Tensor a: {a}")
print(f"Tensor b: {b}")
print(f"Tensor c: {c}")
print(f"Tensor d: {d}")
###Output
_____no_output_____
###Markdown
*Reproducibility*:

- PyTorch random number generator: You can use `torch.manual_seed()` to seed the RNG for all devices (both CPU and CUDA):

```python
import torch
torch.manual_seed(0)
```

- For custom operators, you might need to set the Python seed as well:

```python
import random
random.seed(0)
```

- Random number generators in other libraries:

```python
import numpy as np
np.random.seed(0)
```

Here, we define for you a function called `set_seed` that does the job for you!
###Code
def set_seed(seed=None, seed_torch=True):
"""
Function that controls randomness. NumPy and random modules must be imported.
Args:
seed : Integer
A non-negative integer that defines the random state. Default is `None`.
seed_torch : Boolean
If `True` sets the random seed for pytorch tensors, so pytorch module
must be imported. Default is `True`.
Returns:
Nothing.
"""
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
###Output
_____no_output_____
###Markdown
Now, let's use the `set_seed` function in the previous example. Execute the cell multiple times to verify that the numbers printed are always the same.
###Code
def simplefun(seed=True, my_seed=None):
if seed:
set_seed(seed=my_seed)
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
print("Tensor a: ", a)
print("Tensor b: ", b)
simplefun(seed=True, my_seed=0) # Turn `seed` to `False` or change `my_seed`
###Output
_____no_output_____
###Markdown
**Numpy-like number ranges:**

---

The ```.arange()``` and ```.linspace()``` behave how you would expect them to if you are familiar with numpy.
###Code
a = torch.arange(0, 10, step=1)
b = np.arange(0, 10, step=1)
c = torch.linspace(0, 5, steps=11)
d = np.linspace(0, 5, num=11)
print(f"Tensor a: {a}\n")
print(f"Numpy array b: {b}\n")
print(f"Tensor c: {c}\n")
print(f"Numpy array d: {d}\n")
###Output
_____no_output_____
###Markdown
Coding Exercise 2.1: Creating Tensors

Below you will find some incomplete code. Fill in the missing code to construct the specified tensors.

We want the tensors:

$A:$ 20 by 21 tensor consisting of ones

$B:$ a tensor with elements equal to the elements of numpy array $Z$

$C:$ a tensor with the same number of elements as $A$ but with values $\sim U(0,1)$

$D:$ a 1D tensor containing the even numbers between 4 and 40 inclusive.
###Code
def tensor_creation(Z):
"""A function that creates various tensors.
Args:
Z (numpy.ndarray): An array of shape
Returns:
A : 20 by 21 tensor consisting of ones
B : a tensor with elements equal to the elements of numpy array Z
C : a tensor with the same number of elements as A but with values ∼U(0,1)
D : a 1D tensor containing the even numbers between 4 and 40 inclusive.
"""
#################################################
## TODO for students: fill in the missing code
## from the first expression
raise NotImplementedError("Student exercise: say what they should have done")
#################################################
A = ...
B = ...
C = ...
D = ...
return A, B, C, D
# add timing to airtable
atform.add_event('Coding Exercise 2.1: Creating Tensors')
# numpy array to copy later
Z = np.vander([1, 2, 3], 4)
# Uncomment below to check your function!
# A, B, C, D = tensor_creation(Z)
# checkExercise1(A, B, C, D)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_ad4f6c0f.py)

```
All correct!
```

Section 2.2: Operations in PyTorch

**Tensor-Tensor operations**

We can perform operations on tensors using methods under ```torch.```
###Code
# @title Video 4: Tensor Operators
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1G44y127As", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"R1R8VoYXBVA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 4: Tensor Operators')
display(out)
###Output
_____no_output_____
###Markdown
**Tensor-Tensor operations**

We can perform operations on tensors using methods under ```torch.```
###Code
a = torch.ones(5, 3)
b = torch.rand(5, 3)
c = torch.empty(5, 3)
d = torch.empty(5, 3)
# this only works if c and d already exist
torch.add(a, b, out=c)
#Pointwise Multiplication of a and b
torch.multiply(a, b, out=d)
print(c)
print(d)
###Output
_____no_output_____
###Markdown
However, in PyTorch most common Python operators are overridden. The common standard arithmetic operators (+, -, *, /, and **) have all been lifted to elementwise operations.
###Code
x = torch.tensor([1, 2, 4, 8])
y = torch.tensor([1, 2, 3, 4])
x + y, x - y, x * y, x / y, x**y # The ** operator is exponentiation
###Output
_____no_output_____
###Markdown
**Tensor Methods**

Tensors also have a number of common arithmetic operations built in. A full list of **all** methods can be found in the appendix (there are a lot!). All of these operations should have similar syntax to their numpy equivalents. (Feel free to skip if you already know this!)
###Code
x = torch.rand(3, 3)
print(x)
print("\n")
# sum() - note the axis is the axis you move across when summing
print(f"Sum of every element of x: {x.sum()}")
print(f"Sum of the columns of x: {x.sum(axis=0)}")
print(f"Sum of the rows of x: {x.sum(axis=1)}")
print("\n")
print(f"Mean value of all elements of x {x.mean()}")
print(f"Mean values of the columns of x {x.mean(axis=0)}")
print(f"Mean values of the rows of x {x.mean(axis=1)}")
###Output
_____no_output_____
###Markdown
**Matrix Operations**

The ```@``` symbol is overridden to represent matrix multiplication. You can also use ```torch.matmul()``` to multiply tensors. For dot multiplication, you can use ```torch.dot()```, or manipulate the axes of your tensors and do matrix multiplication (we will cover that in the next section).

Transposes of 2D tensors are obtained using ```torch.t()``` or ```Tensor.T```. Note the lack of brackets for ```Tensor.T``` - it is an attribute, not a method.

Coding Exercise 2.2: Simple tensor operations

Below are two expressions involving operations on matrices.

$$ \textbf{A} = \begin{bmatrix} 2 & 4 \\ 5 & 7 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 2 & 3 \end{bmatrix} + \begin{bmatrix} 10 & 10 \\ 12 & 1 \end{bmatrix} $$

and

$$ b = \begin{bmatrix} 3 \\ 5 \\ 7 \end{bmatrix} \cdot \begin{bmatrix} 2 \\ 4 \\ 8 \end{bmatrix} $$

The code block below, which computes these expressions using PyTorch, is incomplete - fill in the missing lines.
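Before the exercise, here is a minimal, self-contained illustration of the operators described above; the small matrices and vectors are made up purely for demonstration and are unrelated to the expressions in the exercise:

```python
import torch

M = torch.tensor([[1., 2.], [3., 4.]])
N = torch.tensor([[0., 1.], [1., 0.]])
v = torch.tensor([1., 2., 3.])
w = torch.tensor([4., 5., 6.])

print(M @ N)               # matrix multiplication via the @ operator
print(torch.matmul(M, N))  # the same product via torch.matmul()
print(torch.dot(v, w))     # dot product of two 1D tensors -> tensor(32.)
print(torch.t(M))          # transpose of a 2D tensor via torch.t()
print(M.T)                 # transpose via the .T attribute (note: no brackets)
```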
###Code
def simple_operations(a1: torch.Tensor, a2: torch.Tensor, a3: torch.Tensor):
################################################
  ## TODO for students: complete the first computation using the argument matrices
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
################################################
# multiplication of tensor a1 with tensor a2 and then add it with tensor a3
answer = ...
return answer
# add timing to airtable
atform.add_event('Coding Exercise 2.2 : Simple tensor operations-simple_operations')
# Computing expression 1:
# init our tensors
a1 = torch.tensor([[2, 4], [5, 7]])
a2 = torch.tensor([[1, 1], [2, 3]])
a3 = torch.tensor([[10, 10], [12, 1]])
## uncomment to test your function
# A = simple_operations(a1, a2, a3)
# print(A)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_5562ea1d.py)

```
tensor([[20, 24],
        [31, 27]])
```
###Code
def dot_product(b1: torch.Tensor, b2: torch.Tensor):
###############################################
  ## TODO for students: complete the second computation using the argument matrices
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
###############################################
# Use torch.dot() to compute the dot product of two tensors
product = ...
return product
# add timing to airtable
atform.add_event('Coding Exercise 2.2 : Simple tensor operations-dot_product')
# Computing expression 2:
b1 = torch.tensor([3, 5, 7])
b2 = torch.tensor([2, 4, 8])
## Uncomment to test your function
# b = dot_product(b1, b2)
# print(b)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_00491ea4.py)

```
tensor(82)
```

Section 2.3: Manipulating Tensors in PyTorch
###Code
# @title Video 5: Tensor Indexing
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1BM4y1K7pD", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"0d0KSJ3lJbg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 5: Tensor Indexing')
display(out)
###Output
_____no_output_____
###Markdown
**Indexing**

Just as in numpy, elements in a tensor can be accessed by index. As in any numpy array, the first element has index 0 and ranges are specified to include the first element but exclude the last. We can access elements according to their relative position to the end of the list by using negative indices. Indexing is also referred to as slicing.

For example, [-1] selects the last element; [1:3] selects the second and the third elements, and [:-2] will select all elements excluding the last and second-to-last elements.
###Code
x = torch.arange(0, 10)
print(x)
print(x[-1])
print(x[1:3])
print(x[:-2])
###Output
_____no_output_____
###Markdown
When we have multidimensional tensors, indexing rules work the same way as numpy.
###Code
# make a 5D tensor
x = torch.rand(1, 2, 3, 4, 5)
print(f" shape of x[0]:{x[0].shape}")
print(f" shape of x[0][0]:{x[0][0].shape}")
print(f" shape of x[0][0][0]:{x[0][0][0].shape}")
###Output
_____no_output_____
###Markdown
**Flatten and reshape**

There are various methods for reshaping tensors. It is common to have to express 2D data in 1D format. Similarly, it is also common to have to reshape a 1D tensor into a 2D tensor. We can achieve this with the ```.flatten()``` and ```.reshape()``` methods.
###Code
z = torch.arange(12).reshape(6, 2)
print(f"Original z: \n {z}")
# 2D -> 1D
z = z.flatten()
print(f"Flattened z: \n {z}")
# and back to 2D
z = z.reshape(3, 4)
print(f"Reshaped (3x4) z: \n {z}")
###Output
_____no_output_____
###Markdown
You will also see the ```.view()``` method used a lot to reshape tensors. There is a subtle difference between ```.view()``` and ```.reshape()``` (a short sketch of it follows below), though for now we will just use ```.reshape()```. The documentation can be found in the appendix.

**Squeezing tensors**

When processing batches of data, you will quite often be left with singleton dimensions, e.g., [1, 10] or [256, 1, 3]. This dimension can quite easily mess up your matrix operations if you don't plan on it being there...

In order to compress tensors along their singleton dimensions we can use the ```.squeeze()``` method. We can use the ```.unsqueeze()``` method to do the opposite.
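Here is the promised sketch of the ```.view()``` vs ```.reshape()``` difference; it relies on ```.view()``` requiring contiguous memory (it never copies), while ```.reshape()``` falls back to copying when it has to:

```python
import torch

x = torch.arange(6).reshape(2, 3)
t = x.t()                 # the transpose is a non-contiguous view of the same data

print(t.reshape(6))       # works: reshape copies the data when needed
try:
    t.view(6)             # fails: view cannot reinterpret non-contiguous memory
except RuntimeError as err:
    print("view() raised:", err)
```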
###Code
x = torch.randn(1, 10)
# printing the zeroth element of the tensor will not give us the first number!
print(x.shape)
print(f"x[0]: {x[0]}")
###Output
_____no_output_____
###Markdown
Because of that pesky singleton dimension, x[0] gave us the first row instead!
###Code
# lets get rid of that singleton dimension and see what happens now
x = x.squeeze(0)
print(x.shape)
print(f"x[0]: {x[0]}")
# adding singleton dimensions works a similar way, and is often used when tensors
# being added need same number of dimensions
y = torch.randn(5, 5)
print(f"shape of y: {y.shape}")
# lets insert a singleton dimension
y = y.unsqueeze(1)
print(f"shape of y: {y.shape}")
###Output
_____no_output_____
###Markdown
**Permutation**

Sometimes our dimensions will be in the wrong order! For example, we may be dealing with RGB images with dim [3x48x64], but our pipeline expects the colour dimension to be the last dimension, i.e., [48x64x3]. To get around this we can use ```.permute()```
###Code
# `x` has dimensions [color,image_height,image_width]
x = torch.rand(3, 48, 64)
# we want to permute our tensor to be [ image_height , image_width , color ]
x = x.permute(1, 2, 0)
# permute(1,2,0) means:
# the 0th dim of my new tensor = the 1st dim of my old tensor
# the 1st dim of my new tensor = the 2nd
# the 2nd dim of my new tensor = the 0th
print(x.shape)
###Output
_____no_output_____
###Markdown
You may also see ```.transpose()``` used. This works in a similar way as permute, but can only swap two dimensions at once.

**Concatenation**

In this example, we concatenate two matrices along rows (axis 0, the first element of the shape) vs. columns (axis 1, the second element of the shape). We can see that the first output tensor's axis-0 length (6) is the sum of the two input tensors' axis-0 lengths (3+3); while the second output tensor's axis-1 length (8) is the sum of the two input tensors' axis-1 lengths (4+4).
###Code
# Create two tensors of the same shape
x = torch.arange(12, dtype=torch.float32).reshape((3, 4))
y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])
#concatenate them along rows
cat_rows = torch.cat((x, y), dim=0)
# concatenate along columns
cat_cols = torch.cat((x, y), dim=1)
# printing outputs
print('Concatenated by rows: shape{} \n {}'.format(list(cat_rows.shape), cat_rows))
print('\n Concatenated by colums: shape{} \n {}'.format(list(cat_cols.shape), cat_cols))
###Output
_____no_output_____
###Markdown
**Conversion to Other Python Objects**

Converting to a NumPy tensor, or vice versa, is easy. The converted result does not share memory. This minor inconvenience is actually quite important: when you perform operations on the CPU or on GPUs, you do not want to halt computation, waiting to see whether the NumPy package of Python might want to be doing something else with the same chunk of memory.

When converting to a numpy array, the information being tracked by the tensor will be lost, i.e., the computational graph. This will be covered in detail when you are introduced to autograd tomorrow!
###Code
x = torch.randn(5)
print(f"x: {x} | x type: {x.type()}")
y = x.numpy()
print(f"y: {y} | y type: {type(y)}")
z = torch.tensor(y)
print(f"z: {z} | z type: {z.type()}")
###Output
_____no_output_____
###Markdown
To convert a size-1 tensor to a Python scalar, we can invoke the item function or Python’s built-in functions.
###Code
a = torch.tensor([3.5])
a, a.item(), float(a), int(a)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.3: Manipulating Tensors

Using a combination of the methods discussed above, complete the functions below.

**Function A**

This function takes in two 2D tensors $A$ and $B$ and returns the column sum of $A$ multiplied by the sum of all the elements of $B$, i.e., a scalar, e.g.:

$ A = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} \,$ and $ B = \begin{bmatrix} 1 & 2 & 3 \\ 1 & 2 & 3 \end{bmatrix} \,$ so $ \, Out = \begin{bmatrix} 2 & 2 \end{bmatrix} \cdot 12 = \begin{bmatrix} 24 & 24 \end{bmatrix}$

**Function B**

This function takes in a square matrix $C$ and returns a 2D tensor consisting of a flattened $C$ with the index of each element appended to this tensor in the row dimension, e.g.:

$ C = \begin{bmatrix} 2 & 3 \\ -1 & 10 \end{bmatrix} \,$ so $ \, Out = \begin{bmatrix} 0 & 2 \\ 1 & 3 \\ 2 & -1 \\ 3 & 10 \end{bmatrix}$

**Hint:** pay close attention to singleton dimensions

**Function C**

This function takes in two 2D tensors $D$ and $E$. If the dimensions allow it, this function returns the elementwise sum of $D$-shaped $E$, and $D$; else this function returns a 1D tensor that is the concatenation of the two tensors, e.g.:

$ D = \begin{bmatrix} 1 & -1 \\ -1 & 3 \end{bmatrix} \,$ and $ E = \begin{bmatrix} 2 & 3 & 0 & 2 \end{bmatrix} \,$ so $ \, Out = \begin{bmatrix} 3 & 2 \\ -1 & 5 \end{bmatrix}$

$ D = \begin{bmatrix} 1 & -1 \\ -1 & 3 \end{bmatrix}$ and $ \, E = \begin{bmatrix} 2 & 3 & 0 \end{bmatrix} \,$ so $ \, Out = \begin{bmatrix} 1 & -1 & -1 & 3 & 2 & 3 & 0 \end{bmatrix}$

**Hint:** `torch.numel()` is an easy way of finding the number of elements in a tensor
###Code
def functionA(my_tensor1, my_tensor2):
"""
This function takes in two 2D tensors `my_tensor1` and `my_tensor2`
and returns the column sum of
  `my_tensor1` multiplied by the sum of all the elements of `my_tensor2`,
i.e., a scalar.
Args:
my_tensor1: torch.Tensor
my_tensor2: torch.Tensor
  Returns:
output: torch.Tensor
The multiplication of the column sum of `my_tensor1` by the sum of
`my_tensor2`.
"""
################################################
## TODO for students: complete functionA
raise NotImplementedError("Student exercise: complete function A")
################################################
# TODO multiplication the sum of the tensors
output = ...
return output
def functionB(my_tensor):
"""
This function takes in a square matrix `my_tensor` and returns a 2D tensor
consisting of a flattened `my_tensor` with the index of each element
appended to this tensor in the row dimension.
Args:
my_tensor: torch.Tensor
  Returns:
output: torch.Tensor
Concatenated tensor.
"""
################################################
## TODO for students: complete functionB
raise NotImplementedError("Student exercise: complete function B")
################################################
# TODO flatten the tensor `my_tensor`
my_tensor = ...
# TODO create the idx tensor to be concatenated to `my_tensor`
idx_tensor = ...
# TODO concatenate the two tensors
output = ...
return output
def functionC(my_tensor1, my_tensor2):
"""
This function takes in two 2D tensors `my_tensor1` and `my_tensor2`.
If the dimensions allow it, it returns the
  elementwise sum of `my_tensor1`-shaped `my_tensor2`, and `my_tensor1`;
else this function returns a 1D tensor that is the concatenation of the
two tensors.
Args:
my_tensor1: torch.Tensor
my_tensor2: torch.Tensor
  Returns:
output: torch.Tensor
Concatenated tensor.
"""
################################################
## TODO for students: complete functionB
raise NotImplementedError("Student exercise: complete function C")
################################################
# TODO check we can reshape `my_tensor2` into the shape of `my_tensor1`
if ...:
# TODO reshape `my_tensor2` into the shape of `my_tensor1`
my_tensor2 = ...
# TODO sum the two tensors
output = ...
else:
# TODO flatten both tensors
my_tensor1 = ...
my_tensor2 = ...
# TODO concatenate the two tensors in the correct dimension
output = ...
return output
# add timing to airtable
atform.add_event('Coding Exercise 2.3: Manipulating Tensors')
## Implement the functions above and then uncomment the following lines to test your code
# print(functionA(torch.tensor([[1, 1], [1, 1]]), torch.tensor([[1, 2, 3], [1, 2, 3]])))
# print(functionB(torch.tensor([[2, 3], [-1, 10]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0, 2]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0]])))
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_ea1718cb.py)

```
tensor([24, 24])
tensor([[ 0,  2],
        [ 1,  3],
        [ 2, -1],
        [ 3, 10]])
tensor([[ 3,  2],
        [-1,  5]])
tensor([ 1, -1, -1,  3,  2,  3,  0])
```

Section 2.4: GPUs
###Code
# @title Video 6: GPU vs CPU
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1nM4y1K7qx", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"9Mc9GFUtILY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 6: GPU vs CPU')
display(out)
###Output
_____no_output_____
###Markdown
By default, when we create a tensor it will *not* live on the GPU!
###Code
x = torch.randn(10)
print(x.device)
###Output
_____no_output_____
###Markdown
When using Colab notebooks, you will not have access to a GPU by default. In order to start using GPUs we need to request one. We can do this by going to the Runtime tab at the top of the page. By following Runtime -> Change runtime type and selecting "GPU" from the Hardware accelerator dropdown list, we can start playing with sending tensors to GPUs.

Once you have done this your runtime will restart and you will need to rerun the first setup cell to reimport PyTorch. Then proceed to the next cell.

(For more information on the GPU usage policy, see the appendix.)

**Now we have a GPU**

The cell below should return True.
###Code
print(torch.cuda.is_available())
###Output
_____no_output_____
###Markdown
CUDA is an API developed by Nvidia for interfacing with GPUs. PyTorch provides us with a layer of abstraction, and allows us to launch CUDA kernels using pure Python.

*NOTE: I am assuming that GPU stuff might be covered in more detail on another day, but there could be a bit more detail here.*

In short, we get the power of parallelising our tensor computations on GPUs, whilst only writing (relatively) simple Python!

Here, we define the function `set_device`, which returns the device used in the notebook, i.e., `cpu` or `cuda`. Unless otherwise specified, we use this function at the top of every tutorial, and we store the device variable like this:

```python
DEVICE = set_device()
```

Let's define the function using the PyTorch package `torch.cuda`, which is lazily initialized, so we can always import it, and use `is_available()` to determine if our system supports CUDA.
###Code
def set_device():
device = "cuda" if torch.cuda.is_available() else "cpu"
if device != "cuda":
print("GPU is not enabled in this notebook. \n"
"If you want to enable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `GPU` from the dropdown menu")
else:
print("GPU is enabled in this notebook. \n"
"If you want to disable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `None` from the dropdown menu")
return device
###Output
_____no_output_____
###Markdown
Let's make some CUDA tensors!
###Code
# common device agnostic way of writing code that can run on cpu OR gpu
# that we provide for you in each of the tutorials
DEVICE = set_device()
# we can specify a device when we first create our tensor
x = torch.randn(2, 2, device=DEVICE)
print(x.dtype)
print(x.device)
# we can also use the .to() method to change the device a tensor lives on
y = torch.randn(2, 2)
print(f"y before calling to() | device: {y.device} | dtype: {y.type()}")
y = y.to(DEVICE)
print(f"y after calling to() | device: {y.device} | dtype: {y.type()}")
###Output
_____no_output_____
###Markdown
**Operations between cpu tensors and cuda tensors**

Note that the type of the tensor changed after calling ```.to()```. What happens if we try and perform operations on tensors that live on different devices?
###Code
x = torch.tensor([0, 1, 2], device=DEVICE)
y = torch.tensor([3, 4, 5], device="cpu")
# Uncomment the following line and run this cell
# z = x + y
###Output
_____no_output_____
###Markdown
We cannot combine cuda tensors and cpu tensors in this fashion. If we want to compute an operation that combines tensors on different devices, we need to move them first! We can use the `.to()` method as before, or the `.cpu()` and `.cuda()` methods. Note that using `.cuda()` will throw an error if CUDA is not enabled on your machine.

Generally in this course all deep learning is done on the GPU and other computation is done on the CPU, so sometimes we have to pass things back and forth; you will see us call these conversion methods often.
###Code
x = torch.tensor([0, 1, 2], device=DEVICE)
y = torch.tensor([3, 4, 5], device="cpu")
z = torch.tensor([6, 7, 8], device=DEVICE)
# moving to cpu
x = x.to("cpu") # alternatively, you can use x = x.cpu()
print(x + y)
# moving to gpu
y = y.to(DEVICE) # alternatively, you can use y = y.cuda()
print(y + z)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.4: Just how much faster are GPUs?

Below is a simple function. Complete the second function, such that it performs the same operations as the first function, but entirely on the GPU. We will use the helper function `timeFun(f, dim, iterations, device)`.
###Code
dim = 10000
iterations = 1
def simpleFun(dim, device):
"""
Args:
dim: integer
device: "cpu" or "cuda:0"
Returns:
Nothing.
"""
###############################################
## TODO for students: recreate the above function, but
## ensure all computation happens on the GPU
raise NotImplementedError("Student exercise: fill in the missing code to create the tensors")
###############################################
x = ...
y = ...
z = ...
x = ...
y = ...
del x
del y
del z
## TODO: Implement the function above and uncomment the following lines to test your code
# timeFun(f=simpleFun, dim=dim, iterations=iterations)
# timeFun(f=simpleFun, dim=dim, iterations=iterations, device=DEVICE)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_032dcba8.py)

Sample output (depends on your hardware):

```
time taken for 1 iterations of simpleFun(10000): 28.50481
time taken for 1 iterations of simpleFunGPU(10000): 0.91102
```

**Discuss!**

Try and reduce the dimensions of the tensors and increase the iterations. You can get to a point where the cpu only function is faster than the GPU function. Why might this be?

Section 2.5: Datasets and Dataloaders
###Code
# @title Video 7: Getting Data
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1744y127SQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"LSkjPM1gFu0", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 7: Getting Data')
display(out)
###Output
_____no_output_____
###Markdown
When training neural network models you will be working with large amounts of data. Fortunately, PyTorch offers some great tools that help you organize and manipulate your data samples.
###Code
# Import dataset and dataloaders related packages
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
from torchvision.transforms import Compose, Grayscale
###Output
_____no_output_____
###Markdown
**Datasets**The `torchvision` package gives you easy access to many of the publicly available datasets. Let's load the [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset, which contains color images of 10 different classes, like vehicles and animals.Creating an object of type `datasets.CIFAR10` will automatically download and load all images from the dataset. The resulting data structure can be treated as a list containing data samples and their corresponding labels.
###Code
# Download and load the images from the CIFAR10 dataset
cifar10_data = datasets.CIFAR10(
root="data", # path where the images will be stored
download=True, # all images should be downloaded
transform=ToTensor() # transform the images to tensors
)
# Print the number of samples in the loaded dataset
print(f"Number of samples: {len(cifar10_data)}")
print(f"Class names: {cifar10_data.classes}")
###Output
_____no_output_____
###Markdown
We have 50000 samples loaded. Now let's take a look at one of them in detail. Each sample consists of an image and its corresponding label.
###Code
# Choose a random sample
random.seed(2021)
image, label = cifar10_data[random.randint(0, len(cifar10_data) - 1)]
print(f"Label: {cifar10_data.classes[label]}")
print(f"Image size: {image.shape}")
###Output
_____no_output_____
###Markdown
Color images are modeled as 3-dimensional tensors. The first dimension corresponds to the channels (C) of the image (in this case we have RGB images). The second dimension is the height (H) of the image and the third is the width (W). We can denote this image format as C × H × W. Coding Exercise 2.5: Display an image from the datasetLet's try to display the image using `matplotlib`. The code below will not work, because `imshow` expects to have the image in a different format - $H \times W \times C$. You need to reorder the dimensions of the tensor using the `permute` method of the tensor. PyTorch `torch.permute(*dims)` rearranges the original tensor according to the desired ordering and returns the tensor with its dimensions permuted. The size of the returned tensor remains the same as that of the original.**Code hint:**

```python
# create a tensor of size 2 x 4
input_var = torch.randn(2, 4)

# print its size and the tensor
print(input_var.size())
print(input_var)

# dimensions permuted
input_var = input_var.permute(1, 0)

# print its size and the permuted tensor
print(input_var.size())
print(input_var)
```
###Code
# TODO: Uncomment the following line to see the error that arises from the current image format
# plt.imshow(image)
# TODO: Comment the above line and fix this code by reordering the tensor dimensions
# plt.imshow(image.permute(...))
# plt.show()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_b04bd357.py)*Example output:*
###Code
#@title Video 8: Train and Test
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1rV411H7s5", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JokSIuPs-ys", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 8: Train and Test')
display(out)
###Output
_____no_output_____
###Markdown
**Training and Test Datasets**When loading a dataset, you can specify if you want to load the training or the test samples using the `train` argument. We can load the training and test datasets separately. For simplicity, we will not make use of the train/test split today, but this topic will be addressed in the coming days.
###Code
# Load the training samples
training_data = datasets.CIFAR10(
root="data",
train=True,
download=True,
transform=ToTensor()
)
# Load the test samples
test_data = datasets.CIFAR10(
root="data",
train=False,
download=True,
transform=ToTensor()
)
# @title Video 9: Data Augmentation - Transformations
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV19B4y1N77t", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"sjegA9OBUPw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 9: Data Augmentation - Transformations')
display(out)
###Output
_____no_output_____
###Markdown
**Dataloader**Another important concept is the `Dataloader`. It is a wrapper around the `Dataset` that splits it into minibatches (important for training the neural network) and makes the data iterable. The `shuffle` argument is used to shuffle the order of the samples across the minibatches.
###Code
# Create dataloaders with a batch size of 64 and shuffling enabled
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
*Reproducibility:* `DataLoader` will reseed its workers following the *Randomness in multi-process data loading* algorithm. Use `worker_init_fn()` and a `generator` to preserve reproducibility:

```python
def seed_worker(worker_id):
  worker_seed = torch.initial_seed() % 2**32
  numpy.random.seed(worker_seed)
  random.seed(worker_seed)

g_seed = torch.Generator()
g_seed.manual_seed(my_seed)

DataLoader(
    train_dataset,
    batch_size=batch_size,
    num_workers=num_workers,
    worker_init_fn=seed_worker,
    generator=g_seed
    )
```

**Note:** For the `seed_worker` to have an effect, `num_workers` should be 2 or more.

We can now query the next batch from the data loader and inspect it. For this we need to convert the dataloader object to a Python iterator using the function `iter`, and then we can query the next batch using the function `next`. We can now see that we have a 4D tensor. This is because we have 64 images in the batch ($B$) and each image has 3 dimensions: channels ($C$), height ($H$) and width ($W$). So, the size of the 4D tensor is $B \times C \times H \times W$.
###Code
# Load the next batch
batch_images, batch_labels = next(iter(train_dataloader))
print('Batch size:', batch_images.shape)
# Display the first image from the batch
plt.imshow(batch_images[0].permute(1, 2, 0))
plt.show()
###Output
_____no_output_____
###Markdown
**Transformations**Another useful feature when loading a dataset is applying transformations on the data: color conversions, normalization, cropping, rotation, etc. There are many predefined transformations in the `torchvision.transforms` package, and you can also combine them using the `Compose` transform. Check out the [pytorch documentation](https://pytorch.org/vision/stable/transforms.html) for details.
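As a quick illustrative sketch (not part of the exercise below; the dummy image and the particular transforms are our own choices), composing transforms might look like this:

```python
import numpy as np
from PIL import Image
from torchvision.transforms import Compose, ToTensor, Normalize

# a tiny dummy RGB image, just so we have something to transform
dummy = Image.fromarray(np.uint8(np.random.rand(32, 32, 3) * 255))

# chain two transforms: PIL image -> tensor in [0, 1], then rescale to roughly [-1, 1]
transform = Compose([
    ToTensor(),
    Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

out = transform(dummy)
print(out.shape, out.min().item(), out.max().item())
```

Coding Exercise 2.6: Load the CIFAR10 dataset as grayscale imagesThe goal of this exercise is to load the images from the CIFAR10 dataset as grayscale images. Note that we rerun the `set_seed` function to ensure reproducibility.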
###Code
def my_data_load():
###############################################
  ## TODO for students: load the CIFAR10 data using a transform that
  ## converts the images to grayscale tensors
raise NotImplementedError("Student exercise: fill in the missing code to load the data")
###############################################
## TODO Load the CIFAR10 data using a transform that converts the images to grayscale tensors
data = datasets.CIFAR10(...,
transform=...)
# Display a random grayscale image
  image, label = data[random.randint(0, len(data) - 1)]
plt.imshow(image.squeeze(), cmap="gray")
plt.show()
return data
set_seed(seed=2021)
## After implementing the above code, uncomment the following lines to test your code
# data = my_data_load()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_6052d728.py)*Example output:* --- Section 3: Neural Networks*Time estimate: ~1 hour 30 mins (excluding movie)* Now it's time for you to create your first neural network using PyTorch. This section will walk you through the process of:- Creating a simple neural network model- Training the network- Visualizing the results of the network- Tweaking the network
###Code
# @title Video 10: CSV Files
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1xy4y1T7kv", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JrC_UAJWYKU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 10: CSV Files')
display(out)
###Output
_____no_output_____
###Markdown
Section 3.1: Data LoadingFirst we need some sample data to train our network on. You can use the function below to generate an example dataset consisting of 2D points along two interleaving half circles. The data will be stored in a file called `sample_data.csv`. You can inspect the file directly in Colab by going to Files on the left side and opening the CSV file.
###Code
# @title Generate sample data
# @markdown we used `scikit-learn` module
from sklearn.datasets import make_moons
# Create a dataset of 256 points with a little noise
X, y = make_moons(256, noise=0.1)
# Store the data as a Pandas data frame and save it to a CSV file
df = pd.DataFrame(dict(x0=X[:,0], x1=X[:,1], y=y))
df.to_csv('sample_data.csv')
###Output
_____no_output_____
###Markdown
Now we can load the data from the CSV file using the Pandas library. Pandas provides many functions for reading files in various formats. When loading data from a CSV file, we can reference the columns directly by their names.
###Code
# Load the data from the CSV file in a Pandas DataFrame
data = pd.read_csv("sample_data.csv")
# Create a 2D numpy array from the x0 and x1 columns
X_orig = data[["x0", "x1"]].to_numpy()
# Create a 1D numpy array from the y column
y_orig = data["y"].to_numpy()
# Print the sizes of the generated 2D points X and the corresponding labels Y
print(f"Size X:{X_orig.shape}")
print(f"Size y:{y_orig.shape}")
# Visualize the dataset. The color of the points is determined by the labels `y_orig`.
plt.scatter(X_orig[:, 0], X_orig[:, 1], s=40, c=y_orig)
plt.show()
###Output
_____no_output_____
###Markdown
**Prepare Data for PyTorch**Now let's prepare the data in a format suitable for PyTorch - convert everything into tensors.
###Code
# Initialize the device variable
DEVICE = set_device()
# Convert the 2D points to a float32 tensor
X = torch.tensor(X_orig, dtype=torch.float32)
# Upload the tensor to the device
X = X.to(DEVICE)
print(f"Size X:{X.shape}")
# Convert the labels to a long integer tensor
y = torch.from_numpy(y_orig).type(torch.LongTensor)
# Upload the tensor to the device
y = y.to(DEVICE)
print(f"Size y:{y.shape}")
###Output
_____no_output_____
###Markdown
Section 3.2: Create a Simple Neural Network
###Code
# @title Video 11: Generating the Neural Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1fK4y1M74a", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"PwSzRohUvck", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 11: Generating the Neural Network')
display(out)
###Output
_____no_output_____
###Markdown
For this example we want to have a simple neural network consisting of 3 layers:- 1 input layer of size 2 (our points have 2 coordinates)- 1 hidden layer of size 16 (you can play with different numbers here)- 1 output layer of size 2 (we want to have the scores for the two classes). During the course you will deal with different kinds of neural networks. On Day 2 we will focus on linear networks, but you will work with some more complicated architectures in the next days. The example here is meant to demonstrate the process of creating and training a neural network end-to-end.**Programming the Network**PyTorch provides a base class for all neural network modules called [`nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html). You need to inherit from `nn.Module` and implement some important methods:`__init__`In the `__init__` method you need to define the structure of your network. Here you will specify what layers the network will consist of, what activation functions will be used, etc.`forward`All neural network modules need to implement the `forward` method. It specifies the computations the network needs to do when data is passed through it.`predict`This is not an obligatory method of a neural network module, but it is good practice if you want to quickly get the most likely label from the network. It calls the `forward` method and chooses the label with the highest score.`train`This is also not an obligatory method, but it is good practice to have one. The method will be used to train the network parameters and will be implemented later in the notebook.> Note that you can use the `__call__` method of a module directly and it will invoke the `forward` method: `net()` does the same as `net.forward()`.
###Code
# Inherit from nn.Module - the base class for neural network modules provided by Pytorch
class NaiveNet(nn.Module):
# Define the structure of your network
def __init__(self):
super(NaiveNet, self).__init__()
# The network is defined as a sequence of operations
self.layers = nn.Sequential(
nn.Linear(2, 16), # Transformation from the input to the hidden layer
      nn.ReLU(),        # Activation function: ReLU is a simple, widely used non-linearity that is cheap to compute.
                        # It returns 0 for any negative input and returns any positive value x unchanged.
nn.Linear(16, 2), # Transformation from the hidden to the output layer
)
# Specify the computations performed on the data
def forward(self, x):
# Pass the data through the layers
return self.layers(x)
# Choose the most likely label predicted by the network
def predict(self, x):
# Pass the data through the networks
output = self.forward(x)
# Choose the label with the highest score
return torch.argmax(output, 1)
# Train the neural network (will be implemented later)
def train(self, X, y):
pass
###Output
_____no_output_____
###Markdown
**Check that your network works**Create an instance of your model and visualize it
###Code
# Create new NaiveNet and transfer it to the device
model = NaiveNet().to(DEVICE)
# Print the structure of the network
print(model)
###Output
_____no_output_____
###Markdown
Coding Exercise 3.2: Classify some samplesNow let's pass some of the points of our dataset through the network and see if it works. You should not expect the network to actually classify the points correctly, because it has not been trained yet. The goal here is just to get some experience with the data structures that are passed to the forward and predict methods and their results.
###Code
## Get the samples
# X_samples = ...
# print("Sample input:\n", X_samples)
## Do a forward pass of the network
# output = ...
# print("\nNetwork output:\n", output)
## Predict the label of each point
# y_predicted = ...
# print("\nPredicted labels:\n", y_predicted)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_af8ae0ff.py) ```Sample input: tensor([[ 0.9066, 0.5052], [-0.2024, 1.1226], [ 1.0685, 0.2809], [ 0.6720, 0.5097], [ 0.8548, 0.5122]], device='cuda:0')Network output: tensor([[ 0.1543, -0.8018], [ 2.2077, -2.9859], [-0.5745, -0.0195], [ 0.1924, -0.8367], [ 0.1818, -0.8301]], device='cuda:0', grad_fn=)Predicted labels: tensor([0, 0, 1, 0, 0], device='cuda:0')``` Section 3.3: Train Your Neural Network
###Code
# @title Video 12: Train the Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1v54y1n7CS", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"4MIqnE4XPaA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 12: Train the Network')
display(out)
###Output
_____no_output_____
###Markdown
Now it is time to train your network on your dataset. Don't worry if you don't fully understand everything yet - we will cover training in much more detail in the coming days. For now, the goal is just to see your network in action! You will usually implement the `train` method directly when implementing your class `NaiveNet`. Here, we will implement it as a function outside of the class in order to have it in a separate cell.
###Code
# @title Helper function to plot the decision boundary
# Code adapted from this notebook: https://jonchar.net/notebooks/Artificial-Neural-Network-with-Keras/
from pathlib import Path
def plot_decision_boundary(model, X, y, device):
# Transfer the data to the CPU
X = X.cpu().numpy()
y = y.cpu().numpy()
# Check if the frames folder exists and create it if needed
frames_path = Path("frames")
if not frames_path.exists():
frames_path.mkdir()
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
  # Predict the function value for the whole grid
grid_points = np.c_[xx.ravel(), yy.ravel()]
grid_points = torch.from_numpy(grid_points).type(torch.FloatTensor)
Z = model.predict(grid_points.to(device)).cpu().numpy()
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.binary)
# Implement the train function given a training dataset X and corresponding labels y
def train(model, X, y):
# The Cross Entropy Loss is suitable for classification problems
loss_function = nn.CrossEntropyLoss()
# Create an optimizer (Stochastic Gradient Descent) that will be used to train the network
learning_rate = 1e-2
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
# Number of epochs
epochs = 15000
# List of losses for visualization
losses = []
for i in range(epochs):
# Pass the data through the network and compute the loss
# We'll use the whole dataset during the training instead of using batches
    # in order to keep the code simple for now.
y_logits = model.forward(X)
loss = loss_function(y_logits, y)
# Clear the previous gradients and compute the new ones
optimizer.zero_grad()
loss.backward()
# Adapt the weights of the network
optimizer.step()
# Store the loss
losses.append(loss.item())
# Print the results at every 1000th epoch
if i % 1000 == 0:
print(f"Epoch {i} loss is {loss.item()}")
plot_decision_boundary(model, X, y, DEVICE)
plt.savefig('frames/{:05d}.png'.format(i))
return losses
# Create a new network instance and train it
model = NaiveNet().to(DEVICE)
losses = train(model, X, y)
###Output
_____no_output_____
###Markdown
**Plot the loss during training**Plot the loss during the training to see how it reduces and converges.
###Code
plt.plot(np.linspace(1, len(losses), len(losses)), losses)
plt.xlabel("Epoch")
plt.ylabel("Loss")
# @title Visualize the training process
# @markdown ### Execute this cell!
!pip install imageio --quiet
!pip install pathlib --quiet
import imageio
from IPython.core.interactiveshell import InteractiveShell
from IPython.display import Image, display
from pathlib import Path
InteractiveShell.ast_node_interactivity = "all"
# Make a list with all images
images = []
for i in range(10):
filename = "frames/0"+str(i)+"000.png"
images.append(imageio.imread(filename))
# Save the gif
imageio.mimsave('frames/movie.gif', images)
gifPath = Path("frames/movie.gif")
with open(gifPath,'rb') as f:
display(Image(data=f.read(), format='png'))
# @title Video 13: Play with it
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Cq4y1W7BH", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"_GGkapdOdSY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 13: Play with it')
display(out)
###Output
_____no_output_____
###Markdown
Exercise 3.3: Tweak your NetworkYou can now play around with the network a little bit to get a feeling for what the different parameters are doing. Here are some ideas for what you could try:- Increase or decrease the number of epochs for training- Increase or decrease the size of the hidden layer- Add one additional hidden layer (a sketch of one possible variant is shown below)Can you get the network to better fit the data?
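The following is a minimal sketch of one way to add an extra hidden layer; the class name `DeeperNaiveNet` and the layer sizes are arbitrary illustrative choices, not the official solution:

```python
import torch
from torch import nn

class DeeperNaiveNet(nn.Module):
  def __init__(self, hidden_size=32):
    super().__init__()
    self.layers = nn.Sequential(
      nn.Linear(2, hidden_size),
      nn.ReLU(),
      nn.Linear(hidden_size, hidden_size),  # the extra hidden layer
      nn.ReLU(),
      nn.Linear(hidden_size, 2),
    )

  def forward(self, x):
    return self.layers(x)

  def predict(self, x):
    return torch.argmax(self.forward(x), 1)

# Uncomment to train the variant with the same `train` function as before
# deeper_model = DeeperNaiveNet().to(DEVICE)
# deeper_losses = train(deeper_model, X, y)
```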
###Code
# @title Video 14: XOR Widget
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1mB4y1N7QS", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"oTr1nE2rCWg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 14: XOR Widget')
display(out)
###Output
_____no_output_____
###Markdown
Exclusive OR (XOR) logical operation gives a true (`1`) output when the number of true inputs is odd. That is, a true output results if one, and only one, of the inputs to the gate is true. If both inputs are false (`0`) or both are true (`1`), a false output results. Mathematically speaking, XOR represents the inequality function, i.e., the output is true if the inputs are not alike; otherwise, the output is false.In case of two inputs ($X$ and $Y$) the following truth table is applied:\begin{array}{ccc}X & Y & \text{XOR} \\\hline0 & 0 & 0 \\0 & 1 & 1 \\1 & 0 & 1 \\1 & 1 & 0 \\\end{array}Here, with `0`, we denote `False`, and with `1` we denote `True` in boolean terms. Interactive Demo 3.3: Solving XORHere we use an open source and famous visualization widget developed by the TensorFlow team, available [here](https://github.com/tensorflow/playground).* Play with the widget and observe that you cannot solve the continuous XOR dataset.* Now add one hidden layer with three units, play with the widget, and set weights by hand to solve this dataset perfectly.For the second part, you should set the weights by clicking on the connections and either type the value or use the up and down keys to change it by one increment. You could also do the same for the biases by clicking on the tiny square to each neuron's bottom left.Even though there are infinitely many solutions, a neat solution when $f(x)$ is ReLU is: \begin{equation} y = f(x_1)+f(x_2)-f(x_1+x_2)\end{equation}Try to set the weights and biases to implement this function after you have played around enough :)
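As a small illustrative check (our own sketch; we assume the playground's two classes sit at positive and negative coordinates, so we encode "true" as $+1$ and "false" as $-1$), you can verify this construction numerically:

```python
import torch
import torch.nn.functional as F

def xor_relu(x1, x2):
  # y = f(x1) + f(x2) - f(x1 + x2), with f = ReLU
  return F.relu(x1) + F.relu(x2) - F.relu(x1 + x2)

# encode "false" as -1 and "true" as +1, and check all four input combinations
for a in (-1.0, 1.0):
  for b in (-1.0, 1.0):
    y = xor_relu(torch.tensor(a), torch.tensor(b))
    print(f"x1={a:+.0f}, x2={b:+.0f} -> y={y.item():.0f}")
```

The output is $1$ exactly when the two inputs have different signs, which is the XOR pattern.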
###Code
# @markdown ###Play with the parameters to solve XOR
from IPython.display import HTML
HTML('<iframe width="1020" height="660" src="https://playground.arashash.com/#activation=relu&batchSize=10&dataset=xor®Dataset=reg-plane&learningRate=0.03®ularizationRate=0&noise=0&networkShape=&seed=0.91390&showTestData=false&discretize=false&percTrainData=90&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false" allowfullscreen></iframe>')
# @markdown Do you think we can solve the discrete XOR (only 4 possibilities) with only 2 hidden units?
w1_min_xor = 'Select' #@param ['Select', 'Yes', 'No']
if w1_min_xor == 'No':
print("Correct!")
else:
print("How about giving it another try?")
###Output
_____no_output_____
###Markdown
--- Section 4: Ethics And Course Info
###Code
# @title Video 15: Ethics
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Hw41197oB", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Kt6JLi3rUFU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Video 16: Be a group
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1j44y1272h", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Sfp6--d_H1A", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Video 17: Syllabus
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1iB4y1N7uQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"cDvAqG_hAvQ", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Meet our lecturers:Week 1: the building blocks* [Konrad Kording](https://kordinglab.com)* [Andrew Saxe](https://www.saxelab.org/)* [Surya Ganguli](https://ganguli-gang.stanford.edu/)* [Ioannis Mitliagkas](http://mitliagkas.github.io/)* [Lyle Ungar](https://www.cis.upenn.edu/~ungar/)Week 2: making things work* [Alona Fyshe](https://webdocs.cs.ualberta.ca/~alona/)* [Alexander Ecker](https://eckerlab.org/)* [James Evans](https://sociology.uchicago.edu/directory/james-evans)* [He He](https://hhexiy.github.io/)* [Vikash Gilja](https://tnel.ucsd.edu/bio) and [Akash Srivastava](https://akashgit.github.io/)Week 3: more magic* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)* [Jane Wang](http://www.janexwang.com/) and [Feryal Behbahani](https://feryal.github.io/)* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)* [Josh Vogelstein](https://jovo.me/) and [Vincenzo Lomonaco](https://www.vincenzolomonaco.com/)Now, go to the [visualization of ICLR papers](https://iclr.cc/virtual/2021/paper_vis.html). Read a few abstracts. Look at the various clusters. Where do you see yourself in this map? --- Submit to Airtable
###Code
# @title Video 18: Submission info
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1e44y127ti", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JwTn7ej2dq8", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
This is Darryl, the Deep Learning Dapper Lion, and he's here to teach you about content submission to Airtable. At the end of each tutorial there will be an Airtable Submission Cell. Run the cell to generate the Airtable submission button and click on it to submit your information to Airtable. If it is the last tutorial of the day, your button will look like this and take you to the end-of-day survey: otherwise it will look like this: It is critical that you push the submit button for every tutorial you run. Even if you don't finish the tutorial, still submit! Submitting is the only way we can verify that you attempted each tutorial, which is critical for the award of your completion certificate at the end of the course. Finally, we try to keep the Airtable code as hidden as possible, but if you ever see any calls to `atform` such as `atform.add_event()` in the coding exercises, just know that it is for saving Airtable information only. It will not affect the code that is being run around it in any way, so please do not modify, comment out, or worry about any of those lines of code. Now, let's try submitting today's course to Airtable by running the next cell and clicking the button when it appears.
###Code
# @title Airtable Submission Link
from IPython import display
display.HTML(
f"""
<div>
<a href= "{atform.url()}" target="_blank">
<img src="https://github.com/NeuromatchAcademy/course-content-dl/blob/main/tutorials/static/SurveyButton.png?raw=1"
alt="button link to survey" style="width:410px"></a>
</div>""" )
###Output
_____no_output_____
###Markdown
--- Bonus - 60 years of Machine Learning Research in one Plot by [Hendrik Strobelt](http://hendrik.strobelt.com) (MIT-IBM Watson AI Lab) with support from Benjamin Hoover.In this notebook we visualize a subset* of 3,300 articles retrieved from the AllenAI [S2ORC dataset](https://github.com/allenai/s2orc). We represent each paper by a position that is the output of a dimensionality reduction method applied to a vector representation of each paper. The vector representation is the output of a neural network.*The selection is heavily biased by the keywords and methodology we used to filter. Please see the details section to learn about what we did.
###Code
# @title Import `altair` and load the data
!pip install altair vega_datasets --quiet
import requests
import altair as alt  # altair is used for defining the data visualizations
# Source data files
# Position data file maps ID to x,y positions
# original link: http://gltr.io/temp/ml_regexv1_cs_ma_citation+_99perc.pos_umap_cosine_100_d0.1.json
POS_FILE = 'https://osf.io/qyrfn/download'
# original link: http://gltr.io/temp/ml_regexv1_cs_ma_citation+_99perc_clean.csv
# Metadata file maps ID to title, abstract, author,....
META_FILE = 'https://osf.io/vfdu6/download'
# data loading and wrangling
def load_data():
positions = pd.read_json(POS_FILE)
positions[['x', 'y']] = positions['pos'].to_list()
meta = pd.read_csv(META_FILE)
return positions.merge(meta, left_on='id', right_on='paper_id')
# load data
data = load_data()
# @title Define Visualization using Altair
YEAR_PERIOD = "quinquennial" # @param
selection = alt.selection_multi(fields=[YEAR_PERIOD], bind='legend')
data[YEAR_PERIOD] = (data["year"] / 5.0).apply(np.floor) * 5
chart = alt.Chart(data[["x", "y", "authors", "title", YEAR_PERIOD, "citation_count"]], width=800,
height=800).mark_circle(radius=2, opacity=0.2).encode(
alt.Color(YEAR_PERIOD+':O',
scale=alt.Scale(scheme='viridis', reverse=False, clamp=True, domain=list(range(1955,2020,5))),
# legend=alt.Legend(title='Total Records')
),
alt.Size('citation_count',
scale=alt.Scale(type="pow", exponent=1, range=[15, 300])
),
alt.X('x:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
alt.Y('y:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
tooltip=['title', 'authors'],
# size='citation_count',
# color="decade:O",
opacity=alt.condition(selection, alt.value(.8), alt.value(0.2)),
).add_selection(
selection
).interactive()
###Output
_____no_output_____
###Markdown
Let's look at the visualization. Each dot represents one paper. Dots that are close together correspond to papers that are more closely related than distant ones. The color indicates the 5-year period in which the paper was published. The dot size indicates the citation count (within the S2ORC corpus) as of July 2020. The view is **interactive** and allows for three main interactions. Try them and play around: 1. hover over a dot to see a tooltip (title, author); 2. select a year in the legend (right) to filter dots; 3. zoom in/out with scroll (double click resets the view).
###Code
chart
###Output
_____no_output_____
###Markdown
QuestionsBy playing around, can you find some answers to the following questions?1. Can you find topical clusters? What cluster might occur because of a filtering error?2. Can you see a temporal trend in the data and clusters?3. Can you determine when deep learning methods started booming?4. Can you find the key papers that were written before the DL "winter" that define milestones for a cluster? (tip: look for large dots of a different color) MethodsHere is what we did:1. Filtered all papers that fulfilled the criteria: - are categorized as `Computer Science` or `Mathematics` - one of the following keywords appears in the title or abstract: `"machine learning|artificial intelligence|neural network|(machine|computer) vision|perceptron|network architecture| RNN | CNN | LSTM | BLEU | MNIST | CIFAR |reinforcement learning|gradient descent| Imagenet "` 2. per year, removed all papers that are below the 99th percentile of citation count in that year 3. embedded each paper by using abstract+title in the SPECTER model 4. projected based on the embedding using UMAP 5. visualized using Altair
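Step 2 (the per-year citation filter) is easy to reproduce in pandas. The snippet below is a minimal sketch on a toy dataframe of our own making; the column names `year` and `citation_count` match the idea described above but are otherwise illustrative assumptions:

```python
import pandas as pd

# toy table with one row per paper
papers = pd.DataFrame({
    "year": [2010, 2010, 2010, 2015, 2015, 2015],
    "citation_count": [3, 50, 800, 10, 40, 900],
})

# per-year 99th percentile of citations, broadcast back onto every row
thresholds = papers.groupby("year")["citation_count"].transform(lambda s: s.quantile(0.99))

# keep only the papers at or above their year's threshold
top_papers = papers[papers["citation_count"] >= thresholds]
print(top_papers)
```

Find Authors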
###Code
# @title Edit the `AUTHOR_FILTER` variable to full text search for authors.
AUTHOR_FILTER = "Rush " # @param space at the end means "word border"
### Don't ignore case when searching...
FLAGS = 0
### uncomment to ignore case
# FLAGS = re.IGNORECASE
## --- FILTER CODE.. make it your own ---
import re
data['issel'] = data['authors'].str.contains(AUTHOR_FILTER, na=False, flags=FLAGS, )
if data['issel'].mean()<0.0000000001:
print('No match found')
## --- FROM HERE ON VIS CODE ---
alt.Chart(data[["x", "y", "authors", "title", YEAR_PERIOD, "citation_count", "issel"]], width=800,
height=800) \
.mark_circle(stroke="black", strokeOpacity=1).encode(
alt.Color(YEAR_PERIOD+':O',
scale=alt.Scale(scheme='viridis', reverse=False),
# legend=alt.Legend(title='Total Records')
),
alt.Size('citation_count',
scale=alt.Scale(type="pow", exponent=1, range=[15, 300])
),
alt.StrokeWidth('issel:Q', scale=alt.Scale(type="linear", domain=[0,1], range=[0, 2]), legend=None),
alt.Opacity('issel:Q', scale=alt.Scale(type="linear", domain=[0,1], range=[.2, 1]), legend=None),
alt.X('x:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
alt.Y('y:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
tooltip=['title', 'authors'],
).interactive()
###Output
_____no_output_____
###Markdown
Tutorial 1: PyTorch**Week 1, Day 1: Basics and PyTorch****By Neuromatch Academy**__Content creators:__ Shubh Pachchigar, Vladimir Haltakov, Matthew Sargent, Konrad Kording__Content reviewers:__ Deepak Raya, Siwei Bai, Kelson Shilling-Scrivo__Content editors:__ Anoop Kulkarni, Spiros Chavlis__Production editors:__ Arush Tagade, Spiros Chavlis **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial ObjectivesWe have a few specific objectives for this tutorial:* Learn about PyTorch and tensors* Tensor Manipulations* Data Loading* GPUs and Cuda Tensors* Train NaiveNet* Get to know your pod* Start thinking about the course as a whole
###Code
# @title Tutorial slides
# @markdown These are the slides for the videos in this tutorial today
# @markdown If you want to locally download the slides, click [here](https://osf.io/wcjrv/download)
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/wcjrv/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
--- Setup Throughout your Neuromatch tutorials, most (probably all!) notebooks contain setup cells. These cells will import the required Python packages (e.g., PyTorch, NumPy), set global or environment variables, and load in helper functions for things like plotting. In some tutorials, you will notice that we install some dependencies even if they are preinstalled on Google Colab or Kaggle. This happens because we have added automation to our repository through [GitHub Actions](https://docs.github.com/en/actions/learn-github-actions/introduction-to-github-actions). Be sure to run all of the cells in the setup section. Feel free to expand them and have a look at what you are loading in, but you should be able to fulfill the learning objectives of every tutorial without having to look at these cells. If you start building your own projects on top of this code base, we highly recommend looking at these cells in more detail.
###Code
# @title Install dependencies
!pip install pandas --quiet
!pip install git+https://github.com/NeuromatchAcademy/evaltools --quiet
from evaltools.airtable import AirtableForm
# Imports
import time
import torch
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from torch import nn
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
# @title Figure Settings
import ipywidgets as widgets
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/content-creation/main/nma.mplstyle")
# @title Helper Functions
atform = AirtableForm('appn7VdPRseSoMXEG','W1D1_T1','https://portal.neuromatchacademy.org/api/redirect/to/97e94a29-0b3a-4e16-9a8d-f6838a5bd83d')
def checkExercise1(A, B, C, D):
"""
Helper function for checking exercise.
Args:
A: torch.Tensor
B: torch.Tensor
C: torch.Tensor
D: torch.Tensor
Returns:
Nothing.
"""
errors = []
# TODO better errors and error handling
if not torch.equal(A.to(int),torch.ones(20, 21).to(int)):
errors.append(f"Got: {A} \n Expected: {torch.ones(20, 21)} (shape: {torch.ones(20, 21).shape})")
if not np.array_equal( B.numpy(),np.vander([1, 2, 3], 4)):
errors.append("B is not a tensor containing the elements of Z ")
if C.shape != (20, 21):
errors.append("C is not the correct shape ")
if not torch.equal(D, torch.arange(4, 41, step=2)):
errors.append("D does not contain the correct elements")
if errors == []:
print("All correct!")
else:
[print(e) for e in errors]
def timeFun(f, dim, iterations, device='cpu'):
iterations = iterations
t_total = 0
for _ in range(iterations):
start = time.time()
f(dim, device)
end = time.time()
t_total += end - start
if device == 'cpu':
print(f"time taken for {iterations} iterations of {f.__name__}({dim}, {device}): {t_total:.5f}")
else:
print(f"time taken for {iterations} iterations of {f.__name__}({dim}, {device}): {t_total:.5f}")
###Output
_____no_output_____
###Markdown
**Important note: Google Colab users** *Scratch Code Cells*: If you want to quickly try something out or take a look at the data, you can use scratch code cells. They allow you to run Python code, but will not mess up the structure of your notebook. To open a new scratch cell, go to *Insert* → *Scratch code cell*. Section 1: Welcome to Neuromatch Deep learning course*Time estimate: ~25mins*
###Code
# @title Video 1: Welcome and History
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Av411n7oL", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"ca21SNqt78I", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing
atform.add_event('Video 1: Welcome and History')
display(out)
###Output
_____no_output_____
###Markdown
This will be an intensive 3-week adventure. We will all learn Deep Learning. In a group. Groups need standards. Read our [Code of Conduct](https://docs.google.com/document/d/1eHKIkaNbAlbx_92tLQelXnicKXEcvFzlyzzeWjEtifM/edit?usp=sharing).
###Code
# @title Video 2: Why DL is cool
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1gf4y1j7UZ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"l-K6495BN-4", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 2: Why DL is cool')
display(out)
###Output
_____no_output_____
###Markdown
**Describe what you hope to get out of this course in about 100 words.** --- Section 2: The Basics of PyTorch*Time estimate: ~2 hours 05 mins* PyTorch is a Python-based scientific computing package targeted at two sets of audiences:

- A replacement for NumPy to use the power of GPUs
- A deep learning platform that provides significant flexibility and speed

At its core, PyTorch provides a few key features:

- A multidimensional [Tensor](https://pytorch.org/docs/stable/tensors.html) object, similar to a [NumPy Array](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html) but with GPU acceleration.
- An optimized **autograd** engine for automatically computing derivatives.
- A clean, modular API for building and deploying **deep learning models**.

You can find more information about PyTorch in the appendix.
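The autograd engine is what makes training possible: you build a computation out of tensors, and PyTorch can differentiate it for you. Here is a minimal sketch (autograd is not covered in detail in this tutorial, and the specific numbers are just illustrative):

```python
import torch

# a tensor that tracks gradients
x = torch.tensor([2.0, 3.0], requires_grad=True)

# build a computation from x
y = (x ** 2).sum()   # y = x0^2 + x1^2

# ask autograd for dy/dx
y.backward()
print(x.grad)        # tensor([4., 6.]), i.e. 2 * x
```

Section 2.1: Creating Tensors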
###Code
# @title Video 3: Making Tensors
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Rw411d7Uy", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"jGKd_4tPGrw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 3: Making Tensors')
display(out)
###Output
_____no_output_____
###Markdown
There are various ways of creating tensors, and when doing any real deep learning project we will usually have to do so. **Construct tensors directly:**---
###Code
# we can construct a tensor directly from some common python iterables,
# such as lists and tuples; nested iterables can also be handled as long as
# the dimensions make sense
# tensor from a list
a = torch.tensor([0, 1, 2])
#tensor from a tuple of tuples
b = ((1.0, 1.1), (1.2, 1.3))
b = torch.tensor(b)
# tensor from a numpy array
c = np.ones([2, 3])
c = torch.tensor(c)
print(f"Tensor a: {a}")
print(f"Tensor b: {b}")
print(f"Tensor c: {c}")
###Output
_____no_output_____
###Markdown
**Some common tensor constructors:**---
###Code
# the numerical arguments we pass to these constructors
# determine the shape of the output tensor
x = torch.ones(5, 3)
y = torch.zeros(2)
z = torch.empty(1, 1, 5)
print(f"Tensor x: {x}")
print(f"Tensor y: {y}")
print(f"Tensor z: {z}")
###Output
_____no_output_____
###Markdown
Notice that ```.empty()``` does not return zeros, but seemingly random small numbers. Unlike ```.zeros()```, which initialises the elements of the tensor with zeros, ```.empty()``` just allocates the memory. It is hence a bit faster if you are looking to just create a tensor. **Creating random tensors and tensors like other tensors:**---
###Code
# there are also constructors for random numbers
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
# there are also constructors that allow us to construct
# a tensor according to the above constructors, but with
# dimensions equal to another tensor
c = torch.zeros_like(a)
d = torch.rand_like(c)
print(f"Tensor a: {a}")
print(f"Tensor b: {b}")
print(f"Tensor c: {c}")
print(f"Tensor d: {d}")
###Output
_____no_output_____
###Markdown
*Reproducibility*: - PyTorch random number generator: You can use `torch.manual_seed()` to seed the RNG for all devices (both CPU and CUDA)

```python
import torch
torch.manual_seed(0)
```

- For custom operators, you might need to set the python seed as well:

```python
import random
random.seed(0)
```

- Random number generators in other libraries:

```python
import numpy as np
np.random.seed(0)
```

Here, we define for you a function called `set_seed` that does the job for you!
###Code
def set_seed(seed=None, seed_torch=True):
"""
Function that controls randomness. NumPy and random modules must be imported.
Args:
seed : Integer
A non-negative integer that defines the random state. Default is `None`.
seed_torch : Boolean
If `True` sets the random seed for pytorch tensors, so pytorch module
must be imported. Default is `True`.
Returns:
Nothing.
"""
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
###Output
_____no_output_____
###Markdown
Now, let's use the `set_seed` function in the previous example. Execute the cell multiple times to verify that the numbers printed are always the same.
###Code
def simplefun(seed=True, my_seed=None):
if seed:
set_seed(seed=my_seed)
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
print("Tensor a: ", a)
print("Tensor b: ", b)
simplefun(seed=True, my_seed=0) # Turn `seed` to `False` or change `my_seed`
###Output
_____no_output_____
###Markdown
**Numpy-like number ranges:**---The ```.arange()``` and ```.linspace()``` behave how you would expect them to if you are familiar with numpy.
###Code
a = torch.arange(0, 10, step=1)
b = np.arange(0, 10, step=1)
c = torch.linspace(0, 5, steps=11)
d = np.linspace(0, 5, num=11)
print(f"Tensor a: {a}\n")
print(f"Numpy array b: {b}\n")
print(f"Tensor c: {c}\n")
print(f"Numpy array d: {d}\n")
###Output
_____no_output_____
###Markdown
Coding Exercise 2.1: Creating TensorsBelow you will find some incomplete code. Fill in the missing code to construct the specified tensors.We want the tensors: $A:$ 20 by 21 tensor consisting of ones$B:$ a tensor with elements equal to the elements of numpy array $Z$$C:$ a tensor with the same number of elements as $A$ but with values $\sim U(0,1)$$D:$ a 1D tensor containing the even numbers between 4 and 40 inclusive.
###Code
def tensor_creation(Z):
"""A function that creates various tensors.
Args:
Z (numpy.ndarray): An array of shape
Returns:
A : 20 by 21 tensor consisting of ones
B : a tensor with elements equal to the elements of numpy array Z
C : a tensor with the same number of elements as A but with values ∼U(0,1)
D : a 1D tensor containing the even numbers between 4 and 40 inclusive.
"""
#################################################
  ## TODO for students: fill in the missing code
  ## to create the tensors A, B, C and D described above
  raise NotImplementedError("Student exercise: fill in the missing code to create the tensors")
#################################################
A = ...
B = ...
C = ...
D = ...
return A, B, C, D
# add timing to airtable
atform.add_event('Coding Exercise 2.1: Creating Tensors')
# numpy array to copy later
Z = np.vander([1, 2, 3], 4)
# Uncomment below to check your function!
# A, B, C, D = tensor_creation(Z)
# checkExercise1(A, B, C, D)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_ad4f6c0f.py) ```All correct!``` Section 2.2: Operations in PyTorch**Tensor-Tensor operations**We can perform operations on tensors using methods under ```torch.```
###Code
# @title Video 4: Tensor Operators
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1G44y127As", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"R1R8VoYXBVA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 4: Tensor Operators')
display(out)
###Output
_____no_output_____
###Markdown
**Tensor-Tensor operations**We can perform operations on tensors using methods under ```torch.```
###Code
a = torch.ones(5, 3)
b = torch.rand(5, 3)
c = torch.empty(5, 3)
d = torch.empty(5, 3)
# this only works if c and d already exist
torch.add(a, b, out=c)
#Pointwise Multiplication of a and b
torch.multiply(a, b, out=d)
print(c)
print(d)
###Output
_____no_output_____
###Markdown
However, in PyTorch most common Python operators are overridden. The common standard arithmetic operators (+, -, *, /, and **) have all been lifted to elementwise operations
###Code
x = torch.tensor([1, 2, 4, 8])
y = torch.tensor([1, 2, 3, 4])
x + y, x - y, x * y, x / y, x**y # The ** operator is exponentiation
###Output
_____no_output_____
###Markdown
**Tensor Methods** Tensors also have a number of common arithmetic operations built in. A full list of **all** methods can be found in the appendix (there are a lot!). All of these operations should have similar syntax to their numpy equivalents. (Feel free to skip if you already know this!)
###Code
x = torch.rand(3, 3)
print(x)
print("\n")
# sum() - note the axis is the axis you move across when summing
print(f"Sum of every element of x: {x.sum()}")
print(f"Sum of the columns of x: {x.sum(axis=0)}")
print(f"Sum of the rows of x: {x.sum(axis=1)}")
print("\n")
print(f"Mean value of all elements of x {x.mean()}")
print(f"Mean values of the columns of x {x.mean(axis=0)}")
print(f"Mean values of the rows of x {x.mean(axis=1)}")
###Output
_____no_output_____
###Markdown
**Matrix Operations**The ```@``` symbol is overridden to represent matrix multiplication. You can also use ```torch.matmul()``` to multiply tensors. For dot products, you can use ```torch.dot()```, or manipulate the axes of your tensors and do matrix multiplication (we will cover that in the next section). Transposes of 2D tensors are obtained using ```torch.t()``` or ```Tensor.T```. Note the lack of brackets for ```Tensor.T``` - it is an attribute, not a method. Coding Exercise 2.2 : Simple tensor operationsBelow are two expressions involving operations on matrices. $$ \textbf{A} = \begin{bmatrix}2 &4 \\5 & 7 \end{bmatrix} \begin{bmatrix} 1 &1 \\2 & 3\end{bmatrix} + \begin{bmatrix}10 & 10 \\ 12 & 1 \end{bmatrix} $$and$$ b = \begin{bmatrix} 3 \\ 5 \\ 7\end{bmatrix} \cdot \begin{bmatrix} 2 \\ 4 \\ 8\end{bmatrix}$$The exercise code that computes these expressions using PyTorch is incomplete - fill in the missing lines; a short illustrative sketch of these operators comes first, just below.
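A hedged illustrative sketch of these operators (the matrices and vectors here are arbitrary examples, not the exercise inputs):
###Code
# Illustrative sketch: matrix multiplication, dot products and transposes
M = torch.tensor([[1., 2.], [3., 4.]])
N = torch.tensor([[0., 1.], [1., 0.]])
print(M @ N)                 # matrix multiplication via the @ operator
print(torch.matmul(M, N))    # equivalent to M @ N
v = torch.tensor([1., 2., 3.])
w = torch.tensor([4., 5., 6.])
print(torch.dot(v, w))       # dot product of two 1D tensors -> tensor(32.)
print(M.T)                   # transpose; torch.t(M) is equivalent for 2D tensors
###Output
_____no_output_____
###Markdown
Now fill in the incomplete exercise cell below.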
###Code
def simple_operations(a1: torch.Tensor, a2: torch.Tensor, a3: torch.Tensor):
################################################
## TODO for students: complete the first computation using the argument matrices
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
################################################
# multiplication of tensor a1 with tensor a2 and then add it with tensor a3
answer = ...
return answer
# add timing to airtable
atform.add_event('Coding Exercise 2.2 : Simple tensor operations-simple_operations')
# Computing expression 1:
# init our tensors
a1 = torch.tensor([[2, 4], [5, 7]])
a2 = torch.tensor([[1, 1], [2, 3]])
a3 = torch.tensor([[10, 10], [12, 1]])
## uncomment to test your function
# A = simple_operations(a1, a2, a3)
# print(A)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_5562ea1d.py) ```tensor([[20, 24], [31, 27]])```
###Code
def dot_product(b1: torch.Tensor, b2: torch.Tensor):
###############################################
## TODO for students: complete the second computation using the argument tensors
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
###############################################
# Use torch.dot() to compute the dot product of two tensors
product = ...
return product
# add timing to airtable
atform.add_event('Coding Exercise 2.2 : Simple tensor operations-dot_product')
# Computing expression 2:
b1 = torch.tensor([3, 5, 7])
b2 = torch.tensor([2, 4, 8])
## Uncomment to test your function
# b = dot_product(b1, b2)
# print(b)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_00491ea4.py) ```tensor(82)``` Section 2.3 Manipulating Tensors in Pytorch
###Code
# @title Video 5: Tensor Indexing
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1BM4y1K7pD", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"0d0KSJ3lJbg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 5: Tensor Indexing')
display(out)
###Output
_____no_output_____
###Markdown
**Indexing**Just as in numpy, elements in a tensor can be accessed by index. As in any numpy array, the first element has index 0, and ranges include the start index but exclude the end index. We can access elements according to their position relative to the end of the list by using negative indices. Accessing ranges of elements in this way is also referred to as slicing. For example, [-1] selects the last element; [1:3] selects the second and the third elements; and [:-2] selects all elements excluding the last and second-to-last elements.
###Code
x = torch.arange(0, 10)
print(x)
print(x[-1])
print(x[1:3])
print(x[:-2])
###Output
_____no_output_____
###Markdown
When we have multidimensional tensors, indexing rules work the same way as numpy.
###Code
# make a 5D tensor
x = torch.rand(1, 2, 3, 4, 5)
print(f" shape of x[0]:{x[0].shape}")
print(f" shape of x[0][0]:{x[0][0].shape}")
print(f" shape of x[0][0][0]:{x[0][0][0].shape}")
###Output
_____no_output_____
###Markdown
**Flatten and reshape**There are various methods for reshaping tensors. It is common to have to express 2D data in 1D format. Similarly, it is also common to have to reshape a 1D tensor into a 2D tensor. We can achieve this with the ```.flatten()``` and ```.reshape()``` methods.
###Code
z = torch.arange(12).reshape(6, 2)
print(f"Original z: \n {z}")
# 2D -> 1D
z = z.flatten()
print(f"Flattened z: \n {z}")
# and back to 2D
z = z.reshape(3, 4)
print(f"Reshaped (3x4) z: \n {z}")
###Output
_____no_output_____
###Markdown
You will also see the ```.view()``` methods used a lot to reshape tensors. There is a subtle difference between ```.view()``` and ```.reshape()```, though for now we will just use ```.reshape()```. The documentation can be found in the appendix. **Squeezing tensors**When processing batches of data, you will quite often be left with singleton dimensions. e.g. [1,10] or [256, 1, 3]. This dimension can quite easily mess up your matrix operations if you don't plan on it being there...In order to compress tensors along their singleton dimensions we can use the ```.squeeze()``` method. We can use the ```.unsqueeze()``` method to do the opposite.
###Code
x = torch.randn(1, 10)
# printing the zeroth element of the tensor will not give us the first number!
print(x.shape)
print(f"x[0]: {x[0]}")
###Output
_____no_output_____
###Markdown
Because of that pesky singleton dimension, x[0] gave us the first row instead!
###Code
# lets get rid of that singleton dimension and see what happens now
x = x.squeeze(0)
print(x.shape)
print(f"x[0]: {x[0]}")
# adding singleton dimensions works a similar way, and is often used when tensors
# being added need same number of dimensions
y = torch.randn(5, 5)
print(f"shape of y: {y.shape}")
# lets insert a singleton dimension
y = y.unsqueeze(1)
print(f"shape of y: {y.shape}")
###Output
_____no_output_____
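###Markdown
Earlier we noted a subtle difference between ```.view()``` and ```.reshape()```. As a hedged sketch of that difference (illustrative values only): ```.view()``` never copies data and therefore requires a compatible contiguous memory layout, whereas ```.reshape()``` silently copies when it has to.
###Code
z = torch.arange(6)
print(z.view(2, 3))                     # works: z is contiguous in memory
zt = torch.arange(6).reshape(2, 3).T    # the transpose is non-contiguous
# zt.view(6)                            # this would raise a RuntimeError
print(zt.reshape(6))                    # .reshape() copies the data when needed
###Output
_____no_output_____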
###Markdown
**Permutation**Sometimes our dimensions will be in the wrong order! For example, we may be dealing with RGB images with dim [3x48x64], but our pipeline expects the colour dimension to be the last dimension i.e. [48x64x3]. To get around this we can use ```.permute()```
###Code
# `x` has dimensions [color,image_height,image_width]
x = torch.rand(3, 48, 64)
# we want to permute our tensor to be [ image_height , image_width , color ]
x = x.permute(1, 2, 0)
# permute(1,2,0) means:
# the 0th dim of my new tensor = the 1st dim of my old tensor
# the 1st dim of my new tensor = the 2nd
# the 2nd dim of my new tensor = the 0th
print(x.shape)
###Output
_____no_output_____
###Markdown
You may also see ```.transpose()``` used. This works in a similar way as permute, but can only swap two dimensions at once. **Concatenation** In this example, we concatenate two matrices along rows (axis 0, the first element of the shape) vs. columns (axis 1, the second element of the shape). We can see that the first output tensor’s axis-0 length ( 6 ) is the sum of the two input tensors’ axis-0 lengths ( 3+3 ); while the second output tensor’s axis-1 length ( 8 ) is the sum of the two input tensors’ axis-1 lengths ( 4+4 ).
###Code
# Create two tensors of the same shape
x = torch.arange(12, dtype=torch.float32).reshape((3, 4))
y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])
#concatenate them along rows
cat_rows = torch.cat((x, y), dim=0)
# concatenate along columns
cat_cols = torch.cat((x, y), dim=1)
# printing outputs
print('Concatenated by rows: shape{} \n {}'.format(list(cat_rows.shape), cat_rows))
print('\n Concatenated by colums: shape{} \n {}'.format(list(cat_cols.shape), cat_cols))
###Output
_____no_output_____
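###Markdown
As mentioned above, ```.transpose()``` swaps exactly two dimensions at a time. A minimal hedged sketch (the shape below is just an example):
###Code
x = torch.rand(3, 48, 64)
print(x.transpose(0, 2).shape)   # swap dims 0 and 2 -> torch.Size([64, 48, 3])
# for this particular swap, x.permute(2, 1, 0) produces the same shape
###Output
_____no_output_____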
###Markdown
**Conversion to Other Python Objects**Converting a tensor to a NumPy array, or vice versa, is easy. Note that for CPU tensors, ```Tensor.numpy()``` returns an array that shares its underlying memory with the tensor, so an in-place change to one is visible in the other; if you need an independent copy, use ```torch.tensor()``` on the array, as in the cell below. When converting to a numpy array, the information being tracked by the tensor will be lost, i.e., the computational graph. This will be covered in detail when you are introduced to autograd tomorrow!
###Code
x = torch.randn(5)
print(f"x: {x} | x type: {x.type()}")
y = x.numpy()
print(f"y: {y} | y type: {type(y)}")
z = torch.tensor(y)
print(f"z: {z} | z type: {z.type()}")
###Output
_____no_output_____
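###Markdown
A quick hedged check of the memory-sharing behaviour described above (CPU tensors):
###Code
t = torch.zeros(3)
arr = t.numpy()            # `arr` shares memory with `t`
arr[0] = 42                # an in-place change through the NumPy array...
print(t)                   # ...is visible in the tensor: tensor([42., 0., 0.])
print(torch.tensor(arr))   # torch.tensor() makes an independent copy instead
###Output
_____no_output_____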
###Markdown
To convert a size-1 tensor to a Python scalar, we can invoke the item function or Python’s built-in functions.
###Code
a = torch.tensor([3.5])
a, a.item(), float(a), int(a)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.3: Manipulating TensorsUsing a combination of the methods discussed above, complete the functions below. **Function A** This function takes in two 2D tensors $A$ and $B$ and returns the column sum of $A$ multiplied by the sum of all the elements of $B$ i.e. a scalar, e.g.,:$ A = \begin{bmatrix}1 & 1 \\1 & 1 \end{bmatrix} \,$and$ B = \begin{bmatrix}1 & 2 & 3\\1 & 2 & 3 \end{bmatrix} \,$so$ \, Out = \begin{bmatrix} 2 & 2 \\\end{bmatrix} \cdot 12 = \begin{bmatrix}24 & 24\\\end{bmatrix}$**Function B** This function takes in a square matrix $C$ and returns a 2D tensor consisting of a flattened $C$ with the index of each element appended to this tensor in the row dimension, e.g.,:$ C = \begin{bmatrix}2 & 3 \\-1 & 10 \end{bmatrix} \,$so$ \, Out = \begin{bmatrix}0 & 2 \\1 & 3 \\2 & -1 \\3 & 10\end{bmatrix}$**Hint:** pay close attention to singleton dimensions**Function C**This function takes in two 2D tensors $D$ and $E$. If the dimensions allow it, this function returns the elementwise sum of $D$-shaped $E$, and $D$; else this function returns a 1D tensor that is the concatenation of the two tensors, e.g.,:$ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix} \,$and $ E = \begin{bmatrix}2 & 3 & 0 & 2 \\\end{bmatrix} \, $so$ \, Out = \begin{bmatrix}3 & 2 \\-1 & 5 \end{bmatrix}$$ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix}$and$ \, E = \begin{bmatrix}2 & 3 & 0 \\\end{bmatrix} \,$so$ \, Out = \begin{bmatrix}1 & -1 & -1 & 3 & 2 & 3 & 0 \end{bmatrix}$**Hint:** `torch.numel()` is an easy way of finding the number of elements in a tensor
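A hedged illustration of the hint (the tensor below is an arbitrary example, not one of the exercise inputs):
###Code
t = torch.rand(2, 3)
print(t.numel())                 # total number of elements -> 6
print(torch.arange(t.numel()))   # the indices 0, 1, ..., 5, handy for functionB
###Output
_____no_output_____
###Markdown
Now complete the three functions in the next cell.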
###Code
def functionA(my_tensor1, my_tensor2):
"""
This function takes in two 2D tensors `my_tensor1` and `my_tensor2`
and returns the column sum of
`my_tensor1` multiplied by the sum of all the elements of `my_tensor2`,
i.e., a scalar.
Args:
my_tensor1: torch.Tensor
my_tensor2: torch.Tensor
Returns:
output: torch.Tensor
The multiplication of the column sum of `my_tensor1` by the sum of
`my_tensor2`.
"""
################################################
## TODO for students: complete functionA
raise NotImplementedError("Student exercise: complete function A")
################################################
# TODO multiplication the sum of the tensors
output = ...
return output
def functionB(my_tensor):
"""
This function takes in a square matrix `my_tensor` and returns a 2D tensor
consisting of a flattened `my_tensor` with the index of each element
appended to this tensor in the row dimension.
Args:
my_tensor: torch.Tensor
Returns:
output: torch.Tensor
Concatenated tensor.
"""
################################################
## TODO for students: complete functionB
raise NotImplementedError("Student exercise: complete function B")
################################################
# TODO flatten the tensor `my_tensor`
my_tensor = ...
# TODO create the idx tensor to be concatenated to `my_tensor`
idx_tensor = ...
# TODO concatenate the two tensors
output = ...
return output
def functionC(my_tensor1, my_tensor2):
"""
This function takes in two 2D tensors `my_tensor1` and `my_tensor2`.
If the dimensions allow it, it returns the
elementwise sum of `my_tensor2` reshaped to the shape of `my_tensor1`, and `my_tensor1`;
else this function returns a 1D tensor that is the concatenation of the
two tensors.
Args:
my_tensor1: torch.Tensor
my_tensor2: torch.Tensor
Returns:
output: torch.Tensor
Concatenated tensor.
"""
################################################
## TODO for students: complete functionC
raise NotImplementedError("Student exercise: complete function C")
################################################
# TODO check we can reshape `my_tensor2` into the shape of `my_tensor1`
if ...:
# TODO reshape `my_tensor2` into the shape of `my_tensor1`
my_tensor2 = ...
# TODO sum the two tensors
output = ...
else:
# TODO flatten both tensors
my_tensor1 = ...
my_tensor2 = ...
# TODO concatenate the two tensors in the correct dimension
output = ...
return output
# add timing to airtable
atform.add_event('Coding Exercise 2.3: Manipulating Tensors')
## Implement the functions above and then uncomment the following lines to test your code
# print(functionA(torch.tensor([[1, 1], [1, 1]]), torch.tensor([[1, 2, 3], [1, 2, 3]])))
# print(functionB(torch.tensor([[2, 3], [-1, 10]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0, 2]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0]])))
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_ea1718cb.py) ```tensor([24, 24])tensor([[ 0, 2], [ 1, 3], [ 2, -1], [ 3, 10]])tensor([[ 3, 2], [-1, 5]])tensor([ 1, -1, -1, 3, 2, 3, 0])``` Section 2.4: GPUs
###Code
# @title Video 6: GPU vs CPU
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1nM4y1K7qx", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"9Mc9GFUtILY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 6: GPU vs CPU')
display(out)
###Output
_____no_output_____
###Markdown
By default, when we create a tensor it will *not* live on the GPU!
###Code
x = torch.randn(10)
print(x.device)
###Output
_____no_output_____
###Markdown
When using Colab notebooks, you will by default not have access to a GPU. In order to start using GPUs we need to request one. We can do this by going to the Runtime tab at the top of the page: follow Runtime -> Change runtime type and select "GPU" from the Hardware Accelerator dropdown list. Once you have done this your runtime will restart and you will need to rerun the first setup cell to reimport PyTorch. Then proceed to the next cell. (For more information on the GPU usage policy, see the appendix.) **Now we have a GPU** The cell below should return True.
###Code
print(torch.cuda.is_available())
###Output
_____no_output_____
###Markdown
CUDA is an API developed by Nvidia for interfacing with GPUs. PyTorch provides us with a layer of abstraction, and allows us to launch CUDA kernels using pure Python. In short, we get the power of parallelising our tensor computations on GPUs, whilst only writing (relatively) simple Python!Here, we define the function `set_device`, which returns the device used in the notebook, i.e., `cpu` or `cuda`. Unless otherwise specified, we use this function at the top of every tutorial, and we store the device in a variable such as```pythonDEVICE = set_device()```Let's define the function using the PyTorch package `torch.cuda`, which is lazily initialized, so we can always import it, and use `is_available()` to determine if our system supports CUDA.
###Code
def set_device():
device = "cuda" if torch.cuda.is_available() else "cpu"
if device != "cuda":
print("GPU is not enabled in this notebook. \n"
"If you want to enable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `GPU` from the dropdown menu")
else:
print("GPU is enabled in this notebook. \n"
"If you want to disable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `None` from the dropdown menu")
return device
###Output
_____no_output_____
###Markdown
Let's make some CUDA tensors!
###Code
# common device agnostic way of writing code that can run on cpu OR gpu
# that we provide for you in each of the tutorials
DEVICE = set_device()
# we can specify a device when we first create our tensor
x = torch.randn(2, 2, device=DEVICE)
print(x.dtype)
print(x.device)
# we can also use the .to() method to change the device a tensor lives on
y = torch.randn(2, 2)
print(f"y before calling to() | device: {y.device} | dtype: {y.type()}")
y = y.to(DEVICE)
print(f"y after calling to() | device: {y.device} | dtype: {y.type()}")
###Output
_____no_output_____
###Markdown
**Operations between cpu tensors and cuda tensors**Note that the type of the tensor changed after calling ```.to()```. What happens if we try to perform operations on tensors that live on different devices?
###Code
x = torch.tensor([0, 1, 2], device=DEVICE)
y = torch.tensor([3, 4, 5], device="cpu")
# Uncomment the following line and run this cell
# z = x + y
###Output
_____no_output_____
###Markdown
We cannot combine cuda tensors and cpu tensors in this fashion. If we want to compute an operation that combines tensors on different devices, we need to move them first! We can use the `.to()` method as before, or the `.cpu()` and `.cuda()` methods. Note that calling `.cuda()` will throw an error if CUDA is not available on your machine. Generally in this course, all deep learning is done on the GPU while other computation is done on the CPU, so sometimes we have to pass tensors back and forth between the two, as in the cell below.
###Code
x = torch.tensor([0, 1, 2], device=DEVICE)
y = torch.tensor([3, 4, 5], device="cpu")
z = torch.tensor([6, 7, 8], device=DEVICE)
# moving to cpu
x = x.to("cpu") # alternatively, you can use x = x.cpu()
print(x + y)
# moving to gpu
y = y.to(DEVICE) # alternatively, you can use y = y.cuda()
print(y + z)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.4: Just how much faster are GPUs?Below is a simple function `simpleFun`. Complete this function, such that it performs the operations:- elementwise multiplication- matrix multiplicationThe operations should be performed on either the CPU or the GPU, as specified by the parameter `device`. We will use the helper function `timeFun(f, dim, iterations, device)`.
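A hedged aside on timing GPU code (this is only an illustration of a common pitfall, not the provided `timeFun` helper): CUDA kernel launches are asynchronous, so a fair timing loop usually synchronizes the device before reading the clock. A minimal sketch, with illustrative dimensions:
###Code
import time

def naive_timing_sketch(device, dim=1000):
  """Illustrative sketch only; use the provided `timeFun` helper for the exercise."""
  x = torch.rand(dim, dim, device=device)
  y = torch.rand(dim, dim, device=device)
  start = time.time()
  _ = x @ y
  if device == "cuda":
    torch.cuda.synchronize()  # wait for the kernel to finish before stopping the clock
  return time.time() - start
###Output
_____no_output_____
###Markdown
Now complete `simpleFun` in the next cell.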
###Code
dim = 10000
iterations = 1
def simpleFun(dim, device):
"""
Args:
dim: integer
device: "cpu" or "cuda"
Returns:
Nothing.
"""
###############################################
## TODO for students: recreate the function, but
## ensure all computations happens on the `device`
raise NotImplementedError("Student exercise: fill in the missing code to create the tensors")
###############################################
# 2D tensor filled with uniform random numbers in [0,1), dim x dim
x = ...
# 2D tensor filled with uniform random numbers in [0,1), dim x dim
y = ...
# 2D tensor filled with the scalar value 2, dim x dim
z = ...
# elementwise multiplication of x and y
a = ...
# matrix multiplication of x and y
b = ...
del x
del y
del z
del a
del b
## TODO: Implement the function above and uncomment the following lines to test your code
# timeFun(f=simpleFun, dim=dim, iterations=iterations)
# timeFun(f=simpleFun, dim=dim, iterations=iterations, device=DEVICE)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_232a94a4.py) Sample output (depends on your hardware)```time taken for 1 iterations of simpleFun(10000, cpu): 23.74070time taken for 1 iterations of simpleFun(10000, cuda): 0.87535``` **Discuss!**Try reducing the dimensions of the tensors and increasing the iterations. You can get to a point where the CPU-only function is faster than the GPU function. Why might this be? Section 2.5: Datasets and Dataloaders
###Code
# @title Video 7: Getting Data
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1744y127SQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"LSkjPM1gFu0", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 7: Getting Data')
display(out)
###Output
_____no_output_____
###Markdown
When training neural network models you will be working with large amounts of data. Fortunately, PyTorch offers some great tools that help you organize and manipulate your data samples.
###Code
# Import dataset and dataloaders related packages
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
from torchvision.transforms import Compose, Grayscale
###Output
_____no_output_____
###Markdown
**Datasets**The `torchvision` package gives you easy access to many of the publicly available datasets. Let's load the [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset, which contains color images of 10 different classes, like vehicles and animals.Creating an object of type `datasets.CIFAR10` will automatically download and load all images from the dataset. The resulting data structure can be treated as a list containing data samples and their corresponding labels.
###Code
# Download and load the images from the CIFAR10 dataset
cifar10_data = datasets.CIFAR10(
root="data", # path where the images will be stored
download=True, # all images should be downloaded
transform=ToTensor() # transform the images to tensors
)
# Print the number of samples in the loaded dataset
print(f"Number of samples: {len(cifar10_data)}")
print(f"Class names: {cifar10_data.classes}")
###Output
_____no_output_____
###Markdown
We have 50000 samples loaded. Now let's take a look at one of them in detail. Each sample consists of an image and its corresponding label.
###Code
# Choose a random sample
random.seed(2021)
image, label = cifar10_data[random.randint(0, len(cifar10_data))]
print(f"Label: {cifar10_data.classes[label]}")
print(f"Image size: {image.shape}")
###Output
_____no_output_____
###Markdown
Color images are modeled as 3 dimensional tensors. The first dimension corresponds to the channels (C) of the image (in this case we have RGB images). The second dimension is the height (H) of the image and the third is the width (W). We can denote this image format as C × H × W. Coding Exercise 2.5: Display an image from the datasetLet's try to display the image using `matplotlib`. The code below will not work, because `imshow` expects to have the image in a different format - $H \times W \times C$.You need to reorder the dimensions of the tensor using the `permute` method of the tensor. PyTorch `Tensor.permute(*dims)` rearranges the original tensor according to the desired ordering and returns a new multidimensional rotated tensor. The size of the returned tensor remains the same as that of the original.**Code hint:**
```python
# create a tensor of size 2 x 4
input_var = torch.randn(2, 4)
# print its size and the tensor
print(input_var.size())
print(input_var)
# dimensions permuted
input_var = input_var.permute(1, 0)
# print its size and the permuted tensor
print(input_var.size())
print(input_var)
```
###Code
# TODO: Uncomment the following line to see the error that arises from the current image format
# plt.imshow(image)
# TODO: Comment the above line and fix this code by reordering the tensor dimensions
# plt.imshow(image.permute(...))
# plt.show()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_b04bd357.py)*Example output:*
###Code
#@title Video 8: Train and Test
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1rV411H7s5", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JokSIuPs-ys", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 8: Train and Test')
display(out)
###Output
_____no_output_____
###Markdown
**Training and Test Datasets**When loading a dataset, you can specify if you want to load the training or the test samples using the `train` argument. We can load the training and test datasets separately. For simplicity, today we will not use both datasets separately, but this topic will be addressed in the coming days.
###Code
# Load the training samples
training_data = datasets.CIFAR10(
root="data",
train=True,
download=True,
transform=ToTensor()
)
# Load the test samples
test_data = datasets.CIFAR10(
root="data",
train=False,
download=True,
transform=ToTensor()
)
# @title Video 9: Data Augmentation - Transformations
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV19B4y1N77t", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"sjegA9OBUPw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 9: Data Augmentation - Transformations')
display(out)
###Output
_____no_output_____
###Markdown
**Dataloader**Another important concept is the `Dataloader`. It is a wrapper around the `Dataset` that splits it into minibatches (important for training the neural network) and makes the data iterable. The `shuffle` argument is used to shuffle the order of the samples across the minibatches.
###Code
# Create dataloaders with a batch size of 64 and shuffling enabled
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
*Reproducibility:* DataLoader will reseed its workers following the "Randomness in multi-process data loading" algorithm described in the PyTorch documentation. Use a `worker_init_fn()` and a `generator` to preserve reproducibility:
```python
def seed_worker(worker_id):
  worker_seed = torch.initial_seed() % 2**32
  numpy.random.seed(worker_seed)
  random.seed(worker_seed)

g_seed = torch.Generator()
g_seed.manual_seed(my_seed)

DataLoader(
    train_dataset,
    batch_size=batch_size,
    num_workers=num_workers,
    worker_init_fn=seed_worker,
    generator=g_seed
    )
```
**Note:** For the `seed_worker` to have an effect, `num_workers` should be 2 or more. We can now query the next batch from the data loader and inspect it. For this we need to convert the dataloader object to a Python iterator using the function `iter` and then we can query the next batch using the function `next`.We can now see that we have a 4D tensor. This is because we have 64 images in the batch ($B$) and each image has 3 dimensions: channels ($C$), height ($H$) and width ($W$). So, the size of the 4D tensor is $B \times C \times H \times W$.
###Code
# Load the next batch
batch_images, batch_labels = next(iter(train_dataloader))
print('Batch size:', batch_images.shape)
# Display the first image from the batch
plt.imshow(batch_images[0].permute(1, 2, 0))
plt.show()
###Output
_____no_output_____
###Markdown
**Transformations**Another useful feature when loading a dataset is applying transformations on the data - color conversions, normalization, cropping, rotation etc. There are many predefined transformations in the `torchvision.transforms` package and you can also combine them using the `Compose` transform. Check out the [PyTorch documentation](https://pytorch.org/vision/stable/transforms.html) for details. Coding Exercise 2.6: Load the CIFAR10 dataset as grayscale imagesThe goal of this exercise is to load the images from the CIFAR10 dataset as grayscale images. Note that we rerun the `set_seed` function to ensure reproducibility.
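A hedged sketch of composing transforms (this particular combination is an illustrative assumption, close to what the exercise asks for):
###Code
# Convert images to grayscale and then to tensors
example_transform = Compose([
    Grayscale(),   # collapse the RGB channels into a single gray channel
    ToTensor(),    # convert the PIL image to a float tensor in [0, 1]
])
print(example_transform)
###Output
_____no_output_____
###Markdown
Use a transform like this when loading the data in the exercise below.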
###Code
def my_data_load():
###############################################
## TODO for students: load the CIFAR10 data,
## but as grayscale images and not as RGB colored.
raise NotImplementedError("Student exercise: fill in the missing code to load the data")
###############################################
## TODO Load the CIFAR10 data using a transform that converts the images to grayscale tensors
data = datasets.CIFAR10(...,
transform=...)
# Display a random grayscale image
image, label = data[random.randint(0, len(data))]
plt.imshow(image.squeeze(), cmap="gray")
plt.show()
return data
set_seed(seed=2021)
## After implementing the above code, uncomment the following lines to test your code
# data = my_data_load()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_6052d728.py)*Example output:* --- Section 3: Neural Networks*Time estimate: ~1 hour 30 mins (excluding video)* Now it's time for you to create your first neural network using PyTorch. This section will walk you through the process of:- Creating a simple neural network model- Training the network- Visualizing the results of the network- Tweaking the network
###Code
# @title Video 10: CSV Files
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1xy4y1T7kv", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JrC_UAJWYKU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 10: CSV Files')
display(out)
###Output
_____no_output_____
###Markdown
Section 3.1: Data LoadingFirst we need some sample data to train our network on. You can use the function below to generate an example dataset consisting of 2D points along two interleaving half circles. The data will be stored in a file called `sample_data.csv`. You can inspect the file directly in Colab by going to Files on the left side and opening the CSV file.
###Code
# @title Generate sample data
# @markdown we used `scikit-learn` module
from sklearn.datasets import make_moons
# Create a dataset of 256 points with a little noise
X, y = make_moons(256, noise=0.1)
# Store the data as a Pandas data frame and save it to a CSV file
df = pd.DataFrame(dict(x0=X[:,0], x1=X[:,1], y=y))
df.to_csv('sample_data.csv')
###Output
_____no_output_____
###Markdown
Now we can load the data from the CSV file using the Pandas library. Pandas provides many functions for reading files in various formats. When loading data from a CSV file, we can reference the columns directly by their names.
###Code
# Load the data from the CSV file in a Pandas DataFrame
data = pd.read_csv("sample_data.csv")
# Create a 2D numpy array from the x0 and x1 columns
X_orig = data[["x0", "x1"]].to_numpy()
# Create a 1D numpy array from the y column
y_orig = data["y"].to_numpy()
# Print the sizes of the generated 2D points X and the corresponding labels Y
print(f"Size X:{X_orig.shape}")
print(f"Size y:{y_orig.shape}")
# Visualize the dataset. The color of the points is determined by the labels `y_orig`.
plt.scatter(X_orig[:, 0], X_orig[:, 1], s=40, c=y_orig)
plt.show()
###Output
_____no_output_____
###Markdown
**Prepare Data for PyTorch**Now let's prepare the data in a format suitable for PyTorch - convert everything into tensors.
###Code
# Initialize the device variable
DEVICE = set_device()
# Convert the 2D points to a float32 tensor
X = torch.tensor(X_orig, dtype=torch.float32)
# Upload the tensor to the device
X = X.to(DEVICE)
print(f"Size X:{X.shape}")
# Convert the labels to a long integer tensor
y = torch.from_numpy(y_orig).type(torch.LongTensor)
# Upload the tensor to the device
y = y.to(DEVICE)
print(f"Size y:{y.shape}")
###Output
_____no_output_____
###Markdown
Section 3.2: Create a Simple Neural Network
###Code
# @title Video 11: Generating the Neural Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1fK4y1M74a", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"PwSzRohUvck", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 11: Generating the Neural Network')
display(out)
###Output
_____no_output_____
###Markdown
For this example we want to have a simple neural network consisting of 3 layers:- 1 input layer of size 2 (our points have 2 coordinates)- 1 hidden layer of size 16 (you can play with different numbers here)- 1 output layer of size 2 (we want to have the scores for the two classes)During the course you will deal with different kinds of neural networks. On Day 2 we will focus on linear networks, but you will work with some more complicated architectures in the next days. The example here is meant to demonstrate the process of creating and training a neural network end-to-end.**Programming the Network**PyTorch provides a base class for all neural network modules called [`nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html). You need to inherit from `nn.Module` and implement some important methods:`__init__`In the `__init__` method you need to define the structure of your network. Here you will specify what layers the network will consist of, what activation functions will be used etc.`forward`All neural network modules need to implement the `forward` method. It specifies the computations the network needs to do when data is passed through it.`predict`This is not an obligatory method of a neural network module, but it is a good practice if you want to quickly get the most likely label from the network. It calls the `forward` method and chooses the label with the highest score.`train`This is also not an obligatory method, but it is a good practice to have. The method will be used to train the network parameters and will be implemented later in the notebook.> Note that you can use the `__call__` method of a module directly and it will invoke the `forward` method: `net()` does the same as `net.forward()`.
###Code
# Inherit from nn.Module - the base class for neural network modules provided by Pytorch
class NaiveNet(nn.Module):
# Define the structure of your network
def __init__(self):
super(NaiveNet, self).__init__()
# The network is defined as a sequence of operations
self.layers = nn.Sequential(
nn.Linear(2, 16), # Transformation from the input to the hidden layer
nn.ReLU(), # Activation function (ReLU) is a non-linearity which is widely used because it reduces computation. The function returns 0 if it receives any
# negative input, but for any positive value x, it returns that value back.
nn.Linear(16, 2), # Transformation from the hidden to the output layer
)
# Specify the computations performed on the data
def forward(self, x):
# Pass the data through the layers
return self.layers(x)
# Choose the most likely label predicted by the network
def predict(self, x):
# Pass the data through the networks
output = self.forward(x)
# Choose the label with the highest score
return torch.argmax(output, 1)
# Train the neural network (will be implemented later)
def train(self, X, y):
pass
###Output
_____no_output_____
###Markdown
**Check that your network works**Create an instance of your model and visualize it
###Code
# Create new NaiveNet and transfer it to the device
model = NaiveNet().to(DEVICE)
# Print the structure of the network
print(model)
###Output
_____no_output_____
###Markdown
Coding Exercise 3.2: Classify some samplesNow let's pass some of the points of our dataset through the network and see if it works. You should not expect the network to actually classify the points correctly, because it has not been trained yet. The goal here is just to get some experience with the data structures that are passed to the forward and predict methods and their results.
###Code
## Get the samples
# X_samples = ...
# print("Sample input:\n", X_samples)
## Do a forward pass of the network
# output = ...
# print("\nNetwork output:\n", output)
## Predict the label of each point
# y_predicted = ...
# print("\nPredicted labels:\n", y_predicted)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_af8ae0ff.py) ```Sample input: tensor([[ 0.9066, 0.5052], [-0.2024, 1.1226], [ 1.0685, 0.2809], [ 0.6720, 0.5097], [ 0.8548, 0.5122]], device='cuda:0')Network output: tensor([[ 0.1543, -0.8018], [ 2.2077, -2.9859], [-0.5745, -0.0195], [ 0.1924, -0.8367], [ 0.1818, -0.8301]], device='cuda:0', grad_fn=)Predicted labels: tensor([0, 0, 1, 0, 0], device='cuda:0')``` Section 3.3: Train Your Neural Network
###Code
# @title Video 12: Train the Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1v54y1n7CS", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"4MIqnE4XPaA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 12: Train the Network')
display(out)
###Output
_____no_output_____
###Markdown
Now it is time to train your network on your dataset. Don't worry if you don't fully understand everything yet - we will cover training in much more detail in the coming days. For now, the goal is just to see your network in action!You will usually implement the `train` method directly when implementing your class `NaiveNet`. Here, we will implement it as a function outside of the class in order to have it in a separate cell.
###Code
# @title Helper function to plot the decision boundary
# Code adapted from this notebook: https://jonchar.net/notebooks/Artificial-Neural-Network-with-Keras/
from pathlib import Path
def plot_decision_boundary(model, X, y, device):
# Transfer the data to the CPU
X = X.cpu().numpy()
y = y.cpu().numpy()
# Check if the frames folder exists and create it if needed
frames_path = Path("frames")
if not frames_path.exists():
frames_path.mkdir()
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Predict the function value for the whole grid
grid_points = np.c_[xx.ravel(), yy.ravel()]
grid_points = torch.from_numpy(grid_points).type(torch.FloatTensor)
Z = model.predict(grid_points.to(device)).cpu().numpy()
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.binary)
# Implement the train function given a training dataset X and corresponding labels y
def train(model, X, y):
# The Cross Entropy Loss is suitable for classification problems
loss_function = nn.CrossEntropyLoss()
# Create an optimizer (Stochastic Gradient Descent) that will be used to train the network
learning_rate = 1e-2
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
# Number of epochs
epochs = 15000
# List of losses for visualization
losses = []
for i in range(epochs):
# Pass the data through the network and compute the loss
# We'll use the whole dataset during the training instead of using batches
# in order to keep the code simple for now.
y_logits = model.forward(X)
loss = loss_function(y_logits, y)
# Clear the previous gradients and compute the new ones
optimizer.zero_grad()
loss.backward()
# Adapt the weights of the network
optimizer.step()
# Store the loss
losses.append(loss.item())
# Print the results at every 1000th epoch
if i % 1000 == 0:
print(f"Epoch {i} loss is {loss.item()}")
plot_decision_boundary(model, X, y, DEVICE)
plt.savefig('frames/{:05d}.png'.format(i))
return losses
# Create a new network instance a train it
model = NaiveNet().to(DEVICE)
losses = train(model, X, y)
###Output
_____no_output_____
###Markdown
**Plot the loss during training**Plot the loss during the training to see how it reduces and converges.
###Code
plt.plot(np.linspace(1, len(losses), len(losses)), losses)
plt.xlabel("Epoch")
plt.ylabel("Loss")
# @title Visualize the training process
# @markdown ### Execute this cell!
!pip install imageio --quiet
!pip install pathlib --quiet
import imageio
from IPython.core.interactiveshell import InteractiveShell
from IPython.display import Image, display
from pathlib import Path
InteractiveShell.ast_node_interactivity = "all"
# Make a list with all images
images = []
for i in range(10):
filename = "frames/0"+str(i)+"000.png"
images.append(imageio.imread(filename))
# Save the gif
imageio.mimsave('frames/movie.gif', images)
gifPath = Path("frames/movie.gif")
with open(gifPath,'rb') as f:
display(Image(data=f.read(), format='png'))
# @title Video 13: Play with it
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Cq4y1W7BH", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"_GGkapdOdSY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 13: Play with it')
display(out)
###Output
_____no_output_____
###Markdown
Exercise 3.3: Tweak your NetworkYou can now play around with the network a little bit to get a feeling of what different parameters are doing. Here are some ideas of what you could try:- Increase or decrease the number of epochs for training- Increase or decrease the size of the hidden layer- Add one additional hidden layer (a hedged sketch of this appears below)Can you get the network to better fit the data?
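A hedged sketch of one possible tweak, an extra hidden layer (the layer sizes here are arbitrary choices, not a recommended answer):
###Code
class TweakedNet(nn.Module):
  def __init__(self):
    super().__init__()
    self.layers = nn.Sequential(
        nn.Linear(2, 32),
        nn.ReLU(),
        nn.Linear(32, 16),   # the additional hidden layer
        nn.ReLU(),
        nn.Linear(16, 2),
    )

  def forward(self, x):
    return self.layers(x)

  def predict(self, x):
    return torch.argmax(self.forward(x), 1)

# Uncomment to train the tweaked network with the same train() function as before
# model = TweakedNet().to(DEVICE)
# losses = train(model, X, y)
###Output
_____no_output_____
###Markdown
Experiment with the number of epochs, layer sizes, and depth, then continue below.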
###Code
# @title Video 14: XOR Widget
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1mB4y1N7QS", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"oTr1nE2rCWg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 14: XOR Widget')
display(out)
###Output
_____no_output_____
###Markdown
Exclusive OR (XOR) logical operation gives a true (`1`) output when the number of true inputs is odd. That is, a true output results if one, and only one, of the inputs to the gate is true. If both inputs are false (`0`) or both are true (`1`), a false output results. Mathematically speaking, XOR represents the inequality function, i.e., the output is true if the inputs are not alike; otherwise, the output is false.In case of two inputs ($X$ and $Y$) the following truth table is applied:\begin{array}{ccc}X & Y & \text{XOR} \\\hline0 & 0 & 0 \\0 & 1 & 1 \\1 & 0 & 1 \\1 & 1 & 0 \\\end{array}Here, with `0`, we denote `False`, and with `1` we denote `True` in boolean terms. Interactive Demo 3.3: Solving XORHere we use an open source and famous visualization widget developed by the TensorFlow team, available [here](https://github.com/tensorflow/playground).* Play with the widget and observe that you cannot solve the continuous XOR dataset.* Now add one hidden layer with three units, play with the widget, and set weights by hand to solve this dataset perfectly.For the second part, you should set the weights by clicking on the connections and either type the value or use the up and down keys to change it by one increment. You could also do the same for the biases by clicking on the tiny square to each neuron's bottom left.Even though there are infinitely many solutions, a neat solution when $f(x)$ is ReLU is: \begin{equation} y = f(x_1)+f(x_2)-f(x_1+x_2)\end{equation}Try to set the weights and biases to implement this function after you have played enough :)
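A quick hedged check (illustrative only) that $y = f(x_1)+f(x_2)-f(x_1+x_2)$ with $f$ = ReLU separates the continuous XOR classes: the output is positive exactly when the two inputs have different signs. Using $\pm 1$ as representative inputs:
###Code
for x1 in (-1., 1.):
  for x2 in (-1., 1.):
    y = torch.relu(torch.tensor(x1)) + torch.relu(torch.tensor(x2)) - torch.relu(torch.tensor(x1 + x2))
    print(f"x1={x1:+.0f}, x2={x2:+.0f} -> y={y.item():.0f}")
###Output
_____no_output_____
###Markdown
Now try to reproduce this mapping by hand in the widget below.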
###Code
# @markdown ###Play with the parameters to solve XOR
from IPython.display import HTML
HTML('<iframe width="1020" height="660" src="https://playground.arashash.com/#activation=relu&batchSize=10&dataset=xor®Dataset=reg-plane&learningRate=0.03®ularizationRate=0&noise=0&networkShape=&seed=0.91390&showTestData=false&discretize=false&percTrainData=90&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false" allowfullscreen></iframe>')
# @markdown Do you think we can solve the discrete XOR (only 4 possibilities) with only 2 hidden units?
w1_min_xor = 'Select' #@param ['Select', 'Yes', 'No']
if w1_min_xor == 'No':
print("Correct!")
else:
print("How about giving it another try?")
###Output
_____no_output_____
###Markdown
--- Section 4: Ethics And Course Info
###Code
# @title Video 15: Ethics
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Hw41197oB", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Kt6JLi3rUFU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Video 16: Be a group
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1j44y1272h", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Sfp6--d_H1A", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Video 17: Syllabus
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1iB4y1N7uQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"cDvAqG_hAvQ", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Meet our lecturers:Week 1: the building blocks* [Konrad Kording](https://kordinglab.com)* [Andrew Saxe](https://www.saxelab.org/)* [Surya Ganguli](https://ganguli-gang.stanford.edu/)* [Ioannis Mitliagkas](http://mitliagkas.github.io/)* [Lyle Ungar](https://www.cis.upenn.edu/~ungar/)Week 2: making things work* [Alona Fyshe](https://webdocs.cs.ualberta.ca/~alona/)* [Alexander Ecker](https://eckerlab.org/)* [James Evans](https://sociology.uchicago.edu/directory/james-evans)* [He He](https://hhexiy.github.io/)* [Vikash Gilja](https://tnel.ucsd.edu/bio) and [Akash Srivastava](https://akashgit.github.io/)Week 3: more magic* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)* [Jane Wang](http://www.janexwang.com/) and [Feryal Behbahani](https://feryal.github.io/)* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)* [Josh Vogelstein](https://jovo.me/) and [Vincenzo Lomonaco](https://www.vincenzolomonaco.com/)Now, go to the [visualization of ICLR papers](https://iclr.cc/virtual/2021/paper_vis.html). Read a few abstracts. Look at the various clusters. Where do you see yourself in this map? --- Submit to Airtable
###Code
# @title Video 18: Submission info
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1e44y127ti", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JwTn7ej2dq8", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
This is Darryl, the Deep Learning Dapper Lion, and he's here to teach you about content submission to airtable. At the end of each tutorial there will be an Airtable Submission Cell. Run the cell to generate the airtable submission button and click on it to submit your information to airtable. If it is the last tutorial of the day, your button will look like this and take you to the end-of-day survey: otherwise it will look like this: It is critical that you push the submit button for every tutorial you run. Even if you don't finish the tutorial, still submit!Submitting is the only way we can verify that you attempted each tutorial, which is critical for us to be able to track your progress. TL;DR: Basic tutorial workflow1. work through the tutorial, answering Think! questions and code exercises2. at the end of each tutorial (even if the tutorial is incomplete), run the airtable submission code cell3. push the submission button4. if it is the last tutorial of the day, the submission button will also take you to the end-of-day survey on a new page. Complete that and submit it. Submission FAQs: 1. What if I want to change my answers to previous discussion questions? > You are free to change and resubmit any of the answers and Think! questions as many times as you like. However, please only run the airtable submission code and click on the link once you are ready to submit.2. Okay, but what if I submitted my airtable anyway and really want to resubmit?> After making changes, you can re-run the airtable submission code cell. This will result in a second submission from you for the data. This will make Darryl sad as it will be more work for him to clean up the data later. 3. HELP! I accidentally ran the code to generate the airtable submission button before I was ready to submit! What do I do?> If you run the code to generate the link, anything that happens afterwards will not be captured. Complete the tutorial and make sure to re-run the airtable submission again when you are finished, before pressing the submission button. 4. What if I want to work on this on my own later, should I wait to submit until I'm finished?> Please submit wherever you are at the end of the day. It's great that you want to keep working on this, but it's important to see the places where we tried things that didn't quite work out, so we can fix them for next year. Finally, we try to keep the airtable code as hidden as possible, but if you ever see any calls to `atform` such as `atform.add_event()` in the coding exercises, just know that is for saving airtable information only. It will not affect the code that is being run around it in any way, so please do not modify, comment out, or worry about any of those lines of code.Now, let's try submitting today's course to Airtable by running the next cell and clicking the button when it appears.
###Code
# @title Airtable Submission Link
from IPython import display as IPyDisplay
IPyDisplay.HTML(
f"""
<div>
<a href= "{atform.url()}" target="_blank">
<img src="https://github.com/NeuromatchAcademy/course-content-dl/blob/main/tutorials/static/SurveyButton.png?raw=1"
alt="button link to survey" style="width:410px"></a>
</div>""" )
###Output
_____no_output_____
###Markdown
--- Bonus - 60 years of Machine Learning Research in one Plotby [Hendrik Strobelt](http://hendrik.strobelt.com) (MIT-IBM Watson AI Lab) with support from Benjamin Hoover.In this notebook we visualize a subset* of 3,300 articles retrieved from the AllenAI [S2ORC dataset](https://github.com/allenai/s2orc). We represent each paper by a position that is the output of a dimensionality reduction method applied to a vector representation of each paper. The vector representation is the output of a neural network.*The selection is heavily biased by the keywords and methodology we used to filter. Please see the details section to learn about what we did.
###Code
# @title Import `altair` and load the data
!pip install altair vega_datasets --quiet
import requests
import numpy as np   # needed below for np.floor
import pandas as pd  # needed to load the position and metadata files
import altair as alt  # altair is used for defining data visualizations
# Source data files
# Position data file maps ID to x,y positions
# original link: http://gltr.io/temp/ml_regexv1_cs_ma_citation+_99perc.pos_umap_cosine_100_d0.1.json
POS_FILE = 'https://osf.io/qyrfn/download'
# original link: http://gltr.io/temp/ml_regexv1_cs_ma_citation+_99perc_clean.csv
# Metadata file maps ID to title, abstract, author,....
META_FILE = 'https://osf.io/vfdu6/download'
# data loading and wrangling
def load_data():
positions = pd.read_json(POS_FILE)
positions[['x', 'y']] = positions['pos'].to_list()
meta = pd.read_csv(META_FILE)
return positions.merge(meta, left_on='id', right_on='paper_id')
# load data
data = load_data()
# @title Define Visualization using Altair
YEAR_PERIOD = "quinquennial" # @param
selection = alt.selection_multi(fields=[YEAR_PERIOD], bind='legend')
data[YEAR_PERIOD] = (data["year"] / 5.0).apply(np.floor) * 5
chart = alt.Chart(data[["x", "y", "authors", "title", YEAR_PERIOD, "citation_count"]], width=800,
height=800).mark_circle(radius=2, opacity=0.2).encode(
alt.Color(YEAR_PERIOD+':O',
scale=alt.Scale(scheme='viridis', reverse=False, clamp=True, domain=list(range(1955,2020,5))),
# legend=alt.Legend(title='Total Records')
),
alt.Size('citation_count',
scale=alt.Scale(type="pow", exponent=1, range=[15, 300])
),
alt.X('x:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
alt.Y('y:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
tooltip=['title', 'authors'],
# size='citation_count',
# color="decade:O",
opacity=alt.condition(selection, alt.value(.8), alt.value(0.2)),
).add_selection(
selection
).interactive()
###Output
_____no_output_____
###Markdown
Let's look at the visualization. Each dot represents one paper. Close dots mean that the respective papers are more closely related than distant ones. The color indicates the 5-year period of when the paper was published. The dot size indicates the citation count (within the S2ORC corpus) as of July 2020. The view is **interactive** and allows for three main interactions. Try them and play around.1. hover over a dot to see a tooltip (title, author)2. select a year in the legend (right) to filter dots3. zoom in/out with scroll -- double click resets view
###Code
chart
###Output
_____no_output_____
###Markdown
QuestionsBy playing around, can you find some answers to the following questions?1. Can you find topical clusters? What cluster might occur because of a filtering error?2. Can you see a temporal trend in the data and clusters?3. Can you determine when deep learning methods started booming?4. Can you find the key papers that were written before the DL "winter" that define milestones for a cluster? (tip: look for large dots of different color) MethodsHere is what we did:1. Filtering of all papers that fulfilled the criteria: - are categorized as `Computer Science` or `Mathematics` - one of the following keywords appearing in title or abstract: `"machine learning|artificial intelligence|neural network|(machine|computer) vision|perceptron|network architecture| RNN | CNN | LSTM | BLEU | MNIST | CIFAR |reinforcement learning|gradient descent| Imagenet "`2. per year, remove all papers that are below the 99th percentile of citation count in that year3. embed each paper by using abstract+title in the SPECTER model4. project based on the embedding using UMAP5. visualize using Altair Find Authors
###Code
# @title Edit the `AUTHOR_FILTER` variable to full text search for authors.
AUTHOR_FILTER = "Rush " # @param space at the end means "word boundary"
### Don't ignore case when searching...
FLAGS = 0
### uncomment to ignore case
# FLAGS = re.IGNORECASE
## --- FILTER CODE.. make it your own ---
import re
data['issel'] = data['authors'].str.contains(AUTHOR_FILTER, na=False, flags=FLAGS, )
if data['issel'].mean()<0.0000000001:
print('No match found')
## --- FROM HERE ON VIS CODE ---
alt.Chart(data[["x", "y", "authors", "title", YEAR_PERIOD, "citation_count", "issel"]], width=800,
height=800) \
.mark_circle(stroke="black", strokeOpacity=1).encode(
alt.Color(YEAR_PERIOD+':O',
scale=alt.Scale(scheme='viridis', reverse=False),
# legend=alt.Legend(title='Total Records')
),
alt.Size('citation_count',
scale=alt.Scale(type="pow", exponent=1, range=[15, 300])
),
alt.StrokeWidth('issel:Q', scale=alt.Scale(type="linear", domain=[0,1], range=[0, 2]), legend=None),
alt.Opacity('issel:Q', scale=alt.Scale(type="linear", domain=[0,1], range=[.2, 1]), legend=None),
alt.X('x:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
alt.Y('y:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
tooltip=['title', 'authors'],
).interactive()
###Output
_____no_output_____
###Markdown
Tutorial 1: PyTorch**Week 1, Day 1: Basics and PyTorch****By Neuromatch Academy**__Content creators:__ Shubh Pachchigar, Vladimir Haltakov, Matthew Sargent, Konrad Kording__Content reviewers:__ Kelson Shilling-Scrivo, Deepak Raya, Siwei Bai__Content editors:__ Anoop Kulkarni, Spiros Chavlis__Production editors:__ Arush Tagade, Spiros Chavlis **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial ObjectivesWe then have a few specific objectives for this tutorial:* Learn about PyTorch and tensors* Tensor Manipulations* Data Loading* GPUs and CUDA Tensors* Train NaiveNet* Get to know your pod* Start thinking about the course as a whole
###Code
# @title Tutorial slides
# @markdown These are the slides for the videos in this tutorial today
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/wcjrv/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
--- Setup Throughout your Neuromatch tutorials, most (probably all!) notebooks contain setup cells. These cells will import the required Python packages (e.g., PyTorch, NumPy), set global or environment variables, and load in helper functions for things like plotting. In some tutorials, you will notice that we install some dependencies even if they are preinstalled on Google Colab or Kaggle. This happens because we have added automation to our repository through [GitHub Actions](https://docs.github.com/en/actions/learn-github-actions/introduction-to-github-actions).Be sure to run all of the cells in the setup section. Feel free to expand them and have a look at what you are loading in, but you should be able to fulfill the learning objectives of every tutorial without having to look at these cells.If you start building your own projects on top of this code base, we highly recommend looking at them in more detail.
###Code
# @title Install dependencies
!pip install pandas --quiet
!pip install git+https://github.com/NeuromatchAcademy/evaltools --quiet
from evaltools.airtable import AirtableForm
# Imports
import time
import torch
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from torch import nn
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
# @title Figure Settings
import ipywidgets as widgets
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/content-creation/main/nma.mplstyle")
# @title Helper Functions
atform = AirtableForm('appn7VdPRseSoMXEG','W1D1_T1','https://portal.neuromatchacademy.org/api/redirect/to/97e94a29-0b3a-4e16-9a8d-f6838a5bd83d')
def checkExercise1(A, B, C, D):
"""
Helper function for checking exercise.
Args:
A: torch.Tensor
B: torch.Tensor
C: torch.Tensor
D: torch.Tensor
Returns:
Nothing.
"""
errors = []
# TODO better errors and error handling
if not torch.equal(A.to(int),torch.ones(20, 21).to(int)):
errors.append(f"Got: {A} \n Expected: {torch.ones(20, 21)} (shape: {torch.ones(20, 21).shape})")
if not np.array_equal( B.numpy(),np.vander([1, 2, 3], 4)):
errors.append("B is not a tensor containing the elements of Z ")
if C.shape != (20, 21):
errors.append("C is not the correct shape ")
if not torch.equal(D, torch.arange(4, 41, step=2)):
errors.append("D does not contain the correct elements")
if errors == []:
print("All correct!")
else:
[print(e) for e in errors]
def timeFun(f, dim, iterations, device='cpu'):
iterations = iterations
t_total = 0
for _ in range(iterations):
start = time.time()
f(dim, device)
end = time.time()
t_total += end - start
print(f"time taken for {iterations} iterations of {f.__name__}({dim}): {t_total:.5f}")
###Output
_____no_output_____
###Markdown
**Important note: Google Colab users***Scratch Code Cells*If you want to quickly try out something or take a look at the data you can use scratch code cells. They allow you to run Python code, but will not mess up the structure of your notebook.To open a new scratch cell go to *Insert* → *Scratch code cell*. Section 1: Welcome to Neuromatch Deep learning course
###Code
# @title Video 1: Welcome and History
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Av411n7oL", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"ca21SNqt78I", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing
atform.add_event('Video 1: Welcome and History')
display(out)
###Output
_____no_output_____
###Markdown
This will be an intensive 3-week adventure. We will all learn Deep Learning. In a group. Groups need standards. Read our [Code of Conduct](https://docs.google.com/document/d/1eHKIkaNbAlbx_92tLQelXnicKXEcvFzlyzzeWjEtifM/edit?usp=sharing).
###Code
# @title Video 2: Why DL is cool
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1gf4y1j7UZ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"l-K6495BN-4", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 2: Why DL is cool')
display(out)
###Output
_____no_output_____
###Markdown
**Describe what you hope to get out of this course in about 100 words.** --- Section 2: The Basics of PyTorch PyTorch is a Python-based scientific computing package targeted at two sets of audiences:- A replacement for NumPy to use the power of GPUs- A deep learning platform that provides significant flexibility and speedAt its core, PyTorch provides a few key features:- A multidimensional [Tensor](https://pytorch.org/docs/stable/tensors.html) object, similar to [NumPy Array](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html) but with GPU acceleration.- An optimized **autograd** engine for automatically computing derivatives.- A clean, modular API for building and deploying **deep learning models**.You can find more information about PyTorch in the appendix. Section 2.1: Creating Tensors
###Code
# @title Video 3: Making Tensors
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Rw411d7Uy", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"jGKd_4tPGrw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 3: Making Tensors')
display(out)
###Output
_____no_output_____
###Markdown
There are various ways of creating tensors, and when doing any real deep learning project we will usually have to do so. **Construct tensors directly:**---
###Code
# we can construct a tensor directly from some common python iterables,
# such as list and tuple nested iterables can also be handled as long as the
# dimensions make sense
# tensor from a list
a = torch.tensor([0, 1, 2])
#tensor from a tuple of tuples
b = ((1.0, 1.1), (1.2, 1.3))
b = torch.tensor(b)
# tensor from a numpy array
c = np.ones([2, 3])
c = torch.tensor(c)
print(f"Tensor a: {a}")
print(f"Tensor b: {b}")
print(f"Tensor c: {c}")
###Output
_____no_output_____
###Markdown
**Some common tensor constructors:**---
###Code
# the numerical arguments we pass to these constructors
# determine the shape of the output tensor
x = torch.ones(5, 3)
y = torch.zeros(2)
z = torch.empty(1, 1, 5)
print(f"Tensor x: {x}")
print(f"Tensor y: {y}")
print(f"Tensor z: {z}")
###Output
_____no_output_____
###Markdown
Notice that ```.empty()``` does not return zeros, but seemingly random small numbers. Unlike ```.zeros()```, which initialises the elements of the tensor with zeros, ```.empty()``` just allocates the memory. It is hence a bit faster if you are looking to just create a tensor. **Creating random tensors and tensors like other tensors:**---
###Code
# there are also constructors for random numbers
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
# there are also constructors that allow us to construct
# a tensor according to the above constructors, but with
# dimensions equal to another tensor
c = torch.zeros_like(a)
d = torch.rand_like(c)
print(f"Tensor a: {a}")
print(f"Tensor b: {b}")
print(f"Tensor c: {c}")
print(f"Tensor d: {d}")
###Output
_____no_output_____
###Markdown
*Reproducibility*: - PyTorch random number generator: You can use `torch.manual_seed()` to seed the RNG for all devices (both CPU and CUDA)```pythonimport torchtorch.manual_seed(0)```- For custom operators, you might need to set python seed as well:```pythonimport randomrandom.seed(0)```- Random number generators in other libraries```pythonimport numpy as npnp.random.seed(0)``` Here, we define for you a function called `set_seed` that does the job for you!
###Code
def set_seed(seed=None, seed_torch=True):
"""
Function that controls randomness. NumPy and random modules must be imported.
Args:
seed : Integer
A non-negative integer that defines the random state. Default is `None`.
seed_torch : Boolean
If `True` sets the random seed for pytorch tensors, so pytorch module
must be imported. Default is `True`.
Returns:
Nothing.
"""
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
###Output
_____no_output_____
###Markdown
Now, let's use the `set_seed` function in the previous example. Execute the cell multiple times to verify that the numbers printed are always the same.
###Code
def simplefun(seed=True, my_seed=None):
if seed:
set_seed(seed=my_seed)
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
print("Tensor a: ", a)
print("Tensor b: ", b)
simplefun(seed=True, my_seed=0) # Turn `seed` to `False` or change `my_seed`
###Output
_____no_output_____
###Markdown
**Numpy-like number ranges:**---The ```.arange()``` and ```.linspace()``` behave how you would expect them to if you are familiar with numpy.
###Code
a = torch.arange(0, 10, step=1)
b = np.arange(0, 10, step=1)
c = torch.linspace(0, 5, steps=11)
d = np.linspace(0, 5, num=11)
print(f"Tensor a: {a}\n")
print(f"Numpy array b: {b}\n")
print(f"Tensor c: {c}\n")
print(f"Numpy array d: {d}\n")
###Output
_____no_output_____
###Markdown
Coding Exercise 2.1: Creating TensorsBelow you will find some incomplete code. Fill in the missing code to construct the specified tensors.We want the tensors: $A:$ 20 by 21 tensor consisting of ones$B:$ a tensor with elements equal to the elements of numpy array $Z$$C:$ a tensor with the same number of elements as $A$ but with values $\sim U(0,1)$$D:$ a 1D tensor containing the even numbers between 4 and 40 inclusive.
###Code
def tensor_creation(Z):
"""A function that creates various tensors.
Args:
Z (numpy.ndarray): An array of shape
Returns:
A : 20 by 21 tensor consisting of ones
B : a tensor with elements equal to the elements of numpy array Z
C : a tensor with the same number of elements as A but with values ∼U(0,1)
D : a 1D tensor containing the even numbers between 4 and 40 inclusive.
"""
#################################################
## TODO for students: fill in the missing code
## from the first expression
raise NotImplementedError("Student exercise: say what they should have done")
#################################################
A = ...
B = ...
C = ...
D = ...
return A, B, C, D
# add timing to airtable
atform.add_event('Coding Exercise 2.1: Creating Tensors')
# numpy array to copy later
Z = np.vander([1, 2, 3], 4)
# Uncomment below to check your function!
# A, B, C, D = tensor_creation(Z)
# checkExercise1(A, B, C, D)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_ad4f6c0f.py) ```All correct!``` Section 2.2: Operations in PyTorch**Tensor-Tensor operations**We can perform operations on tensors using methods under ```torch.```
###Code
# @title Video 4: Tensor Operators
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1G44y127As", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"R1R8VoYXBVA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 4: Tensor Operators')
display(out)
###Output
_____no_output_____
###Markdown
**Tensor-Tensor operations**We can perform operations on tensors using methods under ```torch.```
###Code
a = torch.ones(5, 3)
b = torch.rand(5, 3)
c = torch.empty(5, 3)
d = torch.empty(5, 3)
# this only works if c and d already exist
torch.add(a, b, out=c)
#Pointwise Multiplication of a and b
torch.multiply(a, b, out=d)
print(c)
print(d)
###Output
_____no_output_____
###Markdown
However, in PyTorch most common Python operators are overridden. The common standard arithmetic operators (+, -, *, /, and **) have all been lifted to elementwise operations.
###Code
x = torch.tensor([1, 2, 4, 8])
y = torch.tensor([1, 2, 3, 4])
x + y, x - y, x * y, x / y, x**y # The ** operator is exponentiation
###Output
_____no_output_____
###Markdown
**Tensor Methods** Tensors also have a number of common arithmetic operations built in. A full list of **all** methods can be found in the appendix (there are a lot!). All of these operations should have similar syntax to their numpy equivalents. (Feel free to skip if you already know this!)
###Code
x = torch.rand(3, 3)
print(x)
print("\n")
# sum() - note the axis is the axis you move across when summing
print(f"Sum of every element of x: {x.sum()}")
print(f"Sum of the columns of x: {x.sum(axis=0)}")
print(f"Sum of the rows of x: {x.sum(axis=1)}")
print("\n")
print(f"Mean value of all elements of x {x.mean()}")
print(f"Mean values of the columns of x {x.mean(axis=0)}")
print(f"Mean values of the rows of x {x.mean(axis=1)}")
###Output
_____no_output_____
###Markdown
**Matrix Operations**The ```@``` symbol is overridden to represent matrix multiplication. You can also use ```torch.matmul()``` to multiply tensors. For the dot product, you can use ```torch.dot()```, or manipulate the axes of your tensors and do matrix multiplication (we will cover that in the next section). Transposes of 2D tensors are obtained using ```torch.t()``` or ```Tensor.T```. Note the lack of brackets for ```Tensor.T``` - it is an attribute, not a method. Coding Exercise 2.2 : Simple tensor operationsBelow are two expressions involving operations on matrices. $$ \textbf{A} = \begin{bmatrix}2 &4 \\5 & 7 \end{bmatrix} \begin{bmatrix} 1 &1 \\2 & 3\end{bmatrix} + \begin{bmatrix}10 & 10 \\ 12 & 1 \end{bmatrix} $$and$$ b = \begin{bmatrix} 3 \\ 5 \\ 7\end{bmatrix} \cdot \begin{bmatrix} 2 \\ 4 \\ 8\end{bmatrix}$$The code block below that computes these expressions using PyTorch is incomplete - fill in the missing lines.
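Before the exercise, here is a quick illustrative sketch of the transpose forms mentioned above; the tensor `M` is just a throwaway example, not part of the exercise:

```python
# Quick sketch of the three transpose forms (M is an arbitrary 2x3 tensor)
M = torch.arange(6).reshape(2, 3)
print(torch.t(M).shape)  # torch.Size([3, 2]) -- function form
print(M.t().shape)       # torch.Size([3, 2]) -- method form (note the brackets)
print(M.T.shape)         # torch.Size([3, 2]) -- attribute form (no brackets)
```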
###Code
def simple_operations(a1: torch.Tensor, a2: torch.Tensor, a3: torch.Tensor):
################################################
  ## TODO for students: complete the first computation using the argument matrices
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
################################################
# multiplication of tensor a1 with tensor a2 and then add it with tensor a3
answer = ...
return answer
# add timing to airtable
atform.add_event('Coding Exercise 2.2 : Simple tensor operations-simple_operations')
# Computing expression 1:
# init our tensors
a1 = torch.tensor([[2, 4], [5, 7]])
a2 = torch.tensor([[1, 1], [2, 3]])
a3 = torch.tensor([[10, 10], [12, 1]])
## uncomment to test your function
# A = simple_operations(a1, a2, a3)
# print(A)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_5562ea1d.py) ```tensor([[20, 24], [31, 27]])```
###Code
def dot_product(b1: torch.Tensor, b2: torch.Tensor):
###############################################
  ## TODO for students: complete the second computation using the argument tensors
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
###############################################
# Use torch.dot() to compute the dot product of two tensors
product = ...
return product
# add timing to airtable
atform.add_event('Coding Exercise 2.2 : Simple tensor operations-dot_product')
# Computing expression 2:
b1 = torch.tensor([3, 5, 7])
b2 = torch.tensor([2, 4, 8])
## Uncomment to test your function
# b = dot_product(b1, b2)
# print(b)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_00491ea4.py) ```tensor(82)``` Section 2.3 Manipulating Tensors in Pytorch
###Code
# @title Video 5: Tensor Indexing
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1BM4y1K7pD", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"0d0KSJ3lJbg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 5: Tensor Indexing')
display(out)
###Output
_____no_output_____
###Markdown
**Indexing**Just as in numpy, elements in a tensor can be accessed by index. As in any numpy array, the first element has index 0 and ranges are specified to include the first element but exclude the last. We can access elements according to their relative position to the end of the list by using negative indices. Indexing is also referred to as slicing. For example, [-1] selects the last element; [1:3] selects the second and the third elements, and [:-2] will select all elements excluding the last and second-to-last elements.
###Code
x = torch.arange(0, 10)
print(x)
print(x[-1])
print(x[1:3])
print(x[:-2])
###Output
_____no_output_____
###Markdown
When we have multidimensional tensors, indexing rules work the same way as numpy.
###Code
# make a 5D tensor
x = torch.rand(1, 2, 3, 4, 5)
print(f" shape of x[0]:{x[0].shape}")
print(f" shape of x[0][0]:{x[0][0].shape}")
print(f" shape of x[0][0][0]:{x[0][0][0].shape}")
###Output
_____no_output_____
###Markdown
**Flatten and reshape**There are various methods for reshaping tensors. It is common to have to express 2D data in 1D format. Similarly, it is also common to have to reshape a 1D tensor into a 2D tensor. We can achieve this with the ```.flatten()``` and ```.reshape()``` methods.
###Code
z = torch.arange(12).reshape(6, 2)
print(f"Original z: \n {z}")
# 2D -> 1D
z = z.flatten()
print(f"Flattened z: \n {z}")
# and back to 2D
z = z.reshape(3, 4)
print(f"Reshaped (3x4) z: \n {z}")
###Output
_____no_output_____
###Markdown
You will also see the ```.view()``` method used a lot to reshape tensors. There is a subtle difference between ```.view()``` and ```.reshape()```, though for now we will just use ```.reshape()```. The documentation can be found in the appendix. **Squeezing tensors**When processing batches of data, you will quite often be left with singleton dimensions, e.g., [1, 10] or [256, 1, 3]. This dimension can quite easily mess up your matrix operations if you don't plan on it being there... In order to compress tensors along their singleton dimensions we can use the ```.squeeze()``` method. We can use the ```.unsqueeze()``` method to do the opposite.
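A quick aside on the ```.view()``` / ```.reshape()``` difference mentioned above, before we look at squeezing in code. This is a minimal sketch (the tensors `a` and `b` are throwaway examples): `.view()` never copies data and therefore only works when the memory layout is compatible, while `.reshape()` silently copies when it has to.

```python
# Minimal sketch: .view() needs a compatible (contiguous) layout, .reshape() copies if needed
a = torch.arange(6).reshape(2, 3)
b = a.t()                  # transposing makes the tensor non-contiguous
print(b.is_contiguous())   # False
print(b.reshape(6))        # works: reshape makes a copy when necessary
# print(b.view(6))         # would raise a RuntimeError because view cannot copy
```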
###Code
x = torch.randn(1, 10)
# printing the zeroth element of the tensor will not give us the first number!
print(x.shape)
print(f"x[0]: {x[0]}")
###Output
_____no_output_____
###Markdown
Because of that pesky singleton dimension, x[0] gave us the first row instead!
###Code
# lets get rid of that singleton dimension and see what happens now
x = x.squeeze(0)
print(x.shape)
print(f"x[0]: {x[0]}")
# adding singleton dimensions works a similar way, and is often used when tensors
# being added need same number of dimensions
y = torch.randn(5, 5)
print(f"shape of y: {y.shape}")
# lets insert a singleton dimension
y = y.unsqueeze(1)
print(f"shape of y: {y.shape}")
###Output
_____no_output_____
###Markdown
**Permutation**Sometimes our dimensions will be in the wrong order! For example, we may be dealing with RGB images with dim [3x48x64], but our pipeline expects the colour dimension to be the last dimension i.e. [48x64x3]. To get around this we can use ```.permute()```
###Code
# `x` has dimensions [color,image_height,image_width]
x = torch.rand(3, 48, 64)
# we want to permute our tensor to be [ image_height , image_width , color ]
x = x.permute(1, 2, 0)
# permute(1,2,0) means:
# the 0th dim of my new tensor = the 1st dim of my old tensor
# the 1st dim of my new tensor = the 2nd
# the 2nd dim of my new tensor = the 0th
print(x.shape)
###Output
_____no_output_____
###Markdown
You may also see ```.transpose()``` used. This works in a similar way as permute, but can only swap two dimensions at once. **Concatenation** In this example, we concatenate two matrices along rows (axis 0, the first element of the shape) vs. columns (axis 1, the second element of the shape). We can see that the first output tensor’s axis-0 length ( 6 ) is the sum of the two input tensors’ axis-0 lengths ( 3+3 ); while the second output tensor’s axis-1 length ( 8 ) is the sum of the two input tensors’ axis-1 lengths ( 4+4 ).
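Before the concatenation example below, here is a tiny sketch of ```.transpose()``` (the shape is just illustrative): it swaps exactly two dimensions, whereas ```.permute()``` reorders all of them at once.

```python
# .transpose() swaps exactly two dimensions (here dims 0 and 2)
x = torch.rand(3, 48, 64)
y = x.transpose(0, 2)
print(y.shape)  # torch.Size([64, 48, 3]) -- same shape as x.permute(2, 1, 0)
```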
###Code
# Create two tensors of the same shape
x = torch.arange(12, dtype=torch.float32).reshape((3, 4))
y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])
#concatenate them along rows
cat_rows = torch.cat((x, y), dim=0)
# concatenate along columns
cat_cols = torch.cat((x, y), dim=1)
# printing outputs
print('Concatenated by rows: shape{} \n {}'.format(list(cat_rows.shape), cat_rows))
print('\n Concatenated by colums: shape{} \n {}'.format(list(cat_cols.shape), cat_cols))
###Output
_____no_output_____
###Markdown
**Conversion to Other Python Objects**Converting a tensor to a NumPy array, or vice versa, is easy. Note that `torch.tensor()` always copies the data, so its result does not share memory with the source array. This is actually quite important: when you perform operations on the CPU or on GPUs, you do not want to halt computation, waiting to see whether the NumPy package of Python might want to be doing something else with the same chunk of memory. (In contrast, `Tensor.numpy()` on a CPU tensor and `torch.from_numpy()` do share memory with the underlying array.)When converting to a numpy array, the information being tracked by the tensor will be lost, i.e., the computational graph. This will be covered in detail when you are introduced to autograd tomorrow!
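A minimal sketch of the copy-versus-share distinction (the array `arr` and its values are arbitrary, purely for illustration):

```python
# torch.tensor() copies; torch.from_numpy() shares memory with the NumPy array
arr = np.ones(3)
t_copy = torch.tensor(arr)       # independent copy of the data
t_share = torch.from_numpy(arr)  # shares memory with `arr`
arr[0] = 7.0
print(t_copy)   # tensor([1., 1., 1.], dtype=torch.float64) -- unchanged
print(t_share)  # tensor([7., 1., 1.], dtype=torch.float64) -- reflects the change
```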
###Code
x = torch.randn(5)
print(f"x: {x} | x type: {x.type()}")
y = x.numpy()
print(f"y: {y} | y type: {type(y)}")
z = torch.tensor(y)
print(f"z: {z} | z type: {z.type()}")
###Output
_____no_output_____
###Markdown
To convert a size-1 tensor to a Python scalar, we can invoke the item function or Python’s built-in functions.
###Code
a = torch.tensor([3.5])
a, a.item(), float(a), int(a)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.3: Manipulating TensorsUsing a combination of the methods discussed above, complete the functions below. **Function A** This function takes in two 2D tensors $A$ and $B$ and returns the column sum of A multiplied by the sum of all the elements of $B$ i.e. a scalar, e.g.,:$ A = \begin{bmatrix}1 & 1 \\1 & 1 \end{bmatrix} \,$and$ B = \begin{bmatrix}1 & 2 & 3\\1 & 2 & 3 \end{bmatrix} \,$so$ \, Out = \begin{bmatrix} 2 & 2 \\\end{bmatrix} \cdot 12 = \begin{bmatrix}24 & 24\\\end{bmatrix}$**Function B** This function takes in a square matrix $C$ and returns a 2D tensor consisting of a flattened $C$ with the index of each element appended to this tensor in the row dimension, e.g.,:$ C = \begin{bmatrix}2 & 3 \\-1 & 10 \end{bmatrix} \,$so$ \, Out = \begin{bmatrix}0 & 2 \\1 & 3 \\2 & -1 \\3 & 10\end{bmatrix}$**Hint:** pay close attention to singleton dimensions**Function C**This function takes in two 2D tensors $D$ and $E$. If the dimensions allow it, this function returns the elementwise sum of $D$-shaped $E$, and $D$; else this function returns a 1D tensor that is the concatenation of the two tensors, e.g.,:$ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix} \,$and $ E = \begin{bmatrix}2 & 3 & 0 & 2 \\\end{bmatrix} \, $so$ \, Out = \begin{bmatrix}3 & 2 \\-1 & 5 \end{bmatrix}$$ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix}$and$ \, E = \begin{bmatrix}2 & 3 & 0 \\\end{bmatrix} \,$so$ \, Out = \begin{bmatrix}1 & -1 & -1 & 3 & 2 & 3 & 0 \end{bmatrix}$**Hint:** `torch.numel()` is an easy way of finding the number of elements in a tensor
###Code
def functionA(my_tensor1, my_tensor2):
"""
This function takes in two 2D tensors `my_tensor1` and `my_tensor2`
and returns the column sum of
`my_tensor1` multiplied by the sum of all the elmements of `my_tensor2`,
i.e., a scalar.
Args:
my_tensor1: torch.Tensor
my_tensor2: torch.Tensor
  Returns:
output: torch.Tensor
The multiplication of the column sum of `my_tensor1` by the sum of
`my_tensor2`.
"""
################################################
## TODO for students: complete functionA
raise NotImplementedError("Student exercise: complete function A")
################################################
# TODO multiplication the sum of the tensors
output = ...
return output
def functionB(my_tensor):
"""
This function takes in a square matrix `my_tensor` and returns a 2D tensor
consisting of a flattened `my_tensor` with the index of each element
appended to this tensor in the row dimension.
Args:
my_tensor: torch.Tensor
  Returns:
output: torch.Tensor
Concatenated tensor.
"""
################################################
## TODO for students: complete functionB
raise NotImplementedError("Student exercise: complete function B")
################################################
# TODO flatten the tensor `my_tensor`
my_tensor = ...
# TODO create the idx tensor to be concatenated to `my_tensor`
idx_tensor = ...
# TODO concatenate the two tensors
output = ...
return output
def functionC(my_tensor1, my_tensor2):
"""
This function takes in two 2D tensors `my_tensor1` and `my_tensor2`.
If the dimensions allow it, it returns the
  elementwise sum of `my_tensor1`-shaped `my_tensor2`, and `my_tensor1`;
else this function returns a 1D tensor that is the concatenation of the
two tensors.
Args:
my_tensor1: torch.Tensor
my_tensor2: torch.Tensor
  Returns:
output: torch.Tensor
Concatenated tensor.
"""
################################################
  ## TODO for students: complete functionC
raise NotImplementedError("Student exercise: complete function C")
################################################
# TODO check we can reshape `my_tensor2` into the shape of `my_tensor1`
if ...:
# TODO reshape `my_tensor2` into the shape of `my_tensor1`
my_tensor2 = ...
# TODO sum the two tensors
output = ...
else:
# TODO flatten both tensors
my_tensor1 = ...
my_tensor2 = ...
# TODO concatenate the two tensors in the correct dimension
output = ...
return output
# add timing to airtable
atform.add_event('Coding Exercise 2.3: Manipulating Tensors')
## Implement the functions above and then uncomment the following lines to test your code
# print(functionA(torch.tensor([[1, 1], [1, 1]]), torch.tensor([[1, 2, 3], [1, 2, 3]])))
# print(functionB(torch.tensor([[2, 3], [-1, 10]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0, 2]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0]])))
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_ea1718cb.py) ```tensor([24, 24])tensor([[ 0, 2], [ 1, 3], [ 2, -1], [ 3, 10]])tensor([[ 3, 2], [-1, 5]])tensor([ 1, -1, -1, 3, 2, 3, 0])``` Section 2.4: GPUs
###Code
# @title Video 6: GPU vs CPU
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1nM4y1K7qx", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"9Mc9GFUtILY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 6: GPU vs CPU')
display(out)
###Output
_____no_output_____
###Markdown
By default, when we create a tensor it will *not* live on the GPU!
###Code
x = torch.randn(10)
print(x.device)
###Output
_____no_output_____
###Markdown
When using Colab notebooks, by default you will not have access to a GPU. In order to start using GPUs we need to request one. We can do this by going to the runtime tab at the top of the page. By following Runtime -> Change runtime type and selecting "GPU" from the Hardware Accelerator dropdown list, we can start playing with sending tensors to GPUs. Once you have done this your runtime will restart and you will need to rerun the first setup cell to reimport PyTorch. Then proceed to the next cell. (For more information on the GPU usage policy, see the appendix.) **Now we have a GPU** The cell below should return True.
###Code
print(torch.cuda.is_available())
###Output
_____no_output_____
###Markdown
CUDA is an API developed by Nvidia for interfacing with GPUs. PyTorch provides us with a layer of abstraction, and allows us to launch CUDA kernels using pure Python. In short, we get the power of parallelising our tensor computations on GPUs, whilst only writing (relatively) simple Python! Here, we define the function `set_device`, which returns the device to use in the notebook, i.e., `cpu` or `cuda`. Unless otherwise specified, we use this function at the top of every tutorial, and we store the device in a variable such as```pythonDEVICE = set_device()```Let's define the function using the PyTorch package `torch.cuda`, which is lazily initialized, so we can always import it, and use `is_available()` to determine if our system supports CUDA.
###Code
def set_device():
device = "cuda" if torch.cuda.is_available() else "cpu"
if device != "cuda":
print("GPU is not enabled in this notebook. \n"
"If you want to enable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `GPU` from the dropdown menu")
else:
print("GPU is enabled in this notebook. \n"
"If you want to disable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `None` from the dropdown menu")
return device
###Output
_____no_output_____
###Markdown
Let's make some CUDA tensors!
###Code
# common device agnostic way of writing code that can run on cpu OR gpu
# that we provide for you in each of the tutorials
DEVICE = set_device()
# we can specify a device when we first create our tensor
x = torch.randn(2, 2, device=DEVICE)
print(x.dtype)
print(x.device)
# we can also use the .to() method to change the device a tensor lives on
y = torch.randn(2, 2)
print(f"y before calling to() | device: {y.device} | dtype: {y.type()}")
y = y.to(DEVICE)
print(f"y after calling to() | device: {y.device} | dtype: {y.type()}")
###Output
_____no_output_____
###Markdown
**Operations between cpu tensors and cuda tensors**Note that the type of the tensor changed after calling ```.to()```. What happens if we try to perform operations on tensors that live on different devices?
###Code
x = torch.tensor([0, 1, 2], device=DEVICE)
y = torch.tensor([3, 4, 5], device="cpu")
# Uncomment the following line and run this cell
# z = x + y
###Output
_____no_output_____
###Markdown
We cannot combine cuda tensors and cpu tensors in this fashion. If we want to compute an operation that combines tensors on different devices, we need to move them first! We can use the `.to()` method as before, or the `.cpu()` and `.cuda()` methods. Note that using `.cuda()` will throw an error if CUDA is not enabled on your machine. Generally in this course all deep learning is done on the GPU, while other computation is done on the CPU, so sometimes we have to pass things back and forth; you'll see us call these methods.
###Code
x = torch.tensor([0, 1, 2], device=DEVICE)
y = torch.tensor([3, 4, 5], device="cpu")
z = torch.tensor([6, 7, 8], device=DEVICE)
# moving to cpu
x = x.to("cpu") # alternatively, you can use x = x.cpu()
print(x + y)
# moving to gpu
y = y.to(DEVICE) # alternatively, you can use y = y.cuda()
print(y + z)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.4: Just how much faster are GPUs?Below is an incomplete function, `simpleFun`. Complete it so that all of its tensor operations run entirely on the requested device (CPU or GPU). We will use the helper function `timeFun(f, dim, iterations, device)`.
###Code
dim = 10000
iterations = 1
def simpleFun(dim, device):
"""
Args:
dim: integer
device: "cpu" or "cuda:0"
Returns:
Nothing.
"""
###############################################
## TODO for students: recreate the above function, but
## ensure all computation happens on the GPU
raise NotImplementedError("Student exercise: fill in the missing code to create the tensors")
###############################################
x = ...
y = ...
z = ...
x = ...
y = ...
del x
del y
del z
## TODO: Implement the function above and uncomment the following lines to test your code
# timeFun(f=simpleFun, dim=dim, iterations=iterations)
# timeFun(f=simpleFun, dim=dim, iterations=iterations, device=DEVICE)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_032dcba8.py) Sample output (depends on your hardware)```time taken for 1 iterations of simpleFun(10000): 28.50481time taken for 1 iterations of simpleFunGPU(10000): 0.91102``` **Discuss!**Try and reduce the dimensions of the tensors and increase the iterations. You can get to a point where the cpu only function is faster than the GPU function. Why might this be? Section 2.5: Datasets and Dataloaders
###Code
# @title Video 7: Getting Data
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1744y127SQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"LSkjPM1gFu0", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 7: Getting Data')
display(out)
###Output
_____no_output_____
###Markdown
When training neural network models you will be working with large amounts of data. Fortunately, PyTorch offers some great tools that help you organize and manipulate your data samples.
###Code
# Import dataset and dataloaders related packages
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
from torchvision.transforms import Compose, Grayscale
###Output
_____no_output_____
###Markdown
**Datasets**The `torchvision` package gives you easy access to many of the publicly available datasets. Let's load the [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset, which contains color images of 10 different classes, like vehicles and animals.Creating an object of type `datasets.CIFAR10` will automatically download and load all images from the dataset. The resulting data structure can be treated as a list containing data samples and their corresponding labels.
###Code
# Download and load the images from the CIFAR10 dataset
cifar10_data = datasets.CIFAR10(
root="data", # path where the images will be stored
download=True, # all images should be downloaded
transform=ToTensor() # transform the images to tensors
)
# Print the number of samples in the loaded dataset
print(f"Number of samples: {len(cifar10_data)}")
print(f"Class names: {cifar10_data.classes}")
###Output
_____no_output_____
###Markdown
We have 50000 samples loaded. Now let's take a look at one of them in detail. Each sample consists of an image and its corresponding label.
###Code
# Choose a random sample
random.seed(2021)
image, label = cifar10_data[random.randint(0, len(cifar10_data))]
print(f"Label: {cifar10_data.classes[label]}")
print(f"Image size: {image.shape}")
###Output
_____no_output_____
###Markdown
Color images are modeled as 3-dimensional tensors. The first dimension corresponds to the channels (C) of the image (in this case we have RGB images). The second dimension is the height (H) of the image and the third is the width (W). We can denote this image format as C × H × W. Coding Exercise 2.5: Display an image from the datasetLet's try to display the image using `matplotlib`. The code below will not work, because `imshow` expects to have the image in a different format - $H \times W \times C$.You need to reorder the dimensions of the tensor using the `permute` method of the tensor. PyTorch `torch.permute(*dims)` rearranges the original tensor according to the desired ordering and returns a new multidimensional rotated tensor. The size of the returned tensor remains the same as that of the original.**Code hint:**```python create a tensor of size 2 x 4input_var = torch.randn(2, 4) print its size and the tensorprint(input_var.size())print(input_var) dimensions permutedinput_var = input_var.permute(1, 0) print its size and the permuted tensorprint(input_var.size())print(input_var)```
###Code
# TODO: Uncomment the following line to see the error that arises from the current image format
# plt.imshow(image)
# TODO: Comment the above line and fix this code by reordering the tensor dimensions
# plt.imshow(image.permute(...))
# plt.show()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_b04bd357.py)*Example output:*
###Code
#@title Video 8: Train and Test
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1rV411H7s5", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JokSIuPs-ys", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 8: Train and Test')
display(out)
###Output
_____no_output_____
###Markdown
**Training and Test Datasets**When loading a dataset, you can specify if you want to load the training or the test samples using the `train` argument. We can load the training and test datasets separately. For simplicity, today we will not use both datasets separately, but this topic will be addressed in the coming days.
###Code
# Load the training samples
training_data = datasets.CIFAR10(
root="data",
train=True,
download=True,
transform=ToTensor()
)
# Load the test samples
test_data = datasets.CIFAR10(
root="data",
train=False,
download=True,
transform=ToTensor()
)
# @title Video 9: Data Augmentation - Transformations
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV19B4y1N77t", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"sjegA9OBUPw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 9: Data Augmentation - Transformations')
display(out)
###Output
_____no_output_____
###Markdown
**Dataloader**Another important concept is the `DataLoader`. It is a wrapper around the `Dataset` that splits it into minibatches (important for training the neural network) and makes the data iterable. The `shuffle` argument is used to shuffle the order of the samples across the minibatches.
###Code
# Create dataloaders with
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
*Reproducibility:* DataLoader will reseed workers following the Randomness in multi-process data loading algorithm. Use `worker_init_fn()` and a `generator` to preserve reproducibility:```pythondef seed_worker(worker_id): worker_seed = torch.initial_seed() % 2**32 numpy.random.seed(worker_seed) random.seed(worker_seed)g_seed = torch.Generator()g_seed.manual_seed(my_seed)DataLoader( train_dataset, batch_size=batch_size, num_workers=num_workers, worker_init_fn=seed_worker, generator=g_seed )``` **Note:** For the `seed_worker` to have an effect, `num_workers` should be 2 or more. We can now query the next batch from the data loader and inspect it. For this we need to convert the dataloader object to a Python iterator using the function `iter` and then we can query the next batch using the function `next`. We can now see that we have a 4D tensor. This is because we have 64 images in the batch ($B$) and each image has 3 dimensions: channels ($C$), height ($H$) and width ($W$). So, the size of the 4D tensor is $B \times C \times H \times W$.
###Code
# Load the next batch
batch_images, batch_labels = next(iter(train_dataloader))
print('Batch size:', batch_images.shape)
# Display the first image from the batch
plt.imshow(batch_images[0].permute(1, 2, 0))
plt.show()
###Output
_____no_output_____
###Markdown
**Transformations**Another useful feature when loading a dataset is applying transformations on the data - color conversions, normalization, cropping, rotation, etc. There are many predefined transformations in the `torchvision.transforms` package and you can also combine them using the `Compose` transform. Check out the [pytorch documentation](https://pytorch.org/vision/stable/transforms.html) for details. Coding Exercise 2.6: Load the CIFAR10 dataset as grayscale imagesThe goal of this exercise is to load the images from the CIFAR10 dataset as grayscale images. Note that we rerun the `set_seed` function to ensure reproducibility.
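As a generic illustration of chaining transforms with `Compose` (this sketch uses `Resize` on a synthetic image built from random data; it is not the solution to the exercise):

```python
# Illustrative only: chain two transforms and apply them to a synthetic PIL image
from PIL import Image
from torchvision.transforms import Compose, Resize, ToTensor
img = Image.fromarray((np.random.rand(32, 32, 3) * 255).astype(np.uint8))  # fake 32x32 RGB image
preprocess = Compose([Resize(16), ToTensor()])
out = preprocess(img)
print(out.shape)  # torch.Size([3, 16, 16])
```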
###Code
def my_data_load():
###############################################
  ## TODO for students: load the CIFAR10 data
  ## as grayscale images using a composed transform
raise NotImplementedError("Student exercise: fill in the missing code to load the data")
###############################################
## TODO Load the CIFAR10 data using a transform that converts the images to grayscale tensors
data = datasets.CIFAR10(...,
transform=...)
# Display a random grayscale image
image, label = data[random.randint(0, len(data))]
plt.imshow(image.squeeze(), cmap="gray")
plt.show()
return data
set_seed(seed=2021)
## After implementing the above code, uncomment the following lines to test your code
# data = my_data_load()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_6052d728.py)*Example output:* --- Section 3: Neural NetworksNow it's time for you to create your first neural network using PyTorch. This section will walk you through the process of:- Creating a simple neural network model- Training the network- Visualizing the results of the network- Tweaking the network
###Code
# @title Video 10: CSV Files
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1xy4y1T7kv", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JrC_UAJWYKU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 10: CSV Files')
display(out)
###Output
_____no_output_____
###Markdown
Section 3.1: Data LoadingFirst we need some sample data to train our network on. You can use the function below to generate an example dataset consisting of 2D points along two interleaving half circles. The data will be stored in a file called `sample_data.csv`. You can inspect the file directly in Colab by going to Files on the left side and opening the CSV file.
###Code
# @title Generate sample data
# @markdown we used `scikit-learn` module
from sklearn.datasets import make_moons
# Create a dataset of 256 points with a little noise
X, y = make_moons(256, noise=0.1)
# Store the data as a Pandas data frame and save it to a CSV file
df = pd.DataFrame(dict(x0=X[:,0], x1=X[:,1], y=y))
df.to_csv('sample_data.csv')
###Output
_____no_output_____
###Markdown
Now we can load the data from the CSV file using the Pandas library. Pandas provides many functions for reading files in various formats. When loading data from a CSV file, we can reference the columns directly by their names.
###Code
# Load the data from the CSV file in a Pandas DataFrame
data = pd.read_csv("sample_data.csv")
# Create a 2D numpy array from the x0 and x1 columns
X_orig = data[["x0", "x1"]].to_numpy()
# Create a 1D numpy array from the y column
y_orig = data["y"].to_numpy()
# Print the sizes of the generated 2D points X and the corresponding labels Y
print(f"Size X:{X_orig.shape}")
print(f"Size y:{y_orig.shape}")
# Visualize the dataset. The color of the points is determined by the labels `y_orig`.
plt.scatter(X_orig[:, 0], X_orig[:, 1], s=40, c=y_orig)
plt.show()
###Output
_____no_output_____
###Markdown
**Prepare Data for PyTorch**Now let's prepare the data in a format suitable for PyTorch - convert everything into tensors.
###Code
# Initialize the device variable
DEVICE = set_device()
# Convert the 2D points to a float32 tensor
X = torch.tensor(X_orig, dtype=torch.float32)
# Upload the tensor to the device
X = X.to(DEVICE)
print(f"Size X:{X.shape}")
# Convert the labels to a long integer tensor
y = torch.from_numpy(y_orig).type(torch.LongTensor)
# Upload the tensor to the device
y = y.to(DEVICE)
print(f"Size y:{y.shape}")
###Output
_____no_output_____
###Markdown
Section 3.2: Create a Simple Neural Network
###Code
# @title Video 11: Generating the Neural Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1fK4y1M74a", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"PwSzRohUvck", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 11: Generating the Neural Network')
display(out)
###Output
_____no_output_____
###Markdown
For this example we want to have a simple neural network consisting of 3 layers:- 1 input layer of size 2 (our points have 2 coordinates)- 1 hidden layer of size 16 (you can play with different numbers here)- 1 output layer of size 2 (we want to have the scores for the two classes)During the course you will deal with different kinds of neural networks. On Day 2 we will focus on linear networks, but you will work with some more complicated architectures in the next days. The example here is meant to demonstrate the process of creating and training a neural network end-to-end.**Programming the Network**PyTorch provides a base class for all neural network modules called [`nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html). You need to inherit from `nn.Module` and implement some important methods:`__init__`In the `__init__` method you need to define the structure of your network. Here you will specify what layers the network will consist of, what activation functions will be used, etc.`forward`All neural network modules need to implement the `forward` method. It specifies the computations the network needs to do when data is passed through it.`predict`This is not an obligatory method of a neural network module, but it is a good practice if you want to quickly get the most likely label from the network. It calls the `forward` method and chooses the label with the highest score.`train`This is also not an obligatory method, but it is a good practice to have. The method will be used to train the network parameters and will be implemented later in the notebook.> Note that you can use the `__call__` method of a module directly and it will invoke the `forward` method: `net()` does the same as `net.forward()`.
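To make that last note concrete, here is a minimal, self-contained sketch using a throwaway `TinyNet` module invented purely for illustration (it is not the `NaiveNet` defined in the next cell):

```python
import torch
from torch import nn

# TinyNet is a hypothetical module used only to illustrate __call__ vs. forward()
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(2, 2)

    def forward(self, x):
        return self.layer(x)

tiny = TinyNet()
x = torch.randn(4, 2)
# Calling the module directly goes through __call__, which invokes forward()
# (plus any registered hooks), so both lines compute exactly the same output
print(torch.equal(tiny(x), tiny.forward(x)))  # True
```

In practice you should prefer `net(x)` over `net.forward(x)`, since `__call__` also takes care of hooks.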
###Code
# Inherit from nn.Module - the base class for neural network modules provided by Pytorch
class NaiveNet(nn.Module):
# Define the structure of your network
def __init__(self):
super(NaiveNet, self).__init__()
# The network is defined as a sequence of operations
self.layers = nn.Sequential(
nn.Linear(2, 16), # Transformation from the input to the hidden layer
nn.ReLU(), # Activation function (ReLU) is a non-linearity which is widely used because it reduces computation. The function returns 0 if it receives any
# negative input, but for any positive value x, it returns that value back.
nn.Linear(16, 2), # Transformation from the hidden to the output layer
)
# Specify the computations performed on the data
def forward(self, x):
# Pass the data through the layers
return self.layers(x)
# Choose the most likely label predicted by the network
def predict(self, x):
    # Pass the data through the network
output = self.forward(x)
# Choose the label with the highest score
return torch.argmax(output, 1)
# Train the neural network (will be implemented later)
def train(self, X, y):
pass
###Output
_____no_output_____
###Markdown
**Check that your network works**Create an instance of your model and visualize it
###Code
# Create new NaiveNet and transfer it to the device
model = NaiveNet().to(DEVICE)
# Print the structure of the network
print(model)
###Output
_____no_output_____
###Markdown
Coding Exercise 3.2: Classify some samplesNow let's pass some of the points of our dataset through the network and see if it works. You should not expect the network to actually classify the points correctly, because it has not been trained yet. The goal here is just to get some experience with the data structures that are passed to the forward and predict methods and their results.
###Code
## Get the samples
# X_samples = ...
# print("Sample input:\n", X_samples)
## Do a forward pass of the network
# output = ...
# print("\nNetwork output:\n", output)
## Predict the label of each point
# y_predicted = ...
# print("\nPredicted labels:\n", y_predicted)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_af8ae0ff.py) ```Sample input: tensor([[ 0.9066, 0.5052], [-0.2024, 1.1226], [ 1.0685, 0.2809], [ 0.6720, 0.5097], [ 0.8548, 0.5122]], device='cuda:0')Network output: tensor([[ 0.1543, -0.8018], [ 2.2077, -2.9859], [-0.5745, -0.0195], [ 0.1924, -0.8367], [ 0.1818, -0.8301]], device='cuda:0', grad_fn=)Predicted labels: tensor([0, 0, 1, 0, 0], device='cuda:0')``` Section 3.3: Train Your Neural Network
###Code
# @title Video 12: Train the Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1v54y1n7CS", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"4MIqnE4XPaA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 12: Train the Network')
display(out)
###Output
_____no_output_____
###Markdown
Now it is time to train your network on your dataset. Don't worry if you don't fully understand everything yet - we will cover training in much more detail in the coming days. For now, the goal is just to see your network in action!You will usually implement the `train` method directly when implementing your class `NaiveNet`. Here, we will implement it as a function outside of the class in order to have it in a separate cell.
###Code
# @title Helper function to plot the decision boundary
# Code adapted from this notebook: https://jonchar.net/notebooks/Artificial-Neural-Network-with-Keras/
from pathlib import Path
def plot_decision_boundary(model, X, y, device):
# Transfer the data to the CPU
X = X.cpu().numpy()
y = y.cpu().numpy()
# Check if the frames folder exists and create it if needed
frames_path = Path("frames")
if not frames_path.exists():
frames_path.mkdir()
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
  # Predict the function value for the whole grid
grid_points = np.c_[xx.ravel(), yy.ravel()]
grid_points = torch.from_numpy(grid_points).type(torch.FloatTensor)
Z = model.predict(grid_points.to(device)).cpu().numpy()
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.binary)
# Implement the train function given a training dataset X and corresponding labels y
def train(model, X, y):
# The Cross Entropy Loss is suitable for classification problems
loss_function = nn.CrossEntropyLoss()
# Create an optimizer (Stochastic Gradient Descent) that will be used to train the network
learning_rate = 1e-2
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
# Number of epochs
epochs = 15000
# List of losses for visualization
losses = []
for i in range(epochs):
# Pass the data through the network and compute the loss
# We'll use the whole dataset during the training instead of using batches
    # in order to keep the code simple for now.
y_logits = model.forward(X)
loss = loss_function(y_logits, y)
# Clear the previous gradients and compute the new ones
optimizer.zero_grad()
loss.backward()
# Adapt the weights of the network
optimizer.step()
# Store the loss
losses.append(loss.item())
# Print the results at every 1000th epoch
if i % 1000 == 0:
print(f"Epoch {i} loss is {loss.item()}")
plot_decision_boundary(model, X, y, DEVICE)
plt.savefig('frames/{:05d}.png'.format(i))
return losses
# Create a new network instance and train it
model = NaiveNet().to(DEVICE)
losses = train(model, X, y)
###Output
_____no_output_____
###Markdown
**Plot the loss during training**Plot the loss during the training to see how it reduces and converges.
###Code
plt.plot(np.linspace(1, len(losses), len(losses)), losses)
plt.xlabel("Epoch")
plt.ylabel("Loss")
# @title Visualize the training process
# @markdown ### Execute this cell!
!pip install imageio --quiet
!pip install pathlib --quiet
import imageio
from IPython.core.interactiveshell import InteractiveShell
from IPython.display import Image, display
from pathlib import Path
InteractiveShell.ast_node_interactivity = "all"
# Make a list with all images
images = []
for i in range(10):
filename = "frames/0"+str(i)+"000.png"
images.append(imageio.imread(filename))
# Save the gif
imageio.mimsave('frames/movie.gif', images)
gifPath = Path("frames/movie.gif")
with open(gifPath,'rb') as f:
display(Image(data=f.read(), format='png'))
# @title Video 13: Play with it
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Cq4y1W7BH", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"_GGkapdOdSY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 13: Play with it')
display(out)
###Output
_____no_output_____
###Markdown
Exercise 3.3: Tweak your NetworkYou can now play around with the network a little bit to get a feeling of what different parameters are doing. Here are some ideas for what you could try:- Increase or decrease the number of epochs for training- Increase or decrease the size of the hidden layer- Add one additional hidden layer (a minimal sketch of this variant is shown below)Can you get the network to better fit the data?
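For the third idea, here is a minimal sketch of what a deeper variant could look like; the layer sizes are arbitrary choices for illustration, not a recommended architecture:

```python
import torch
from torch import nn

# A deeper variant of NaiveNet for experimentation (sizes are arbitrary)
class DeeperNet(nn.Module):
    def __init__(self, hidden_size=16):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(2, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, hidden_size),  # the extra hidden layer
            nn.ReLU(),
            nn.Linear(hidden_size, 2),
        )

    def forward(self, x):
        return self.layers(x)

    def predict(self, x):
        # Same convention as NaiveNet: return the label with the highest score
        return torch.argmax(self.forward(x), 1)

# It can be trained exactly like before, e.g.:
# model = DeeperNet(hidden_size=32).to(DEVICE)
# losses = train(model, X, y)
```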
###Code
# @title Video 14: XOR Widget
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1mB4y1N7QS", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"oTr1nE2rCWg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 14: XOR Widget')
display(out)
###Output
_____no_output_____
###Markdown
Exclusive OR (XOR) logical operation gives a true (`1`) output when the number of true inputs is odd. That is, a true output results if one, and only one, of the inputs to the gate is true. If both inputs are false (`0`) or both are true, a false output results. Mathematically speaking, XOR represents the inequality function, i.e., the output is true if the inputs are not alike; otherwise, the output is false.In case of two inputs ($X$ and $Y$) the following truth table is applied:\begin{array}{ccc}X & Y & \text{XOR} \\\hline0 & 0 & 0 \\0 & 1 & 1 \\1 & 0 & 1 \\1 & 1 & 0 \\\end{array}Here, with `0`, we denote `False`, and with `1` we denote `True` in boolean terms. Interactive Demo 3.3: Solving XORHere we use an open source and famous visualization widget developed by the TensorFlow team, available [here](https://github.com/tensorflow/playground).* Play with the widget and observe that you cannot solve the continuous XOR dataset.* Now add one hidden layer with three units, play with the widget, and set weights by hand to solve this dataset perfectly.For the second part, you should set the weights by clicking on the connections and either type the value or use the up and down keys to change it by one increment. You could also do the same for the biases by clicking on the tiny square to each neuron's bottom left.Even though there are infinitely many solutions, a neat solution when $f(x)$ is ReLU is: \begin{equation} y = f(x_1)+f(x_2)-f(x_1+x_2)\end{equation}Try to set the weights and biases to implement this function after you have played around enough :) A quick numerical check of this identity is sketched below.
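Here is a minimal sketch of that check. Note that the identity distinguishes inputs by their sign, so for illustration we encode the two logical values as −1 and +1 (as in the widget's continuous XOR dataset, where the class depends on the signs of the coordinates):

```python
import torch

f = torch.relu  # f(x) = ReLU(x)

# Encode False as -1 and True as +1 (the identity separates inputs by sign)
for x1 in (-1.0, 1.0):
    for x2 in (-1.0, 1.0):
        y = f(torch.tensor(x1)) + f(torch.tensor(x2)) - f(torch.tensor(x1 + x2))
        print(f"x1={x1:+.0f}, x2={x2:+.0f} -> y={y.item():.0f}")
# Prints 0 when the two signs agree and 1 when they differ -- the XOR pattern
```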
###Code
# @markdown ###Play with the parameters to solve XOR
from IPython.display import HTML
HTML('<iframe width="1020" height="660" src="https://playground.arashash.com/#activation=relu&batchSize=10&dataset=xor®Dataset=reg-plane&learningRate=0.03®ularizationRate=0&noise=0&networkShape=&seed=0.91390&showTestData=false&discretize=false&percTrainData=90&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false" allowfullscreen></iframe>')
# @markdown Do you think we can solve the discrete XOR (only 4 possibilities) with only 2 hidden units?
w1_min_xor = 'Select' #@param ['Select', 'Yes', 'No']
if w1_min_xor == 'No':
print("Correct!")
else:
print("How about giving it another try?")
###Output
_____no_output_____
###Markdown
--- Section 4: Ethics And Course Info
###Code
# @title Video 15: Ethics
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Hw41197oB", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Kt6JLi3rUFU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Video 16: Be a group
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1j44y1272h", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Sfp6--d_H1A", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Video 17: Syllabus
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1iB4y1N7uQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"cDvAqG_hAvQ", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Meet our lecturers:Week 1: the building blocks* [Konrad Kording](https://kordinglab.com)* [Andrew Saxe](https://www.saxelab.org/)* [Surya Ganguli](https://ganguli-gang.stanford.edu/)* [Ioannis Mitliagkas](http://mitliagkas.github.io/)* [Lyle Ungar](https://www.cis.upenn.edu/~ungar/)Week 2: making things work* [Alona Fyshe](https://webdocs.cs.ualberta.ca/~alona/)* [Alexander Ecker](https://eckerlab.org/)* [James Evans](https://sociology.uchicago.edu/directory/james-evans)* [He He](https://hhexiy.github.io/)* [Vikash Gilja](https://tnel.ucsd.edu/bio) and [Akash Srivastava](https://akashgit.github.io/)Week 3: more magic* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)* [Jane Wang](http://www.janexwang.com/) and [Feryal Behbahani](https://feryal.github.io/)* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)* [Josh Vogelstein](https://jovo.me/) and [Vincenzo Lamonaco](https://www.vincenzolomonaco.com/)Now, go to the [visualization of ICLR papers](https://iclr.cc/virtual/2021/paper_vis.html). Read a few abstracts. Look at the various clusters. Where do you see yourself in this map? --- Submit to Airtable
###Code
# @title Video 18: Submission info
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1e44y127ti", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JwTn7ej2dq8", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
This is Darryl, the Deep Learning Dapper Lion, and he's here to teach you about content submission to Airtable. At the end of each tutorial there will be an Airtable Submission Cell. Run the cell to generate the Airtable submission button and click on it to submit your information to Airtable. If it is the last tutorial of the day, your button will look like this and take you to the end-of-day survey: otherwise it will look like this: It is critical that you push the submit button for every tutorial you run. Even if you don't finish the tutorial, still submit!Submitting is the only way we can verify that you attempted each tutorial, which is critical for the award of your completion certificate at the end of the course.Finally, we try to keep the Airtable code as hidden as possible, but if you ever see any calls to `atform` such as `atform.add_event()` in the coding exercises, just know that is for saving Airtable information only. It will not affect the code that is being run around it in any way, so please do not modify, comment out, or worry about any of those lines of code.Now, let's try submitting today's course to Airtable by running the next cell and clicking the button when it appears.
###Code
# @title Airtable Submission Link
from IPython import display
display.HTML(
f"""
<div>
<a href= "{atform.url()}" target="_blank">
<img src="https://github.com/NeuromatchAcademy/course-content-dl/blob/main/tutorials/static/SurveyButton.png?raw=1"
alt="button link to survey" style="width:410px"></a>
</div>""" )
###Output
_____no_output_____
###Markdown
--- Bonus - 60 years of Machine Learning Research in one Plotby [Hendrik Strobelt](http://hendrik.strobelt.com) (MIT-IBM Watson AI Lab) with support from Benjamin Hoover.In this notebook we visualize a subset* of 3,300 articles retrieved from the AllenAI [S2ORC dataset](https://github.com/allenai/s2orc). We represent each paper by a position that is the output of a dimensionality reduction method applied to a vector representation of each paper. The vector representation is the output of a neural network.*The selection is heavily biased by the keywords and methodology we used to filter. Please see the details section to learn about what we did.
###Code
# @title Import `altair` and load the data
!pip install altair vega_datasets --quiet
import requests
import altair as alt # altair is defining data visualizations
# Source data files
# Position data file maps ID to x,y positions
# original link: http://gltr.io/temp/ml_regexv1_cs_ma_citation+_99perc.pos_umap_cosine_100_d0.1.json
POS_FILE = 'https://osf.io/qyrfn/download'
# original link: http://gltr.io/temp/ml_regexv1_cs_ma_citation+_99perc_clean.csv
# Metadata file maps ID to title, abstract, author,....
META_FILE = 'https://osf.io/vfdu6/download'
# data loading and wrangling
def load_data():
positions = pd.read_json(POS_FILE)
positions[['x', 'y']] = positions['pos'].to_list()
meta = pd.read_csv(META_FILE)
return positions.merge(meta, left_on='id', right_on='paper_id')
# load data
data = load_data()
# @title Define Visualization using Altair
YEAR_PERIOD = "quinquennial" # @param
selection = alt.selection_multi(fields=[YEAR_PERIOD], bind='legend')
data[YEAR_PERIOD] = (data["year"] / 5.0).apply(np.floor) * 5
chart = alt.Chart(data[["x", "y", "authors", "title", YEAR_PERIOD, "citation_count"]], width=800,
height=800).mark_circle(radius=2, opacity=0.2).encode(
alt.Color(YEAR_PERIOD+':O',
scale=alt.Scale(scheme='viridis', reverse=False, clamp=True, domain=list(range(1955,2020,5))),
# legend=alt.Legend(title='Total Records')
),
alt.Size('citation_count',
scale=alt.Scale(type="pow", exponent=1, range=[15, 300])
),
alt.X('x:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
alt.Y('y:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
tooltip=['title', 'authors'],
# size='citation_count',
# color="decade:O",
opacity=alt.condition(selection, alt.value(.8), alt.value(0.2)),
).add_selection(
selection
).interactive()
###Output
_____no_output_____
###Markdown
Let's look at the visualization. Each dot represents one paper. Close dots mean that the respective papers are more closely related than distant ones. The color indicates the 5-year period in which the paper was published. The dot size indicates the citation count (within the S2ORC corpus) as of July 2020. The view is **interactive** and allows for three main interactions. Try them and play around:1. hover over a dot to see a tooltip (title, author)2. select a year in the legend (right) to filter dots3. zoom in/out with scroll -- double click resets view
###Code
chart
###Output
_____no_output_____
###Markdown
QuestionsBy playing around, can you find some answers to the following questions?1. Can you find topical clusters? What cluster might occur because of a filtering error?2. Can you see a temporal trend in the data and clusters?3. Can you determine when deep learning methods started booming?4. Can you find the key papers that were written before the DL "winter" that define milestones for a cluster? (tip: look for large dots of different color) MethodsHere is what we did:1. Filtering of all papers that fulfilled the criteria: - are categorized as `Computer Science` or `Mathematics` - one of the following keywords appearing in title or abstract: `"machine learning|artificial intelligence|neural network|(machine|computer) vision|perceptron|network architecture| RNN | CNN | LSTM | BLEU | MNIST | CIFAR |reinforcement learning|gradient descent| Imagenet "`2. per year, remove all papers that are below the 99th percentile of citation count in that year3. embed each paper by using abstract+title in the SPECTER model4. project based on the embedding using UMAP5. visualize using Altair Find Authors
###Code
# @title Edit the `AUTHOR_FILTER` variable to full text search for authors.
AUTHOR_FILTER = "Rush "  # @param space at the end means "word boundary"
### Don't ignore case when searching...
FLAGS = 0
### uncomment to ignore case
# FLAGS = re.IGNORECASE
## --- FILTER CODE.. make it your own ---
import re
data['issel'] = data['authors'].str.contains(AUTHOR_FILTER, na=False, flags=FLAGS, )
if data['issel'].mean()<0.0000000001:
print('No match found')
## --- FROM HERE ON VIS CODE ---
alt.Chart(data[["x", "y", "authors", "title", YEAR_PERIOD, "citation_count", "issel"]], width=800,
height=800) \
.mark_circle(stroke="black", strokeOpacity=1).encode(
alt.Color(YEAR_PERIOD+':O',
scale=alt.Scale(scheme='viridis', reverse=False),
# legend=alt.Legend(title='Total Records')
),
alt.Size('citation_count',
scale=alt.Scale(type="pow", exponent=1, range=[15, 300])
),
alt.StrokeWidth('issel:Q', scale=alt.Scale(type="linear", domain=[0,1], range=[0, 2]), legend=None),
alt.Opacity('issel:Q', scale=alt.Scale(type="linear", domain=[0,1], range=[.2, 1]), legend=None),
alt.X('x:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
alt.Y('y:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
tooltip=['title', 'authors'],
).interactive()
###Output
_____no_output_____
###Markdown
Neuromatch Academy: Week 1, Day 1, Tutorial 1 PyTorch__Content creators:__ Shubh Pachchigar, Vladimir Haltakov, Matthew Sargent__Content reviewers:__ Kelson Shilling-Scrivo, Deepak Raya__Content editors:__ Anoop Kulkarni__Production editors:__ Arush Tagade, Spiros Chavlis --- Tutorial ObjectivesWe have a few specific objectives for this tutorial:* Learn about PyTorch and tensors* Tensor Manipulations* Data Loading* GPUs and Cuda Tensors* Train NaiveNet* Get to know your pod* Start thinking about the course as a whole --- Setup Throughout your Neuromatch tutorials, most (probably all!) notebooks contain setup cells. These cells will import the required Python packages (e.g., PyTorch, numpy), set global or environment variables, and load in helper functions for things like plotting.Be sure to run all of the cells in the setup section. Feel free to expand them and have a look at what you are loading in, but you should be able to fulfill the learning objectives of every tutorial without having to look at these cells.If you start building your own projects on top of this code base, we highly recommend looking at them in more detail.
###Code
#@title Imports
import torch
import numpy as np
from torch import nn
import matplotlib.pyplot as plt
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
import time
###Output
_____no_output_____
###Markdown
---
###Code
#@title Helper Functions
def checkExercise1(A: torch.Tensor, B: torch.Tensor ,C:torch.Tensor, D:torch.Tensor):
errors = []
#TODO better errors
if not torch.equal(A,torch.ones(20,21)):
errors.append("A is not a 20 by 21 tensor of ones ")
if not np.array_equal( B.numpy(),np.vander([1,2,3], 4)):
errors.append("B is not a tensor containing the elements of Z ")
if C.shape != (20,21):
errors.append("C is not the correct shape ")
if not torch.equal(D,torch.arange(4,41,step=2)):
errors.append("D does not contain the correct elements")
if errors == []:
print("All correct!")
else:
print(errors)
def timeFun(f, iterations):
iterations = iterations
t_total = 0
for _ in range(iterations):
start = time.time()
f()
end = time.time()
t_total += end - start
print(f"time taken for {iterations} iterations of {f.__name__}: {t_total}")
###Output
_____no_output_____
###Markdown
Section 1: Welcome to Neuromatch Deep learning course
###Code
#@title Video 1.1: Welcome and History
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="ca21SNqt78I", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
This will be an intensive 3-week adventure. We will all learn Deep Learning. In a group. Groups need standards. Read our [code of conduct](https://docs.google.com/document/d/1eHKIkaNbAlbx_92tLQelXnicKXEcvFzlyzzeWjEtifM/edit?usp=sharing).TODO: ADD EXERCISE: DESCRIBE WHAT YOU HOPE TO GET OUT OF THIS COURSE IN ABOUT 100 WORDS.
###Code
#@title Video 1.2: Syllabus
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="cDvAqG_hAvQ", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
Meet our lecturers:Week 1: the building blocks* [Konrad Kording](https://kordinglab.com)* [Andrew Saxe](https://www.saxelab.org/)* [Surya Ganguli](https://ganguli-gang.stanford.edu/)* [Ioannis Mitliagkas](http://mitliagkas.github.io/)* [Lyle Ungar](https://www.cis.upenn.edu/~ungar/)Week 2: making things work* [Alona Fyshe](https://webdocs.cs.ualberta.ca/~alona/)* [Alexander Ecker](https://eckerlab.org/)* [James Evans](https://sociology.uchicago.edu/directory/james-evans)* [He He](https://hhexiy.github.io/)* [Vikash Gilja](https://tnel.ucsd.edu/bio) and [Akash Srivastava](https://akashgit.github.io/)Week 3: more magic* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)* [Jane Wang](http://www.janexwang.com/)* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)* [Josh Vogelstein](https://jovo.me/) and [Vincenzo Lamonaco](https://www.vincenzolomonaco.com/)Now, go to the visualization of ICLR papers. Read a few abstracts. Look at the various clusters. Where do you see yourself in this map? Section 2: The Basics of PyTorch PyTorch is a Python-based scientific computing package targeted at two sets of audiences:- A replacement for NumPy to use the power of GPUs- A deep learning platform that provides significant flexibility and speedAt its core, PyTorch provides a few key features:- A multidimensional [Tensor](https://pytorch.org/docs/stable/tensors.html) object, similar to [NumPy Array](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html) but with GPU acceleration.- An optimized **autograd** engine for automatically computing derivatives.- A clean, modular API for building and deploying **deep learning models**.You can find more information about PyTorch in the appendix. Section 2.1: Creating Tensors
###Code
#@title Video 2.1: Making Tensors
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="jGKd_4tPGrw", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
There are various ways of creating tensors. **Construct tensors directly:**---
###Code
# we can construct a tensor directly from some common python iterables,
# such as list and tuple nested iterables can also be handled as long as the
# dimensions make sense
# tensor from a list
a = torch.tensor([0,1,2])
#tensor from a tuple of tuples
b = ((1.0, 1.1), (1.2, 1.3))
b = torch.tensor(b)
# tensor from a numpy array
c = np.ones([2, 3])
c = torch.tensor(c)
print("Tensor a:", a)
print("Tensor b:", b)
print("Tensor c:", c)
###Output
_____no_output_____
###Markdown
**Some common tensor constructors:**---
###Code
# the numerical arguments we pass to these constructors
# determine the shape of the output tensor
x = torch.ones(5, 3)
y = torch.zeros(2)
z = torch.empty(1, 1,5)
print("Tensor x:", x)
print("Tensor y:", y)
print("Tensor z:", z)
###Output
_____no_output_____
###Markdown
Notice that ```.empty()``` does not return zeros, but seemingly random small numbers. Unlike ```.zeros()```, which initialises the elements of the tensor with zeros, ```.empty()``` just allocates the memory. It is hence faster if you are looking to just create a tensor. **Creating random tensors and tensors like other tensors:**---
###Code
# there are also constructors for random numbers
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
# there are also constructors that allow us to construct
# a tensor according to the above constructors, but with
# dimensions equal to another tensor
c = torch.zeros_like(a)
d = torch.rand_like(c)
print("Tensor a: ", a)
print("Tensor b: ", b)
print("Tensor c: ", c)
print("Tensor d: ", d)
###Output
_____no_output_____
###Markdown
**Numpy-like number ranges:**---The ```.arange()``` and ```.linspace()``` behave how you would expect them to if you are familiar with numpy.
###Code
a = torch.arange(0, 10, step=1)
b = np.arange(0, 10, step=1)
c = torch.linspace(0, 5, steps=11)
d = np.linspace(0, 5, num=11)
print("Tensor a: ", a)
print("Numpy array b: ", b)
print("Tensor c: ", c)
print("Numpy array d: ", d)
###Output
_____no_output_____
###Markdown
Exercise 1: Creating TensorsBelow you will find some incomplete code. Fill in the missing code to construct the specified tensors.We want the tensors: $A:$ 20 by 21 tensor consisting of ones$B:$ a tensor with elements equal to the elements of numpy array $Z$$C:$ a tensor with the same number of elements as $A$ but with values $\sim U(0,1)$$D:$ a 1D tensor containing the even numbers between 4 and 40 inclusive.
###Code
def tensor_creation(Z):
"""A function that creates various tensors.
Args:
Z (numpy.ndarray): An array of shape
Returns:
A : 20 by 21 tensor consisting of ones
B : a tensor with elements equal to the elements of numpy array Z
C : a tensor with the same number of elements as A but with values ∼U(0,1)
D : a 1D tensor containing the even numbers between 4 and 40 inclusive.
"""
#################################################
## TODO for students: fill in the missing code
## from the first expression
raise NotImplementedError("Student exercise: say what they should have done")
#################################################
A = ...
B = ...
C = ...
D = ...
return A, B, C, D
# numpy array to copy later
Z = np.vander([1, 2, 3], 4)
# Uncomment below to check your function!
# A, B, C, D = tensor_creation(Z)
# checkExercise1(A, B, C, D)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_abcfc5da.py) Section 2.2: Operations in PyTorch**Tensor-Tensor operations**We can perform operations on tensors using methods under ```torch.```
###Code
#@title Video 2.2: Tensor Operators
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="R1R8VoYXBVA", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
**Tensor-Tensor operations**We can perform operations on tensors using methods under ```torch.```
###Code
a = torch.ones(5, 3)
b = torch.rand(5, 3)
c = torch.empty(5, 3)
d = torch.empty(5, 3)
torch.add(a, b, out=c)
torch.multiply(a, b, out=d)
print(c)
print(d)
###Output
_____no_output_____
###Markdown
However, in PyTorch most common Python operators are overridden.The common standard arithmetic operators (+, -, *, /, and **) have all been lifted to elementwise operations
###Code
x = torch.tensor([1.0, 2, 4, 8])
y = torch.tensor([2, 2, 2, 2])
x + y, x - y, x * y, x / y, x**y # The ** operator is exponentiation
###Output
_____no_output_____
###Markdown
**Tensor Methods** Tensors also have a number of common arithmetic operations built in. A full list of **all** methods can be found in the appendix (there are a lot!) All of these operations should have similar syntax to their numpy equivalents.(Feel free to skip if you already know this!)
###Code
x = torch.rand(3, 3)
print(x)
print("\n")
# sum() - note the axis is the axis you move across when summing
print("Sum of every element of x: ", x.sum())
print("Sum of the columns of x: ", x.sum(axis=0))
print("Sum of the rows of x: ", x.sum(axis=1))
print("\n")
print("Mean value of all elements of x ", x.mean())
print("Mean values of the columns of x ", x.mean(axis=0))
print("Mean values of the rows of x ", x.mean(axis=1))
###Output
_____no_output_____
###Markdown
**Matrix Operations**The ```@``` symbol is overridden to represent matrix multiplication. You can also use ```torch.matmul()``` to multiply tensors. For dot multiplication, you can use ```torch.dot()```, or manipulate the axes of your tensors and do matrix multiplication (we will cover that in the next section). Transposes of 2D tensors are obtained using ```torch.t()``` or ```Tensor.T```. Note the lack of brackets for ```Tensor.T``` - it is an attribute, not a method. Exercise 2: Simple tensor operationsBelow are two expressions involving operations on matrices. $$ \textbf{A} = \begin{bmatrix}2 &4 \\5 & 7 \end{bmatrix} \begin{bmatrix} 1 &1 \\2 & 3\end{bmatrix} + \begin{bmatrix}10 & 10 \\ 12 & 1 \end{bmatrix} $$and$$ b = \begin{bmatrix} 3 \\ 5 \\ 7\end{bmatrix} \cdot \begin{bmatrix} 2 \\ 4 \\ 8\end{bmatrix}$$The code block below that computes these expressions using PyTorch is incomplete - fill in the missing lines.
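As a quick reference before the exercise, here is a small sketch of these operators on throwaway matrices (deliberately unrelated to the matrices in the exercise):

```python
import torch

M = torch.tensor([[1., 2.], [3., 4.]])
N = torch.tensor([[0., 1.], [1., 0.]])
v = torch.tensor([1., 2., 3.])
w = torch.tensor([4., 5., 6.])

print(M @ N)               # matrix multiplication with the @ operator
print(torch.matmul(M, N))  # the same multiplication via torch.matmul()
print(torch.dot(v, w))     # dot product of two 1D tensors -> tensor(32.)
print(M.T)                 # transpose via the .T attribute
print(torch.t(M))          # transpose via torch.t()
```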
###Code
# Computing expression 1:
# init our tensors
a1 = torch.tensor([[2, 4], [5, 7]])
def simple_operations(a1):
################################################
## TODO for students: create the a2 and a3 matrices
## from the first expression
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
a2 = ...
a3 = ...
answer = ...
return answer
## TODO for students: complete the function above and assign
## the result to a tensor named A
#A = simple_operations(a1)
#print(A)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_425bd3c3.py)
###Code
# Computing expression 2:
def dot_product():
###############################################
## TODO for students: create the b1 and b2 matrices
## from the second expression
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
###############################################
b1 = ...
b2 = ...
product = ...
return product
## TODO for students: compute the expression above and assign
## the result to a tensor named b
#b = dot_product()
#print(b)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_cf6b9a5f.py) Section 2.3 Manipulating Tensors in Pytorch
###Code
#@title Video 2.3: Tensor Indexing
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="0d0KSJ3lJbg", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
**Indexing**Just as in numpy, elements in a tensor can be accessed by index. As in any numpy array, the first element has index 0 and ranges are specified to include the first but before the last element. We can access elements according to their relative position to the end of the list by using negative indices.For example, [-1] selects the last element; [1:3] selects the second and the third elements, and [:-2] will select all elements excluding the last and second-to-last elements.
###Code
x = torch.arange(0, 10)
print(x)
print(x[-1])
print(x[1:3])
print(x[:-2])
###Output
_____no_output_____
###Markdown
When we have multidimensional tensors, indexing rules work the same way as numpy.
###Code
# make a 5D tensor
x = torch.rand(1, 2, 3, 4, 5)
print(" shape of x[0]:", x[0].shape)
print(" shape of x[0][0]:", x[0][0].shape)
print(" shape of x[0][0][0]:", x[0][0][0].shape)
###Output
_____no_output_____
###Markdown
**Flatten and reshape**There are various methods for reshaping tensors. It is common to have to express 2D data in 1D format. Similarly, it is also common to have to reshape a 1D tensor into a 2D tensor. We can achieve this with the ```.flatten()``` and ```.reshape()``` methods.
###Code
z = torch.arange(12).reshape(6, 2)
print("Original z: \n ", z)
# 2D -> 1D
z = z.flatten()
print("Flattened z: \n ", z)
# and back to 2D
z = z.reshape(3, 4)
print("Reshaped (3x4) z: \n", z)
###Output
_____no_output_____
###Markdown
You will also see the ```.view()``` method used a lot to reshape tensors. There is a subtle difference between ```.view()``` and ```.reshape()``` (a short illustration follows below), though for now we will just use ```.reshape()```. The documentation can be found in the appendix. **Squeezing tensors**When processing batches of data, you will quite often be left with singleton dimensions, e.g. [1, 10] or [256, 1, 3]. This dimension can quite easily mess up your matrix operations if you don't plan on it being there...In order to compress tensors along their singleton dimensions we can use the ```.squeeze()``` method. We can use the ```.unsqueeze()``` method to do the opposite.
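On the `.view()` vs `.reshape()` point: `.view()` only works on tensors whose data is laid out contiguously in memory, while `.reshape()` falls back to copying when a view is impossible. A minimal sketch of this difference (the squeeze/unsqueeze examples continue in the next cell):

```python
import torch

a = torch.arange(6).reshape(2, 3)
b = a.t()                 # the transpose is a non-contiguous view of a's memory
print(b.is_contiguous())  # False

# b.view(6)               # would raise a RuntimeError because b is not contiguous
c = b.reshape(6)          # works: reshape copies the data when a view is impossible
print(c)                  # tensor([0, 3, 1, 4, 2, 5])
```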
###Code
x = torch.randn(1, 10)
# printing the zeroth element of the tensor will not give us the first number!
print(x.shape)
print("x[0]: ", x[0])
###Output
_____no_output_____
###Markdown
Because of that pesky singleton dimension, x[0] gave us the first row instead!
###Code
# lets get rid of that singleton dimension and see what happens now
x = x.squeeze(0)
print(x.shape)
print("x[0]: ", x[0])
# adding singleton dimensions works a similar way, and is often used when tensors
# being added need same number of dimensions
y = torch.randn(5, 5)
print("shape of y: ", y.shape)
# lets insert a singleton dimension
y = y.unsqueeze(1)
print("shape of y: ", y.shape)
###Output
_____no_output_____
###Markdown
**Permutation**Sometimes our dimensions will be in the wrong order! For example, we may be dealing with RGB images with dim [3x48x64], but our pipeline expects the colour dimension to be the last dimension i.e. [48x64x3]. To get around this we can use ```.permute()```
###Code
# `x` has dimensions [color,image_height,image_width]
x = torch.rand(3, 48, 64)
# we want to permute our tensor to be [ image_height , image_width , color ]
x = x.permute(1, 2, 0)
# permute(1,2,0) means:
# the 0th dim of my new tensor = the 1st dim of my old tensor
# the 1st dim of my new tensor = the 2nd
# the 2nd dim of my new tensor = the 0th
print(x.shape)
###Output
_____no_output_____
###Markdown
**Concatenation** In this example, we concatenate two matrices along rows (axis 0, the first element of the shape) vs. columns (axis 1, the second element of the shape). We can see that the first output tensor’s axis-0 length ( 6 ) is the sum of the two input tensors’ axis-0 lengths ( 3+3 ); while the second output tensor’s axis-1 length ( 8 ) is the sum of the two input tensors’ axis-1 lengths ( 4+4 ).
###Code
# Create two tensors of the same shape
x = torch.arange(12, dtype=torch.float32).reshape((3, 4))
y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])
#concatenate them along rows
cat_rows = torch.cat((x, y), dim=0)
# concatenate along columns
cat_cols = torch.cat((x, y), dim=1)
# printing outputs
print('Concatenated by rows: shape{} \n {}'.format(list(cat_rows.shape), cat_rows))
print('\n Concatenated by colums: shape{} \n {}'.format(list(cat_cols.shape), cat_cols))
###Output
_____no_output_____
###Markdown
**Conversion to Other Python Objects**Converting to a NumPy array, or vice versa, is easy. Be aware of the memory semantics, though: for a CPU tensor, `Tensor.numpy()` returns an array that shares the underlying memory with the tensor, whereas constructing a new tensor with `torch.tensor()` (as in the cell below) copies the data.When converting to a numpy array, the information being tracked by the tensor will be lost, i.e. the computational graph. This will be covered in detail when you are introduced to autograd tomorrow!
###Code
x = torch.randn(5)
print(f"x: {x} | x type: {x.type()}")
y = x.numpy()
print(f"y: {y} | y type: {type(y)}")
z = torch.tensor(y)
print(f"z: {z} | z type: {z.type()}")
###Output
_____no_output_____
###Markdown
To convert a size-1 tensor to a Python scalar, we can invoke the item function or Python’s built-in functions.
###Code
a = torch.tensor([3.5])
a, a.item(), float(a), int(a)
###Output
_____no_output_____
###Markdown
Exercise 3: Manipulating TensorsUsing a combination of the methods discussed above, complete the functions below. **Function A** This function takes in two 2D tensors $A$ and $B$ and returns the column sum of $A$ multiplied by the sum of all the elements of $B$ i.e. a scalar. e.g: $ A = \begin{bmatrix}1 & 1 \\1 & 1 \end{bmatrix}$ $ B = \begin{bmatrix}1 & 2 & 3\\1 & 2 & 3 \end{bmatrix}$$ Out = 12 * \begin{bmatrix}2 & 2\\\end{bmatrix} = \begin{bmatrix}24 & 24\\\end{bmatrix}$**Function B** This function takes in a square matrix $C$ and returns a 2D tensor consisting of a flattened $C$ with the index of each element appended to this tensor in the row dimension. e.g: $ C = \begin{bmatrix}2 & 3 \\-1 & 10 \end{bmatrix}$ $ Out = \begin{bmatrix}0 & 2 \\1 & 3 \\2 & -1 \\3 & 10\end{bmatrix}$**Hint:** pay close attention to singleton dimensions**Function C (maybe cut this depending on time constraints)**This function takes in two 2D tensors $D$ and $E$. If the dimensions allow it, this function returns the elementwise sum of $E$ reshaped into the dimensions of $D$, and $D$; else this function returns a 1D tensor that is the concatenation of the two tensors. e.g. $ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix}$ $ E = \begin{bmatrix}2 & 3 & 0 & 2 \\\end{bmatrix}$ $ Out = \begin{bmatrix}3 & 2 \\-1 & 5 \end{bmatrix}$ $ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix}$ $ E = \begin{bmatrix}2 & 3 & 0 \\\end{bmatrix}$ $ Out = \begin{bmatrix}1 & -1 & -1 & 3 & 2 & 3 & 0 \end{bmatrix}$**Hint:** ```torch.numel()``` is an easy way of finding the number of elements in a tensor
###Code
################################################
## TODO for students: complete these functions
def functionA(A: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
## TODO for students
raise NotImplementedError("Student exercise: complete function A")
output = torch.zeros(2)
return output
def functionB(C: torch.Tensor) -> torch.Tensor:
raise NotImplementedError("Student exercise: complete function B")
# TODO flatten the tensor C
C = ...
# TODO create the idx tensor to be concatenated to C
idx_tensor = ...
# TODO concatenate the two tensors
output = ...
output = torch.zeros(1)
return output
def functionC(D: torch.Tensor, E: torch.Tensor) -> torch.Tensor:
raise NotImplementedError("Student exercise: complete function C")
# TODO check we can reshape E into the shape of D
if ... :
# TODO reshape E into the shape of D
E = ...
# TODO sum the two tensors
output = ...
else:
# TODO flatten both tensors
D = ...
E = ...
# TODO concatenate the two tensors in the correct dimension
output = ...
return output
##TODO: Implement the functions above and then uncomment the following lines to test your code
#print(functionA(torch.tensor([[1,1], [1,1]]), torch.tensor([ [1,2,3],[1,2,3] ]) ))
#print(functionB(torch.tensor([ [2,3],[-1,10] ])))
#print(functionC(torch.tensor([[1, -1],[-1,3]]), torch.tensor([[2,3,0,2]])))
#print(functionC(torch.tensor([[1, -1],[-1,3]]), torch.tensor([[2,3,0]])))
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_66b7c15d.py) Section 2.4: GPUs
###Code
#@title Video 2.4: GPU vs CPU
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="9Mc9GFUtILY", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
By default, when we create a tensor it will *not* live on the GPU!
###Code
x = torch.randn(10)
print(x.device)
###Output
_____no_output_____
###Markdown
When using Colab notebooks, by default you will not have access to a GPU. In order to start using GPUs we need to request one. We can do this by going to the Runtime tab at the top of the page. By following Runtime -> Change runtime type and selecting "GPU" from the Hardware Accelerator dropdown list, we can start playing with sending tensors to GPUs.Once you have done this, your runtime will restart and you will need to rerun the first setup cell to reimport PyTorch. Then proceed to the next cell.(For more information on the GPU usage policy, see the appendix.) **Now we have a GPU** The cell below should return True.
###Code
print(torch.cuda.is_available())
###Output
_____no_output_____
###Markdown
CUDA is an API developed by Nvidia for interfacing with GPUs. PyTorch provides us with a layer of abstraction, and allows us to launch CUDA kernels using pure Python. *NOTE I am assuming that GPU stuff might be covered in more detail on another day but there could be a bit more detail here*In short, we get the power of parallelising our tensor computations on GPUs, whilst only writing (relatively) simple Python!Let's make some CUDA tensors!
###Code
# common device agnostic way of writing code that can run on cpu OR gpu
# that we provide for you in each of the tutorials
device = "cuda" if torch.cuda.is_available() else "cpu"
# we can specify a device when we first create our tensor
x = torch.randn(2, 2, device=device)
print(x.dtype)
print(x.device)
# we can also use the .to() method to change the device a tensor lives on
y = torch.randn(2,2)
print(f"y before calling to() | device: {y.device} | dtype: {y.type()}")
y = y.to(device)
print(f"y after calling to() | device: {y.device} | dtype: {y.type()}")
###Output
_____no_output_____
###Markdown
**Operations between cpu tensors and cuda tensors**Note that the type of the tensor changed after calling ```.to()```. What happens if we try to perform operations on tensors that live on different devices?
###Code
x = torch.tensor([0, 1, 2], device="cuda")
y = torch.tensor([3, 4, 5], device="cpu")
#Uncomment the following line and run this cell
#z = x + y
###Output
_____no_output_____
###Markdown
We cannot combine cuda tensors and cpu tensors in this fashion. If we want to compute an operation that combines tensors on different devices, we need to move them first! We can use the ```.to()``` method as before, or the ```.cpu()``` and ```.cuda()``` methods. Generally in this course all deep learning is done on the GPU, while other computation stays on the CPU, so sometimes we have to pass things back and forth - you'll see us call these methods like this:
###Code
x = torch.tensor([0, 1, 2], device="cuda")
y = torch.tensor([3, 4, 5], device="cpu")
z = torch.tensor([6, 7, 8], device="cuda")
# moving to cpu
x = x.cpu()
print(x + y)
# moving to gpu
y = y.cuda()
print(y + z)
###Output
_____no_output_____
###Markdown
Exercise 4: Just how much faster are GPUs?Below is a simple function. Complete the second function, such that it performs the same operations as the first function, but entirely on the GPU.
###Code
def simpleFun():
x = torch.rand(10000, 10000)
y = torch.rand_like(x)
z = 2*torch.ones(10000, 10000)
x = x * y
x = x @ z
def simpleFunGPU():
###############################################
## TODO for students: recreate the above function, but
## ensure all computation happens on the GPU
raise NotImplementedError("Student exercise: fill in the missing code to create the tensors")
x = ...
y = ...
z = ...
x = ...
y = ...
##TODO: Implement the function above and uncomment the following lines to test your code
#timeFun(simpleFun, iterations = 1 )
#timeFun(simpleFunGPU, iterations = 1)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_92b36af6.py) Section 2.5: Datasets and Dataloaders
###Code
#@title Video 2.5: Getting Data
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="LSkjPM1gFu0", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
When training neural network models you will be working with large amounts of data. Fortunately, PyTorch offers some great tools that help you organize and manipulate your data samples.**Datasets**The `torchvision` package gives you easy access to many of the publicly available datasets. Let's load the [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset, which contains color images of 10 different classes, like vehicles and animals.Creating an object of type `datasets.CIFAR10` will automatically download and load all images from the dataset.
###Code
# Download and load the images from the CIFAR10 dataset
cifar10_data = datasets.CIFAR10(
root="data", # path where the images will be stored
download=True, # all images should be downloaded
transform=ToTensor() # transform the images to tensors
)
# Print the number of samples in the loaded dataset
print('Number of samples:', len(cifar10_data))
###Output
_____no_output_____
###Markdown
We have 50000 samples loaded. Now let's take a look at one of them in detail. Each sample consists of an image and its corresponding label.
###Code
import random
# Predefined label names
cifar10_labels = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"]
# Choose a random sample
image, label = cifar10_data[random.randint(0, len(cifar10_data) - 1)]
print('Label:', cifar10_labels[label])
print('Image size:', image.shape)
###Output
_____no_output_____
###Markdown
Color images are modeled as 3-dimensional tensors. The first dimension corresponds to the channels of the image (in this case we have RGB images). The second dimension is the height of the image and the third is the width. We can denote this image format as C × H × W. Exercise 5: Display an image from the datasetLet's try to display the image using `matplotlib`. The code below will not work, because `imshow` expects to have the image in a different format - H × W × C.You need to reorder the dimensions of the tensor using the `permute` method of the tensor.
###Code
# TODO: Uncomment the following line to see the error that arises from the current image format
# plt.imshow(image)
# TODO: Comment the above line and fix this code by reordering the tensor dimensions
# plt.imshow(image.permute(...))
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_82257c1c.py)*Example output:*
###Code
#@title Video 2.6: Train and Test
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="JokSIuPs-ys", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
**Training and Test Datasets**When loading a dataset, you can specify if you want to load the training or the test samples using the `train` argument. We can load the training and test datasets separately.
###Code
# Load the training samples
training_data = datasets.CIFAR10(
root="data",
train=True,
download=True,
transform=ToTensor()
)
# Load the test samples
test_data = datasets.CIFAR10(
root="data",
train=False,
download=True,
transform=ToTensor()
)
#@title Video 2.7: Data Augmentation - Transformations
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="sjegA9OBUPw", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
**Dataloader**Another important concept is the `Dataloader`. It is a wrapper around the `Dataset` that splits it into minibatches (important for training the neural network) and makes the data iterable. The `shuffle` argument is used to shuffle the order of the samples across the minibatches.
###Code
# Create dataloaders with a batch size of 64, shuffling the samples
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
We can now query the next batch from the data loader and inspect it. We see that we have a 4D tensor. This is because we have 64 images in the batch and each image has 3 dimensions: channels, height, and width.
###Code
# Load the next batch
batch_images, batch_labels = next(iter(train_dataloader))
print('Batch size:', batch_images.shape)
# Display the first image from the batch
plt.imshow(batch_images[0].permute(1, 2, 0))
###Output
_____no_output_____
###Markdown
**Transformations**Another useful feature when loading a dataset is applying transformations on the data - color conversions, normalization, cropping, rotation etc. There are many predefined transformations in the `torchvision.transforms` package and you can also combine them using the `Compose` transform. Exercise 6: Load the CIFAR10 dataset as grayscale imagesThe goal of this exercise is to load the images from the CIFAR10 dataset as grayscale images.
###Code
from torchvision.transforms import Compose, Grayscale
# TODO Load the CIFAR10 data using a transform that converts the images to grayscale tensors
# data = datasets.CIFAR10( ...
# TODO After implementing the above code, uncomment the following lines to test your code
# Display a random grayscale image
# image, label = data[random.randint(0, len(data) - 1)]
# plt.imshow(image.squeeze(), cmap="gray")
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_1d2870c1.py)*Example output:* Section 3: Neural NetworksNow it's time for you to create your first neural network using PyTorch. This section will walk you through the process of:- Creating a simple neural network model- Training the network- Visualizing the results of the network- Tweaking the network
###Code
#@title Video 3.1: CSV Files
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="JrC_UAJWYKU", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
Section 3.1: Data LoadingFirst we need some sample data to train our network on. You can use the function below to generate an example dataset consisting of 2D points along two interleaving half circles. The data will be stored in a file called `sample_data.csv`. You can inspect the file directly in Colab by going to Files on the left side and opening the CSV file.
###Code
#@title Generate sample data
import sklearn.datasets
import pandas as pd
# Create a dataset of 256 points with a little noise
X, y = sklearn.datasets.make_moons(256, noise=0.1)
# Store the data as a Pandas data frame and save it to a CSV file
df = pd.DataFrame(dict(x0=X[:,0], x1=X[:,1], y=y))
df.to_csv('sample_data.csv')
###Output
_____no_output_____
###Markdown
Now we can load the data from the CSV file using the Pandas library. Pandas provides many functions for reading files in various formats. When loading data from a CSV file, we can reference the columns directly by their names.
###Code
import pandas as pd
# Load the data from the CSV file in a Pandas DataFrame
data = pd.read_csv("sample_data.csv")
# Create a 2D numpy array from the x0 and x1 columns
X_orig = data[["x0", "x1"]].to_numpy()
# Create a 1D numpy array from the y column
y_orig = data["y"].to_numpy()
# Print the sizes of the generated 2D points X and the corresponding labels Y
print("Size X:", X_orig.shape)
print("Size y:", y_orig.shape)
# Visualize the dataset
plt.scatter(X_orig[:, 0], X_orig[:, 1], s=40, c=y_orig)
###Output
_____no_output_____
###Markdown
**Prepare Data for PyTorch**Now let's prepare the data in a format suitable for PyTorch - convert everything into tensors.
###Code
# Convert the 2D points to a float tensor
X = torch.from_numpy(X_orig).type(torch.FloatTensor)
# Upload the tensor to the device
X = X.to(device)
print("Size X:", X.shape)
# Convert the labels to a long integer tensor
y = torch.from_numpy(y_orig).type(torch.LongTensor)
# Upload the tensor to the device
y = y.to(device)
print("Size y:", y.shape)
#@title Video 3.2: Generating the Neural Network
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="PwSzRohUvck", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
Section 3.2: Create a Simple Neural NetworkFor this example we want to have a simple neural network consisting of 3 layers:- 1 input layer of size 2 (our points have 2 coordinates)- 1 hidden layer of size 16 (you can play with different numbers here)- 1 output layer of size 2 (we want to have the scores for the two classes)**Programming the Network**PyTorch provides a base class for all neural network modules called [`nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html). You need to inherit from `nn.Module` and implement some important methods: `__init__`In the `__init__` method you need to define the structure of your network. Here you will specify which layers the network consists of, which activation functions are used, etc. `forward`All neural network modules need to implement the `forward` method. It specifies the computations the network needs to do when data is passed through it. `predict`This is not an obligatory method of a neural network module, but it is a good practice if you want to interpret the result of the network as a probability distribution. `train`This is also not an obligatory method, but it is a good practice to have. The method will be used to train the network parameters and will be implemented later in the notebook.> Note that you can use the `__call__` method of a module directly and it will invoke the `forward` method: `net()` does the same as `net.forward()`.
###Code
import torch.nn.functional as F
# Inherit from nn.Module - the base class for neural network modules provided by Pytorch
class NaiveNet(nn.Module):
# Define the structure of your network
def __init__(self):
super(NaiveNet, self).__init__()
# The network is defined as a sequence of operations
self.layers = nn.Sequential(
nn.Linear(2, 16), # Transformation from the input to the hidden layer
nn.ReLU(), # Activation function (ReLU)
nn.Linear(16, 2), # Transformation from the hidden to the output layer
)
# Specify the computations performed on the data
def forward(self, x):
# Pass the data through the layers
return self.layers(x)
# Convert the output of the network to a probability distribution
def predict(self, x):
# Pass the data through the networks
output = self.forward(x)
# Choose the label with the highest score
return torch.argmax(output, 1)
# Train the neural network (will be implemented later)
  def train(self, X, y):
pass
###Output
_____no_output_____
###Markdown
**Check that your network works**Create an instance of your model and visualize it
###Code
# Create new NaiveNet and transfer it to the device
model = NaiveNet().to(device)
# Print the structure of the network
print(model)
###Output
_____no_output_____
###Markdown
Exercise 7: Classify some samplesNow let's pass some of the points of our dataset through the network and see if it works. You should not expect the network to actually classify the points correctly, because it has not been trained yet.
###Code
#X_samples = ...
#print("Sample input:", X_samples)
# Do a forward pass of the network
#output = ...
#print("Network output:", output)
# Predict the label of each point
# y_predicted = ...
# print("Predicted labels:", y_predicted)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_f4cd20c3.py) Section 3.3: Train Your Neural Network
###Code
#@title Video 3.3: Train the Network
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="4MIqnE4XPaA", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
Now it is time to train your network on your dataset. We will not go into details of the training process for now - this will be covered in the next days. The goal for now is to see your network in action.
###Code
#@title Helper function to plot the decision boundary
from pathlib import Path
def plot_decision_boundary(model, X, y):
# Transfer the data to the CPU
X = X.cpu().numpy()
y = y.cpu().numpy()
# Check if the frames folder exists and create it if needed
frames_path = Path("frames")
if not frames_path.exists():
frames_path.mkdir()
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
  # Predict the function value for the whole grid
grid_points = np.c_[xx.ravel(), yy.ravel()]
grid_points = torch.from_numpy(grid_points).type(torch.FloatTensor)
Z = model.predict(grid_points.to(device)).cpu().numpy()
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.binary)
# Implement the train function
def train(self, X, y):
# The Cross Entropy Loss is suitable for classification problems
loss_function = nn.CrossEntropyLoss()
# Create an optimizer (Stochastic Gradient Descent) that will be used to train the network
learning_rate = 1e-2
  optimizer = torch.optim.SGD(self.parameters(), lr=learning_rate)
# Number of epochs
epochs = 15000
# List of losses for visualization
losses = []
for i in range(epochs):
# Pass the data through the network and compute the loss
    y_logits = self(X)
loss = loss_function(y_logits, y)
# Clear the previous gradients and compute the new ones
optimizer.zero_grad()
loss.backward()
# Adapt the weights of the network
optimizer.step()
# Store the loss
losses.append(loss.item())
# Print the results at every 1000th epoch
if i % 1000 == 0:
print(f"Epoch {i} loss is {loss.item()}")
      plot_decision_boundary(self, X, y)
plt.savefig('frames/{:05d}.png'.format(i))
return losses
# Replace the train function in the NaiveNet class
NaiveNet.train = train
# Create a new network instance a train it
model = NaiveNet().to(device)
losses = model.train(X, y)
###Output
_____no_output_____
###Markdown
**Plot the loss during training**Plot the loss during the training to see how it reduces and converges.
###Code
plt.plot(np.linspace(1, len(losses), len(losses)), losses)
#@title Visualize the training process
import imageio
images = []
for i in range(0, 15000, 1000):
  # frames were saved every 1000th epoch as 'frames/00000.png', 'frames/01000.png', ...
  filename = "frames/{:05d}.png".format(i)
images.append(imageio.imread(filename))
imageio.mimsave('frames/movie.gif', images)
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
from IPython import display
from pathlib import Path
gifPath = Path("frames/movie.gif")
with open(gifPath,'rb') as f:
display.Image(data=f.read(), format='png')
#@title Video 3.4: Play with it
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="_GGkapdOdSY", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
Exercise 8: Tweak your NetworkYou can now play around with the network a little bit to get a feeling for what the different parameters do. Here are some ideas of what you could try:- Increase or decrease the number of epochs for training- Increase or decrease the size of the hidden layer- Add one additional hidden layerCan you get the network to better fit the data? A sketch of one possible variation is shown below.
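For example, here is a minimal sketch of a variant with one extra hidden layer, built by extending the `nn.Sequential` block of `NaiveNet` (the layer sizes are only illustrative, not a recommended setting):

```python
class DeeperNet(nn.Module):
  def __init__(self):
    super(DeeperNet, self).__init__()
    self.layers = nn.Sequential(
        nn.Linear(2, 32),   # wider first hidden layer
        nn.ReLU(),
        nn.Linear(32, 16),  # one additional hidden layer
        nn.ReLU(),
        nn.Linear(16, 2),   # output layer with one score per class
    )

  def forward(self, x):
    return self.layers(x)
```

The `predict` and `train` logic shown earlier can be attached to this class in the same way, since it only relies on `forward` and `parameters()`.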
###Code
#@title Video 3.5: XOR Widget
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="cnu7pyRx_u0", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
Exercise 9: Solving XORHere we use an open source and famous visualization widget developed by the TensorFlow team, available [here](https://github.com/tensorflow/playground).* Play with the widget and observe that you cannot solve the continuous XOR dataset* Now add one hidden layer with three units, play with the widget, and set weights by hand to solve this dataset perfectly.For the second part, you should set the weights by clicking on the connections and either type the value or use the up and down keys to change it by one increment. You could also do the same for the biases by clicking on the tiny square at each neuron's bottom left.Even though there are infinitely many solutions, a neat solution when $f(x)$ is ReLU is: $$y = f(x_1) + f(x_2) - f(x_1 + x_2)$$Try to set the weights and biases to implement this function after you have played enough :)
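As a quick numerical sanity check of this formula, here is a minimal sketch using one representative point per quadrant of the continuous XOR dataset (a label of 1 means the two coordinates have opposite signs):

```python
# f = ReLU;  y = f(x1) + f(x2) - f(x1 + x2)
points = torch.tensor([[1., 1.], [1., -1.], [-1., 1.], [-1., -1.]])
x1, x2 = points[:, 0], points[:, 1]
y = torch.relu(x1) + torch.relu(x2) - torch.relu(x1 + x2)
print(y)  # tensor([0., 1., 1., 0.]) -- 1 exactly when the signs of x1 and x2 differ
```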
###Code
# @title Interactive Demo
from IPython.display import HTML
HTML('<iframe width="1020" height="660" src="https://playground.arashash.com/#activation=relu&batchSize=10&dataset=xor&regDataset=reg-plane&learningRate=0.03&regularizationRate=0&noise=0&networkShape=&seed=0.91390&showTestData=false&discretize=false&percTrainData=90&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false" allowfullscreen></iframe>')
#@markdown Do you think we can solve the discrete XOR (only 4 possibilities) with only 2 hidden units?
w1_min_xor = 'No' #@param ['Select', 'Yes', 'No']
if w1_min_xor == 'No':
print("Correct!")
else:
print("How about giving it another try?")
#@title Video 4: Ethics
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Kt6JLi3rUFU", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
ETHICS: Let us watch the *Coded Bias* documentary together and discuss. Bonus
###Code
#@title Video 5: Be a group
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Sfp6--d_H1A", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
#@title Video 6: It's a wrap!
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="JwTn7ej2dq8", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
Tutorial 1: PyTorch**Week 1, Day 1: Basics and PyTorch****By Neuromatch Academy**__Content creators:__ Shubh Pachchigar, Vladimir Haltakov, Matthew Sargent, Konrad Kording__Content reviewers:__ Kelson Shilling-Scrivo, Deepak Raya, Siwei Bai__Content editors:__ Anoop Kulkarni, Spiros Chavlis__Production editors:__ Arush Tagade, Spiros Chavlis **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial ObjectivesWe have a few specific objectives for this tutorial:* Learn about PyTorch and tensors* Tensor Manipulations* Data Loading* GPUs and CUDA Tensors* Train NaiveNet* Get to know your pod* Start thinking about the course as a whole
###Code
# @title Tutorial slides
# @markdown These are the slides for the videos in this tutorial today
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/wcjrv/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
--- Setup Throughout your Neuromatch tutorials, most (probably all!) notebooks contain setup cells. These cells will import the required Python packages (e.g., PyTorch, NumPy); set global or environment variables, and load in helper functions for things like plotting. In some tutorials, you will notice that we install some dependencies even if they are preinstalled on Google Colab or Kaggle. This happens because we have added automation to our repository through [GitHub Actions](https://docs.github.com/en/actions/learn-github-actions/introduction-to-github-actions). Be sure to run all of the cells in the setup section. Feel free to expand them and have a look at what you are loading in, but you should be able to fulfill the learning objectives of every tutorial without having to look at these cells. If you start building your own projects on top of this code base, we highly recommend looking at these cells in more detail.
###Code
# @title Install dependencies
!pip install pandas --quiet
!pip install git+https://github.com/NeuromatchAcademy/evaltools --quiet
# Imports
import time
import torch
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from torch import nn
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
from evaltools.airtable import AirtableForm
# @title Figure Settings
import ipywidgets as widgets
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/content-creation/main/nma.mplstyle")
# @title Helper Functions
atform = AirtableForm('appn7VdPRseSoMXEG','W1D1_T1','https://portal.neuromatchacademy.org/api/redirect/to/97e94a29-0b3a-4e16-9a8d-f6838a5bd83d')
def checkExercise1(A, B, C, D):
"""
Helper function for checking exercise.
Args:
A: torch.Tensor
B: torch.Tensor
C: torch.Tensor
D: torch.Tensor
Returns:
Nothing.
"""
errors = []
# TODO better errors and error handling
if not torch.equal(A.to(int),torch.ones(20, 21).to(int)):
errors.append(f"Got: {A} \n Expected: {torch.ones(20, 21)} (shape: {torch.ones(20, 21).shape})")
if not np.array_equal( B.numpy(),np.vander([1, 2, 3], 4)):
errors.append("B is not a tensor containing the elements of Z ")
if C.shape != (20, 21):
errors.append("C is not the correct shape ")
if not torch.equal(D, torch.arange(4, 41, step=2)):
errors.append("D does not contain the correct elements")
if errors == []:
print("All correct!")
else:
[print(e) for e in errors]
def timeFun(f, dim, iterations, device='cpu'):
iterations = iterations
t_total = 0
for _ in range(iterations):
start = time.time()
f(dim, device)
end = time.time()
t_total += end - start
print(f"time taken for {iterations} iterations of {f.__name__}({dim}): {t_total:.5f}")
###Output
_____no_output_____
###Markdown
**Important note: Google Colab users***Scratch Code Cells*If you want to quickly try out something or take a look at the data you can use scratch code cells. They allow you to run Python code, but will not mess up the structure of your notebook.To open a new scratch cell go to *Insert* → *Scratch code cell*. Section 1: Welcome to Neuromatch Deep learning course
###Code
# @title Video 1: Welcome and History
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Av411n7oL", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"ca21SNqt78I", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing
atform.add_event('Video 1: Welcome and History')
display(out)
###Output
_____no_output_____
###Markdown
This will be an intensive 3-week adventure. We will all learn Deep Learning. In a group. Groups need standards. Read our [Code of Conduct](https://docs.google.com/document/d/1eHKIkaNbAlbx_92tLQelXnicKXEcvFzlyzzeWjEtifM/edit?usp=sharing).
###Code
# @title Video 2: Why DL is cool
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1gf4y1j7UZ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"l-K6495BN-4", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 2: Why DL is cool')
display(out)
###Output
_____no_output_____
###Markdown
**Describe what you hope to get out of this course in about 100 words.** --- Section 2: The Basics of PyTorch PyTorch is a Python-based scientific computing package targeted at two sets of audiences:- A replacement for NumPy to use the power of GPUs- A deep learning platform that provides significant flexibility and speedAt its core, PyTorch provides a few key features:- A multidimensional [Tensor](https://pytorch.org/docs/stable/tensors.html) object, similar to [NumPy Array](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html) but with GPU acceleration.- An optimized **autograd** engine for automatically computing derivatives.- A clean, modular API for building and deploying **deep learning models**.You can find more information about PyTorch in the appendix. Section 2.1: Creating Tensors
###Code
# @title Video 3: Making Tensors
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Rw411d7Uy", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"jGKd_4tPGrw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 3: Making Tensors')
display(out)
###Output
_____no_output_____
###Markdown
There are various ways of creating tensors, and when doing any real deep learning project we will usually have to do so. **Construct tensors directly:**---
###Code
# we can construct a tensor directly from some common python iterables,
# such as list and tuple. Nested iterables can also be handled as long as the
# dimensions make sense
# tensor from a list
a = torch.tensor([0, 1, 2])
#tensor from a tuple of tuples
b = ((1.0, 1.1), (1.2, 1.3))
b = torch.tensor(b)
# tensor from a numpy array
c = np.ones([2, 3])
c = torch.tensor(c)
print(f"Tensor a: {a}")
print(f"Tensor b: {b}")
print(f"Tensor c: {c}")
###Output
_____no_output_____
###Markdown
**Some common tensor constructors:**---
###Code
# the numerical arguments we pass to these constructors
# determine the shape of the output tensor
x = torch.ones(5, 3)
y = torch.zeros(2)
z = torch.empty(1, 1, 5)
print(f"Tensor x: {x}")
print(f"Tensor y: {y}")
print(f"Tensor z: {z}")
###Output
_____no_output_____
###Markdown
Notice that ```.empty()``` does not return zeros, but seemingly random small numbers. Unlike ```.zeros()```, which initialises the elements of the tensor with zeros, ```.empty()``` just allocates the memory. It is hence a bit faster if you are looking to just create a tensor. **Creating random tensors and tensors like other tensors:**---
###Code
# there are also constructors for random numbers
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
# there are also constructors that allow us to construct
# a tensor according to the above constructors, but with
# dimensions equal to another tensor
c = torch.zeros_like(a)
d = torch.rand_like(c)
print(f"Tensor a: {a}")
print(f"Tensor b: {b}")
print(f"Tensor c: {c}")
print(f"Tensor d: {d}")
###Output
_____no_output_____
###Markdown
*Reproducibility*:
- PyTorch random number generator: You can use `torch.manual_seed()` to seed the RNG for all devices (both CPU and CUDA):
```python
import torch
torch.manual_seed(0)
```
- For custom operators, you might need to set the python seed as well:
```python
import random
random.seed(0)
```
- Random number generators in other libraries:
```python
import numpy as np
np.random.seed(0)
```
Here, we define for you a function called `set_seed` that does the job for you!
###Code
def set_seed(seed=None, seed_torch=True):
"""
Function that controls randomness. NumPy and random modules must be imported.
Args:
seed : Integer
A non-negative integer that defines the random state. Default is `None`.
seed_torch : Boolean
If `True` sets the random seed for pytorch tensors, so pytorch module
must be imported. Default is `True`.
Returns:
Nothing.
"""
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
###Output
_____no_output_____
###Markdown
Now, let's use the `set_seed` function in the previous example. Execute the cell multiple times to verify that the numbers printed are always the same.
###Code
def simplefun(seed=True, my_seed=None):
if seed:
set_seed(seed=my_seed)
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
print("Tensor a: ", a)
print("Tensor b: ", b)
simplefun(seed=True, my_seed=0) # Turn `seed` to `False` or change `my_seed`
###Output
_____no_output_____
###Markdown
**Numpy-like number ranges:**---The ```.arange()``` and ```.linspace()``` behave how you would expect them to if you are familiar with numpy.
###Code
a = torch.arange(0, 10, step=1)
b = np.arange(0, 10, step=1)
c = torch.linspace(0, 5, steps=11)
d = np.linspace(0, 5, num=11)
print(f"Tensor a: {a}\n")
print(f"Numpy array b: {b}\n")
print(f"Tensor c: {c}\n")
print(f"Numpy array d: {d}\n")
###Output
_____no_output_____
###Markdown
Coding Exercise 2.1: Creating TensorsBelow you will find some incomplete code. Fill in the missing code to construct the specified tensors.We want the tensors: $A:$ 20 by 21 tensor consisting of ones$B:$ a tensor with elements equal to the elements of numpy array $Z$$C:$ a tensor with the same number of elements as $A$ but with values $\sim U(0,1)$$D:$ a 1D tensor containing the even numbers between 4 and 40 inclusive.
###Code
def tensor_creation(Z):
"""A function that creates various tensors.
Args:
Z (numpy.ndarray): An array of shape
Returns:
A : 20 by 21 tensor consisting of ones
B : a tensor with elements equal to the elements of numpy array Z
C : a tensor with the same number of elements as A but with values ∼U(0,1)
D : a 1D tensor containing the even numbers between 4 and 40 inclusive.
"""
#################################################
## TODO for students: fill in the missing code
## from the first expression
raise NotImplementedError("Student exercise: say what they should have done")
#################################################
A = ...
B = ...
C = ...
D = ...
return A, B, C, D
# add timing to airtable
atform.add_event('Coding Exercise 2.1: Creating Tensors')
# numpy array to copy later
Z = np.vander([1, 2, 3], 4)
# Uncomment below to check your function!
# A, B, C, D = tensor_creation(Z)
# checkExercise1(A, B, C, D)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_ad4f6c0f.py) ```All correct!``` Section 2.2: Operations in PyTorch**Tensor-Tensor operations**We can perform operations on tensors using methods under ```torch.```
###Code
# @title Video 4: Tensor Operators
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1G44y127As", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"R1R8VoYXBVA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 4: Tensor Operators')
display(out)
###Output
_____no_output_____
###Markdown
**Tensor-Tensor operations**We can perform operations on tensors using methods under ```torch.```
###Code
a = torch.ones(5, 3)
b = torch.rand(5, 3)
c = torch.empty(5, 3)
d = torch.empty(5, 3)
# this only works if c and d already exist
torch.add(a, b, out=c)
#Pointwise Multiplication of a and b
torch.multiply(a, b, out=d)
print(c)
print(d)
###Output
_____no_output_____
###Markdown
However, in PyTorch most common Python operators are overridden.The common standard arithmetic operators (+, -, *, /, and **) have all been lifted to elementwise operations
###Code
x = torch.tensor([1, 2, 4, 8])
y = torch.tensor([1, 2, 3, 4])
x + y, x - y, x * y, x / y, x**y # The ** operator is exponentiation
###Output
_____no_output_____
###Markdown
**Tensor Methods** Tensors also have a number of common arithmetic operations built in. A full list of **all** methods can be found in the appendix (there are a lot!) All of these operations should have similar syntax to their numpy equivalents.(Feel free to skip if you already know this!)
###Code
x = torch.rand(3, 3)
print(x)
print("\n")
# sum() - note the axis is the axis you move across when summing
print(f"Sum of every element of x: {x.sum()}")
print(f"Sum of the columns of x: {x.sum(axis=0)}")
print(f"Sum of the rows of x: {x.sum(axis=1)}")
print("\n")
print(f"Mean value of all elements of x {x.mean()}")
print(f"Mean values of the columns of x {x.mean(axis=0)}")
print(f"Mean values of the rows of x {x.mean(axis=1)}")
###Output
_____no_output_____
###Markdown
**Matrix Operations**The ```@``` symbol is overridden to represent matrix multiplication. You can also use ```torch.matmul()``` to multiply tensors. For dot multiplication, you can use ```torch.dot()```, or manipulate the axes of your tensors and do matrix multiplication (we will cover that in the next section). Transposes of 2D tensors are obtained using ```torch.t()``` or ```Tensor.T```. Note the lack of brackets for ```Tensor.T``` - it is an attribute, not a method. Coding Exercise 2.2 : Simple tensor operationsBelow are two expressions involving operations on matrices. $$ \textbf{A} = \begin{bmatrix}2 &4 \\5 & 7 \end{bmatrix} \begin{bmatrix} 1 &1 \\2 & 3\end{bmatrix} + \begin{bmatrix}10 & 10 \\ 12 & 1 \end{bmatrix} $$and$$ b = \begin{bmatrix} 3 \\ 5 \\ 7\end{bmatrix} \cdot \begin{bmatrix} 2 \\ 4 \\ 8\end{bmatrix}$$The code block below computes these expressions using PyTorch but is incomplete - fill in the missing lines.
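As a quick illustration of these operators before the exercise, here is a minimal sketch with made-up matrices (unrelated to the expressions above):

```python
M = torch.tensor([[1., 2.], [3., 4.]])
N = torch.tensor([[0., 1.], [1., 0.]])
v = torch.tensor([1., 2., 3.])
w = torch.tensor([4., 5., 6.])

print(M @ N)               # matrix multiplication with the @ operator
print(torch.matmul(M, N))  # the same product via torch.matmul()
print(torch.dot(v, w))     # dot product of two 1D tensors -> tensor(32.)
print(M.T)                 # transpose via the T attribute (no brackets)
```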
###Code
def simple_operations(a1: torch.Tensor, a2: torch.Tensor, a3: torch.Tensor):
################################################
  ## TODO for students: complete the first computation using the argument matrices
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
################################################
# multiplication of tensor a1 with tensor a2 and then add it with tensor a3
answer = ...
return answer
# add timing to airtable
atform.add_event('Coding Exercise 2.2 : Simple tensor operations-simple_operations')
# Computing expression 1:
# init our tensors
a1 = torch.tensor([[2, 4], [5, 7]])
a2 = torch.tensor([[1, 1], [2, 3]])
a3 = torch.tensor([[10, 10], [12, 1]])
## uncomment to test your function
# A = simple_operations(a1, a2, a3)
# print(A)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_5562ea1d.py) ```tensor([[20, 24], [31, 27]])```
###Code
def dot_product(b1: torch.Tensor, b2: torch.Tensor):
###############################################
  ## TODO for students: complete the second computation using the argument matrices
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
###############################################
# Use torch.dot() to compute the dot product of two tensors
product = ...
return product
# add timing to airtable
atform.add_event('Coding Exercise 2.2 : Simple tensor operations-dot_product')
# Computing expression 2:
b1 = torch.tensor([3, 5, 7])
b2 = torch.tensor([2, 4, 8])
## Uncomment to test your function
# b = dot_product(b1, b2)
# print(b)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_00491ea4.py) ```tensor(82)``` Section 2.3 Manipulating Tensors in Pytorch
###Code
# @title Video 5: Tensor Indexing
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1BM4y1K7pD", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"0d0KSJ3lJbg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 5: Tensor Indexing')
display(out)
###Output
_____no_output_____
###Markdown
**Indexing**Just as in numpy, elements in a tensor can be accessed by index. As in any numpy array, the first element has index 0 and ranges are specified to include the first but before the last element. We can access elements according to their relative position to the end of the list by using negative indices. Indexing is also referred to as slicing.For example, [-1] selects the last element; [1:3] selects the second and the third elements, and [:-2] will select all elements excluding the last and second-to-last elements.
###Code
x = torch.arange(0, 10)
print(x)
print(x[-1])
print(x[1:3])
print(x[:-2])
###Output
_____no_output_____
###Markdown
When we have multidimensional tensors, indexing rules work the same way as numpy.
###Code
# make a 5D tensor
x = torch.rand(1, 2, 3, 4, 5)
print(f" shape of x[0]:{x[0].shape}")
print(f" shape of x[0][0]:{x[0][0].shape}")
print(f" shape of x[0][0][0]:{x[0][0][0].shape}")
###Output
_____no_output_____
###Markdown
**Flatten and reshape**There are various methods for reshaping tensors. It is common to have to express 2D data in 1D format. Similarly, it is also common to have to reshape a 1D tensor into a 2D tensor. We can achieve this with the ```.flatten()``` and ```.reshape()``` methods.
###Code
z = torch.arange(12).reshape(6, 2)
print(f"Original z: \n {z}")
# 2D -> 1D
z = z.flatten()
print(f"Flattened z: \n {z}")
# and back to 2D
z = z.reshape(3, 4)
print(f"Reshaped (3x4) z: \n {z}")
###Output
_____no_output_____
###Markdown
You will also see the ```.view()``` methods used a lot to reshape tensors. There is a subtle difference between ```.view()``` and ```.reshape()```, though for now we will just use ```.reshape()```. The documentation can be found in the appendix. **Squeezing tensors**When processing batches of data, you will quite often be left with singleton dimensions. e.g. [1,10] or [256, 1, 3]. This dimension can quite easily mess up your matrix operations if you don't plan on it being there...In order to compress tensors along their singleton dimensions we can use the ```.squeeze()``` method. We can use the ```.unsqueeze()``` method to do the opposite.
###Code
x = torch.randn(1, 10)
# printing the zeroth element of the tensor will not give us the first number!
print(x.shape)
print(f"x[0]: {x[0]}")
###Output
_____no_output_____
###Markdown
Because of that pesky singleton dimension, x[0] gave us the first row instead!
###Code
# lets get rid of that singleton dimension and see what happens now
x = x.squeeze(0)
print(x.shape)
print(f"x[0]: {x[0]}")
# adding singleton dimensions works a similar way, and is often used when tensors
# being added need same number of dimensions
y = torch.randn(5, 5)
print(f"shape of y: {y.shape}")
# lets insert a singleton dimension
y = y.unsqueeze(1)
print(f"shape of y: {y.shape}")
###Output
_____no_output_____
###Markdown
**Permutation**Sometimes our dimensions will be in the wrong order! For example, we may be dealing with RGB images with dim [3x48x64], but our pipeline expects the colour dimension to be the last dimension i.e. [48x64x3]. To get around this we can use ```.permute()```
###Code
# `x` has dimensions [color,image_height,image_width]
x = torch.rand(3, 48, 64)
# we want to permute our tensor to be [ image_height , image_width , color ]
x = x.permute(1, 2, 0)
# permute(1,2,0) means:
# the 0th dim of my new tensor = the 1st dim of my old tensor
# the 1st dim of my new tensor = the 2nd
# the 2nd dim of my new tensor = the 0th
print(x.shape)
###Output
_____no_output_____
###Markdown
You may also see ```.transpose()``` used. This works in a similar way as permute, but can only swap two dimensions at once. **Concatenation** In this example, we concatenate two matrices along rows (axis 0, the first element of the shape) vs. columns (axis 1, the second element of the shape). We can see that the first output tensor’s axis-0 length ( 6 ) is the sum of the two input tensors’ axis-0 lengths ( 3+3 ); while the second output tensor’s axis-1 length ( 8 ) is the sum of the two input tensors’ axis-1 lengths ( 4+4 ).
###Code
# Create two tensors of the same shape
x = torch.arange(12, dtype=torch.float32).reshape((3, 4))
y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])
#concatenate them along rows
cat_rows = torch.cat((x, y), dim=0)
# concatenate along columns
cat_cols = torch.cat((x, y), dim=1)
# printing outputs
print('Concatenated by rows: shape{} \n {}'.format(list(cat_rows.shape), cat_rows))
print('\n Concatenated by columns: shape{} \n {}'.format(list(cat_cols.shape), cat_cols))
###Output
_____no_output_____
###Markdown
**Conversion to Other Python Objects**Converting to a NumPy array, or vice versa, is easy. Note that in PyTorch the behaviour depends on how you convert: `Tensor.numpy()` and `torch.from_numpy()` share the underlying memory with the original object (for CPU tensors), so modifying one modifies the other, whereas `torch.tensor(numpy_array)` makes an independent copy. When converting to a numpy array, the information being tracked by the tensor will be lost, i.e., the computational graph. This will be covered in detail when you are introduced to autograd tomorrow!
###Code
x = torch.randn(5)
print(f"x: {x} | x type: {x.type()}")
y = x.numpy()
print(f"y: {y} | y type: {type(y)}")
z = torch.tensor(y)
print(f"z: {z} | z type: {z.type()}")
###Output
_____no_output_____
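###Markdown
To see the memory-sharing behaviour described above directly, here is a minimal sketch (the variable names are just for illustration):

```python
arr = np.ones(3)
t_shared = torch.from_numpy(arr)  # shares memory with `arr`
t_copy = torch.tensor(arr)        # makes an independent copy
arr[0] = 100
print(t_shared)  # reflects the change made to `arr`
print(t_copy)    # unchanged
```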
###Markdown
To convert a size-1 tensor to a Python scalar, we can invoke the item function or Python’s built-in functions.
###Code
a = torch.tensor([3.5])
a, a.item(), float(a), int(a)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.3: Manipulating TensorsUsing a combination of the methods discussed above, complete the functions below. **Function A** This function takes in two 2D tensors $A$ and $B$ and returns the column sum of $A$ multiplied by the sum of all the elements of $B$ i.e. a scalar, e.g.,:$ A = \begin{bmatrix}1 & 1 \\1 & 1 \end{bmatrix} \,$and$ B = \begin{bmatrix}1 & 2 & 3\\1 & 2 & 3 \end{bmatrix} \,$so$ \, Out = \begin{bmatrix} 2 & 2 \\\end{bmatrix} \cdot 12 = \begin{bmatrix}24 & 24\\\end{bmatrix}$**Function B** This function takes in a square matrix $C$ and returns a 2D tensor consisting of a flattened $C$ with the index of each element appended to this tensor in the row dimension, e.g.,:$ C = \begin{bmatrix}2 & 3 \\-1 & 10 \end{bmatrix} \,$so$ \, Out = \begin{bmatrix}0 & 2 \\1 & 3 \\2 & -1 \\3 & 10\end{bmatrix}$**Hint:** pay close attention to singleton dimensions**Function C**This function takes in two 2D tensors $D$ and $E$. If the dimensions allow it, this function returns the elementwise sum of $D$-shaped $E$, and $D$; else this function returns a 1D tensor that is the concatenation of the two tensors, e.g.,:$ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix} \,$and $ E = \begin{bmatrix}2 & 3 & 0 & 2 \\\end{bmatrix} \, $so$ \, Out = \begin{bmatrix}3 & 2 \\-1 & 5 \end{bmatrix}$$ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix}$and$ \, E = \begin{bmatrix}2 & 3 & 0 \\\end{bmatrix} \,$so$ \, Out = \begin{bmatrix}1 & -1 & -1 & 3 & 2 & 3 & 0 \end{bmatrix}$**Hint:** `torch.numel()` is an easy way of finding the number of elements in a tensor
###Code
def functionA(my_tensor1, my_tensor2):
"""
This function takes in two 2D tensors `my_tensor1` and `my_tensor2`
and returns the column sum of
  `my_tensor1` multiplied by the sum of all the elements of `my_tensor2`,
i.e., a scalar.
Args:
my_tensor1: torch.Tensor
my_tensor2: torch.Tensor
  Returns:
output: torch.Tensor
The multiplication of the column sum of `my_tensor1` by the sum of
`my_tensor2`.
"""
################################################
## TODO for students: complete functionA
raise NotImplementedError("Student exercise: complete function A")
################################################
# TODO multiplication the sum of the tensors
output = ...
return output
def functionB(my_tensor):
"""
This function takes in a square matrix `my_tensor` and returns a 2D tensor
consisting of a flattened `my_tensor` with the index of each element
appended to this tensor in the row dimension.
Args:
my_tensor: torch.Tensor
  Returns:
output: torch.Tensor
Concatenated tensor.
"""
################################################
## TODO for students: complete functionB
raise NotImplementedError("Student exercise: complete function B")
################################################
# TODO flatten the tensor `my_tensor`
my_tensor = ...
# TODO create the idx tensor to be concatenated to `my_tensor`
idx_tensor = ...
# TODO concatenate the two tensors
output = ...
return output
def functionC(my_tensor1, my_tensor2):
"""
This function takes in two 2D tensors `my_tensor1` and `my_tensor2`.
If the dimensions allow it, it returns the
  elementwise sum of `my_tensor1`-shaped `my_tensor2`, and `my_tensor1`;
else this function returns a 1D tensor that is the concatenation of the
two tensors.
Args:
my_tensor1: torch.Tensor
my_tensor2: torch.Tensor
  Returns:
output: torch.Tensor
Concatenated tensor.
"""
################################################
  ## TODO for students: complete functionC
raise NotImplementedError("Student exercise: complete function C")
################################################
# TODO check we can reshape `my_tensor2` into the shape of `my_tensor1`
if ...:
# TODO reshape `my_tensor2` into the shape of `my_tensor1`
my_tensor2 = ...
# TODO sum the two tensors
output = ...
else:
# TODO flatten both tensors
my_tensor1 = ...
my_tensor2 = ...
# TODO concatenate the two tensors in the correct dimension
output = ...
return output
# add timing to airtable
atform.add_event('Coding Exercise 2.3: Manipulating Tensors')
## Implement the functions above and then uncomment the following lines to test your code
# print(functionA(torch.tensor([[1, 1], [1, 1]]), torch.tensor([[1, 2, 3], [1, 2, 3]])))
# print(functionB(torch.tensor([[2, 3], [-1, 10]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0, 2]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0]])))
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_ea1718cb.py) ```tensor([24, 24])tensor([[ 0, 2], [ 1, 3], [ 2, -1], [ 3, 10]])tensor([[ 3, 2], [-1, 5]])tensor([ 1, -1, -1, 3, 2, 3, 0])``` Section 2.4: GPUs
###Code
# @title Video 6: GPU vs CPU
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1nM4y1K7qx", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"9Mc9GFUtILY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 6: GPU vs CPU')
display(out)
###Output
_____no_output_____
###Markdown
By default, when we create a tensor it will *not* live on the GPU!
###Code
x = torch.randn(10)
print(x.device)
###Output
_____no_output_____
###Markdown
When using Colab notebooks, you will not have access to a GPU by default. In order to start using GPUs we need to request one. We can do this by going to the Runtime tab at the top of the page. By following Runtime -> Change runtime type and selecting "GPU" from the Hardware Accelerator dropdown list, we can start playing with sending tensors to GPUs. Once you have done this your runtime will restart and you will need to rerun the first setup cell to reimport PyTorch. Then proceed to the next cell. (For more information on the GPU usage policy, see the appendix.) **Now we have a GPU** The cell below should return True.
###Code
print(torch.cuda.is_available())
###Output
_____no_output_____
###Markdown
CUDA is an API developed by Nvidia for interfacing with GPUs. PyTorch provides us with a layer of abstraction, and allows us to launch CUDA kernels using pure Python. In short, we get the power of parallelising our tensor computations on GPUs, whilst only writing (relatively) simple Python!Here, we define the function `set_device`, which returns the device used in the notebook, i.e., `cpu` or `cuda`. Unless otherwise specified, we use this function at the top of every tutorial, and we store the device variable as follows:
```python
DEVICE = set_device()
```
Let's define the function using the PyTorch package `torch.cuda`, which is lazily initialized, so we can always import it, and use `is_available()` to determine if our system supports CUDA.
###Code
def set_device():
device = "cuda" if torch.cuda.is_available() else "cpu"
if device != "cuda":
print("GPU is not enabled in this notebook. \n"
"If you want to enable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `GPU` from the dropdown menu")
else:
print("GPU is enabled in this notebook. \n"
"If you want to disable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `None` from the dropdown menu")
return device
###Output
_____no_output_____
###Markdown
Let's make some CUDA tensors!
###Code
# common device agnostic way of writing code that can run on cpu OR gpu
# that we provide for you in each of the tutorials
DEVICE = set_device()
# we can specify a device when we first create our tensor
x = torch.randn(2, 2, device=DEVICE)
print(x.dtype)
print(x.device)
# we can also use the .to() method to change the device a tensor lives on
y = torch.randn(2, 2)
print(f"y before calling to() | device: {y.device} | dtype: {y.type()}")
y = y.to(DEVICE)
print(f"y after calling to() | device: {y.device} | dtype: {y.type()}")
###Output
_____no_output_____
###Markdown
**Operations between CPU tensors and CUDA tensors**Note that the type of the tensor changed after calling ```.to()```. What happens if we try to perform operations on tensors that live on different devices?
###Code
x = torch.tensor([0, 1, 2], device=DEVICE)
y = torch.tensor([3, 4, 5], device="cpu")
# Uncomment the following line and run this cell
# z = x + y
###Output
_____no_output_____
###Markdown
We cannot combine CUDA tensors and CPU tensors in this fashion. If we want to compute an operation that combines tensors on different devices, we need to move them first! We can use the `.to()` method as before, or the `.cpu()` and `.cuda()` methods. Note that calling `.cuda()` will throw an error if CUDA is not enabled on your machine. Generally in this course, all deep learning is done on the GPU, while other computation (e.g., plotting and analysis) is done on the CPU, so sometimes we have to pass tensors back and forth; you'll see us call these methods to do so.
###Code
x = torch.tensor([0, 1, 2], device=DEVICE)
y = torch.tensor([3, 4, 5], device="cpu")
z = torch.tensor([6, 7, 8], device=DEVICE)
# moving to cpu
x = x.to("cpu") # alternatively, you can use x = x.cpu()
print(x + y)
# moving to gpu
y = y.to(DEVICE) # alternatively, you can use y = y.cuda()
print(y + z)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.4: Just how much faster are GPUs?Below is a simple function. Complete it so that it performs its operations entirely on the specified device (CPU or GPU), then compare the timings. We will use the helper function `timeFun(f, dim, iterations, device)`.
###Code
dim = 10000
iterations = 1
def simpleFun(dim, device):
"""
Args:
dim: integer
device: "cpu" or "cuda:0"
Returns:
Nothing.
"""
###############################################
## TODO for students: recreate the above function, but
## ensure all computation happens on the GPU
raise NotImplementedError("Student exercise: fill in the missing code to create the tensors")
###############################################
x = ...
y = ...
z = ...
x = ...
y = ...
del x
del y
del z
## TODO: Implement the function above and uncomment the following lines to test your code
# timeFun(f=simpleFun, dim=dim, iterations=iterations)
# timeFun(f=simpleFun, dim=dim, iterations=iterations, device=DEVICE)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_032dcba8.py) Sample output (depends on your hardware)
```
time taken for 1 iterations of simpleFun(10000): 28.50481
time taken for 1 iterations of simpleFunGPU(10000): 0.91102
```
**Discuss!**Try reducing the dimensions of the tensors and increasing the iterations. You can get to a point where the CPU-only function is faster than the GPU function. Why might this be? Section 2.5: Datasets and Dataloaders
###Code
# @title Video 7: Getting Data
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1744y127SQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"LSkjPM1gFu0", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 7: Getting Data')
display(out)
###Output
_____no_output_____
###Markdown
When training neural network models you will be working with large amounts of data. Fortunately, PyTorch offers some great tools that help you organize and manipulate your data samples.
###Code
# Import dataset and dataloaders related packages
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
from torchvision.transforms import Compose, Grayscale
###Output
_____no_output_____
###Markdown
**Datasets**The `torchvision` package gives you easy access to many of the publicly available datasets. Let's load the [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset, which contains color images of 10 different classes, like vehicles and animals.Creating an object of type `datasets.CIFAR10` will automatically download and load all images from the dataset. The resulting data structure can be treated as a list containing data samples and their corresponding labels.
###Code
# Download and load the images from the CIFAR10 dataset
cifar10_data = datasets.CIFAR10(
root="data", # path where the images will be stored
download=True, # all images should be downloaded
transform=ToTensor() # transform the images to tensors
)
# Print the number of samples in the loaded dataset
print(f"Number of samples: {len(cifar10_data)}")
print(f"Class names: {cifar10_data.classes}")
###Output
_____no_output_____
###Markdown
We have 50000 samples loaded. Now let's take a look at one of them in detail. Each sample consists of an image and its corresponding label.
###Code
# Choose a random sample
random.seed(2021)
image, label = cifar10_data[random.randint(0, len(cifar10_data) - 1)]  # randint's upper bound is inclusive
print(f"Label: {cifar10_data.classes[label]}")
print(f"Image size: {image.shape}")
###Output
_____no_output_____
###Markdown
Color images are modeled as 3 dimensional tensors. The first dimension corresponds to the channels (C) of the image (in this case we have RGB images). The second dimension is the height (H) of the image and the third is the width (W). We can denote this image format as C × H × W. Coding Exercise 2.5: Display an image from the datasetLet's try to display the image using `matplotlib`. The code below will not work, because `imshow` expects to have the image in a different format - $H \times W \times C$. You need to reorder the dimensions of the tensor using the `permute` method of the tensor. PyTorch `torch.permute(*dims)` rearranges the original tensor according to the desired ordering and returns a new multidimensional rotated tensor. The size of the returned tensor remains the same as that of the original.**Code hint:**
```python
# create a tensor of size 2 x 4
input_var = torch.randn(2, 4)
# print its size and the tensor
print(input_var.size())
print(input_var)
# dimensions permuted
input_var = input_var.permute(1, 0)
# print its size and the permuted tensor
print(input_var.size())
print(input_var)
```
###Code
# TODO: Uncomment the following line to see the error that arises from the current image format
# plt.imshow(image)
# TODO: Comment the above line and fix this code by reordering the tensor dimensions
# plt.imshow(image.permute(...))
# plt.show()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_b04bd357.py)*Example output:*
###Code
#@title Video 8: Train and Test
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1rV411H7s5", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JokSIuPs-ys", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 8: Train and Test')
display(out)
###Output
_____no_output_____
###Markdown
**Training and Test Datasets**When loading a dataset, you can specify if you want to load the training or the test samples using the `train` argument. We can load the training and test datasets separately. For simplicity, today we will not use the two datasets separately, but this topic will be addressed in the next days.
###Code
# Load the training samples
training_data = datasets.CIFAR10(
root="data",
train=True,
download=True,
transform=ToTensor()
)
# Load the test samples
test_data = datasets.CIFAR10(
root="data",
train=False,
download=True,
transform=ToTensor()
)
# @title Video 9: Data Augmentation - Transformations
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV19B4y1N77t", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"sjegA9OBUPw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 9: Data Augmentation - Transformations')
display(out)
###Output
_____no_output_____
###Markdown
**Dataloader**Another important concept is the `Dataloader`. It is a wrapper around the `Dataset` that splits it into minibatches (important for training the neural network) and makes the data iterable. The `shuffle` argument is used to shuffle the order of the samples across the minibatches.
###Code
# Create dataloaders with a batch size of 64 and shuffling enabled
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
*Reproducibility:* DataLoader will reseed workers following the Randomness in multi-process data loading algorithm. Use `worker_init_fn()` and a `generator` to preserve reproducibility:
```python
def seed_worker(worker_id):
    worker_seed = torch.initial_seed() % 2**32
    numpy.random.seed(worker_seed)
    random.seed(worker_seed)

g_seed = torch.Generator()
g_seed.manual_seed(my_seed)

DataLoader(
    train_dataset,
    batch_size=batch_size,
    num_workers=num_workers,
    worker_init_fn=seed_worker,
    generator=g_seed
    )
```
**Note:** For `seed_worker` to have an effect, `num_workers` should be 2 or more. We can now query the next batch from the data loader and inspect it. For this we need to convert the dataloader object to a Python iterator using the function `iter` and then we can query the next batch using the function `next`. We can now see that we have a 4D tensor. This is because we have 64 images in the batch ($B$) and each image has 3 dimensions: channels ($C$), height ($H$) and width ($W$). So, the size of the 4D tensor is $B \times C \times H \times W$.
###Code
# Load the next batch
batch_images, batch_labels = next(iter(train_dataloader))
print('Batch size:', batch_images.shape)
# Display the first image from the batch
plt.imshow(batch_images[0].permute(1, 2, 0))
plt.show()
###Output
_____no_output_____
###Markdown
**Transformations**Another useful feature when loading a dataset is applying transformations on the data - color conversions, normalization, cropping, rotation etc. There are many predefined transformations in the `torchvision.transforms` package and you can also combine them using the `Compose` transform. Check out the [pytorch documentation](https://pytorch.org/vision/stable/transforms.html) for details. Coding Exercise 2.6: Load the CIFAR10 dataset as grayscale imagesThe goal of this exercise is to load the images from the CIFAR10 dataset as grayscale images. Note that we rerun the `set_seed` function to ensure reproducibility.
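As a minimal, illustrative sketch (not the exercise solution), a `Compose` pipeline applies its transforms in order; the resize target below is an arbitrary choice made only for demonstration:
```python
from torchvision import datasets
from torchvision.transforms import Compose, Resize, ToTensor

# Chain two transforms: resize each PIL image, then convert it to a tensor
example_transform = Compose([
    Resize((16, 16)),  # resize the 32x32 CIFAR10 image to 16x16
    ToTensor()         # convert to a float tensor with values in [0, 1]
])

# The composed transform is applied every time a sample is accessed
resized_data = datasets.CIFAR10(root="data", download=True, transform=example_transform)
img, lbl = resized_data[0]
print(img.shape)  # expected: torch.Size([3, 16, 16])
```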
###Code
def my_data_load():
###############################################
  ## TODO for students: load the CIFAR10 data,
  ## but as grayscale images instead of color
raise NotImplementedError("Student exercise: fill in the missing code to load the data")
###############################################
## TODO Load the CIFAR10 data using a transform that converts the images to grayscale tensors
data = datasets.CIFAR10(...,
transform=...)
# Display a random grayscale image
  image, label = data[random.randint(0, len(data) - 1)]  # randint is inclusive on both ends
plt.imshow(image.squeeze(), cmap="gray")
plt.show()
return data
set_seed(seed=2021)
## After implementing the above code, uncomment the following lines to test your code
# data = my_data_load()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_6052d728.py)*Example output:* --- Section 3: Neural NetworksNow it's time for you to create your first neural network using PyTorch. This section will walk you through the process of:- Creating a simple neural network model- Training the network- Visualizing the results of the network- Tweaking the network
###Code
# @title Video 10: CSV Files
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1xy4y1T7kv", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JrC_UAJWYKU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 10: CSV Files')
display(out)
###Output
_____no_output_____
###Markdown
Section 3.1: Data LoadingFirst we need some sample data to train our network on. You can use the function below to generate an example dataset consisting of 2D points along two interleaving half circles. The data will be stored in a file called `sample_data.csv`. You can inspect the file directly in Colab by going to Files on the left side and opening the CSV file.
###Code
# @title Generate sample data
# @markdown we used `scikit-learn` module
from sklearn.datasets import make_moons
# Create a dataset of 256 points with a little noise
X, y = make_moons(256, noise=0.1)
# Store the data as a Pandas data frame and save it to a CSV file
df = pd.DataFrame(dict(x0=X[:,0], x1=X[:,1], y=y))
df.to_csv('sample_data.csv')
###Output
_____no_output_____
###Markdown
Now we can load the data from the CSV file using the Pandas library. Pandas provides many functions for reading files in various formats. When loading data from a CSV file, we can reference the columns directly by their names.
###Code
# Load the data from the CSV file in a Pandas DataFrame
data = pd.read_csv("sample_data.csv")
# Create a 2D numpy array from the x0 and x1 columns
X_orig = data[["x0", "x1"]].to_numpy()
# Create a 1D numpy array from the y column
y_orig = data["y"].to_numpy()
# Print the sizes of the generated 2D points X and the corresponding labels Y
print(f"Size X:{X_orig.shape}")
print(f"Size y:{y_orig.shape}")
# Visualize the dataset. The color of the points is determined by the labels `y_orig`.
plt.scatter(X_orig[:, 0], X_orig[:, 1], s=40, c=y_orig)
plt.show()
###Output
_____no_output_____
###Markdown
**Prepare Data for PyTorch**Now let's prepare the data in a format suitable for PyTorch - convert everything into tensors.
###Code
# Initialize the device variable
DEVICE = set_device()
# Convert the 2D points to a float32 tensor
X = torch.tensor(X_orig, dtype=torch.float32)
# Upload the tensor to the device
X = X.to(DEVICE)
print(f"Size X:{X.shape}")
# Convert the labels to a long integer tensor
y = torch.from_numpy(y_orig).type(torch.LongTensor)
# Upload the tensor to the device
y = y.to(DEVICE)
print(f"Size y:{y.shape}")
###Output
_____no_output_____
###Markdown
Section 3.2: Create a Simple Neural Network
###Code
# @title Video 11: Generating the Neural Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1fK4y1M74a", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"PwSzRohUvck", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 11: Generating the Neural Network')
display(out)
###Output
_____no_output_____
###Markdown
For this example we want to have a simple neural network consisting of 3 layers:- 1 input layer of size 2 (our points have 2 coordinates)- 1 hidden layer of size 16 (you can play with different numbers here)- 1 output layer of size 2 (we want to have the scores for the two classes)During the course you will deal with different kinds of neural networks. On Day 2 we will focus on linear networks, but you will work with some more complicated architectures in the next days. The example here is meant to demonstrate the process of creating and training a neural network end-to-end.**Programming the Network**PyTorch provides a base class for all neural network modules called [`nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html). You need to inherit from `nn.Module` and implement some important methods:`__init__`In the `__init__` method you need to define the structure of your network. Here you will specify which layers the network will consist of, which activation functions will be used, etc.`forward`All neural network modules need to implement the `forward` method. It specifies the computations the network needs to do when data is passed through it.`predict`This is not an obligatory method of a neural network module, but it is a good practice if you want to quickly get the most likely label from the network. It calls the `forward` method and chooses the label with the highest score.`train`This is also not an obligatory method, but it is a good practice to have. The method will be used to train the network parameters and will be implemented later in the notebook.> Note that you can use the `__call__` method of a module directly and it will invoke the `forward` method: `net()` does the same as `net.forward()`.
###Code
# Inherit from nn.Module - the base class for neural network modules provided by Pytorch
class NaiveNet(nn.Module):
# Define the structure of your network
def __init__(self):
super(NaiveNet, self).__init__()
# The network is defined as a sequence of operations
self.layers = nn.Sequential(
nn.Linear(2, 16), # Transformation from the input to the hidden layer
nn.ReLU(), # Activation function (ReLU) is a non-linearity which is widely used because it reduces computation. The function returns 0 if it receives any
# negative input, but for any positive value x, it returns that value back.
nn.Linear(16, 2), # Transformation from the hidden to the output layer
)
# Specify the computations performed on the data
def forward(self, x):
# Pass the data through the layers
return self.layers(x)
# Choose the most likely label predicted by the network
def predict(self, x):
# Pass the data through the networks
output = self.forward(x)
# Choose the label with the highest score
return torch.argmax(output, 1)
# Train the neural network (will be implemented later)
def train(self, X, y):
pass
###Output
_____no_output_____
###Markdown
**Check that your network works**Create an instance of your model and visualize it
###Code
# Create new NaiveNet and transfer it to the device
model = NaiveNet().to(DEVICE)
# Print the structure of the network
print(model)
###Output
_____no_output_____
###Markdown
Coding Exercise 3.2: Classify some samplesNow let's pass some of the points of our dataset through the network and see if it works. You should not expect the network to actually classify the points correctly, because it has not been trained yet. The goal here is just to get some experience with the data structures that are passed to the forward and predict methods and their results.
###Code
## Get the samples
# X_samples = ...
# print("Sample input:\n", X_samples)
## Do a forward pass of the network
# output = ...
# print("\nNetwork output:\n", output)
## Predict the label of each point
# y_predicted = ...
# print("\nPredicted labels:\n", y_predicted)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_af8ae0ff.py) ```Sample input: tensor([[ 0.9066, 0.5052], [-0.2024, 1.1226], [ 1.0685, 0.2809], [ 0.6720, 0.5097], [ 0.8548, 0.5122]], device='cuda:0')Network output: tensor([[ 0.1543, -0.8018], [ 2.2077, -2.9859], [-0.5745, -0.0195], [ 0.1924, -0.8367], [ 0.1818, -0.8301]], device='cuda:0', grad_fn=)Predicted labels: tensor([0, 0, 1, 0, 0], device='cuda:0')``` Section 3.3: Train Your Neural Network
###Code
# @title Video 12: Train the Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1v54y1n7CS", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"4MIqnE4XPaA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 12: Train the Network')
display(out)
###Output
_____no_output_____
###Markdown
Now it is time to train your network on your dataset. Don't worry if you don't fully understand everything yet - we will cover training in much more detail in the next days. For now, the goal is just to see your network in action!You will usually implement the `train` method directly when implementing your class `NaiveNet`. Here, we will implement it as a function outside of the class in order to have it in a separate cell.
###Code
# @title Helper function to plot the decision boundary
# Code adapted from this notebook: https://jonchar.net/notebooks/Artificial-Neural-Network-with-Keras/
from pathlib import Path
def plot_decision_boundary(model, X, y, device):
# Transfer the data to the CPU
X = X.cpu().numpy()
y = y.cpu().numpy()
# Check if the frames folder exists and create it if needed
frames_path = Path("frames")
if not frames_path.exists():
frames_path.mkdir()
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
  # Predict the function value for the whole grid
grid_points = np.c_[xx.ravel(), yy.ravel()]
grid_points = torch.from_numpy(grid_points).type(torch.FloatTensor)
Z = model.predict(grid_points.to(device)).cpu().numpy()
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.binary)
# Implement the train function given a training dataset X and corresponding labels y
def train(model, X, y):
# The Cross Entropy Loss is suitable for classification problems
loss_function = nn.CrossEntropyLoss()
# Create an optimizer (Stochastic Gradient Descent) that will be used to train the network
learning_rate = 1e-2
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
# Number of epochs
epochs = 15000
# List of losses for visualization
losses = []
for i in range(epochs):
# Pass the data through the network and compute the loss
# We'll use the whole dataset during the training instead of using batches
    # in order to keep the code simple for now.
y_logits = model.forward(X)
loss = loss_function(y_logits, y)
# Clear the previous gradients and compute the new ones
optimizer.zero_grad()
loss.backward()
# Adapt the weights of the network
optimizer.step()
# Store the loss
losses.append(loss.item())
# Print the results at every 1000th epoch
if i % 1000 == 0:
print(f"Epoch {i} loss is {loss.item()}")
plot_decision_boundary(model, X, y, DEVICE)
plt.savefig('frames/{:05d}.png'.format(i))
return losses
# Create a new network instance and train it
model = NaiveNet().to(DEVICE)
losses = train(model, X, y)
###Output
_____no_output_____
###Markdown
**Plot the loss during training**Plot the loss during the training to see how it reduces and converges.
###Code
plt.plot(np.linspace(1, len(losses), len(losses)), losses)
plt.xlabel("Epoch")
plt.ylabel("Loss")
# @title Visualize the training process
# @markdown ### Execute this cell!
!pip install imageio --quiet
!pip install pathlib --quiet
import imageio
from IPython.core.interactiveshell import InteractiveShell
from IPython.display import Image, display
from pathlib import Path
InteractiveShell.ast_node_interactivity = "all"
# Make a list with all images
images = []
for i in range(10):
filename = "frames/0"+str(i)+"000.png"
images.append(imageio.imread(filename))
# Save the gif
imageio.mimsave('frames/movie.gif', images)
gifPath = Path("frames/movie.gif")
with open(gifPath,'rb') as f:
display(Image(data=f.read(), format='png'))
# @title Video 13: Play with it
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Cq4y1W7BH", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"_GGkapdOdSY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 13: Play with it')
display(out)
###Output
_____no_output_____
###Markdown
Exercise 3.3: Tweak your NetworkYou can now play around with the network a little bit to get a feeling of what different parameters are doing. Here are some ideas of what you could try:- Increase or decrease the number of epochs for training- Increase or decrease the size of the hidden layer- Add one additional hidden layerCan you get the network to better fit the data? (A sketch of one possible tweak is shown below.)
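Below is a minimal sketch of one possible tweak (adding a second hidden layer), assuming the same interface as `NaiveNet` above; it is one variation among many, not a prescribed answer:
```python
class DeeperNet(nn.Module):
    """Variant of NaiveNet with an extra hidden layer."""

    def __init__(self, hidden_size=16):
        super(DeeperNet, self).__init__()
        self.layers = nn.Sequential(
            nn.Linear(2, hidden_size),            # input -> first hidden layer
            nn.ReLU(),
            nn.Linear(hidden_size, hidden_size),  # additional hidden layer
            nn.ReLU(),
            nn.Linear(hidden_size, 2),            # hidden -> output scores
        )

    def forward(self, x):
        return self.layers(x)

    def predict(self, x):
        return torch.argmax(self.forward(x), 1)

# Train it exactly like before (uncomment to try):
# model = DeeperNet().to(DEVICE)
# losses = train(model, X, y)
```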
###Code
# @title Video 14: XOR Widget
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1mB4y1N7QS", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"oTr1nE2rCWg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 14: XOR Widget')
display(out)
###Output
_____no_output_____
###Markdown
Exclusive OR (XOR) logical operation gives a true (`1`) output when the number of true inputs is odd. That is, a true output results if one, and only one, of the inputs to the gate is true. If both inputs are false (`0`) or both are true (`1`), a false output results. Mathematically speaking, XOR represents the inequality function, i.e., the output is true if the inputs are not alike; otherwise, the output is false.In case of two inputs ($X$ and $Y$) the following truth table is applied:\begin{array}{ccc}X & Y & \text{XOR} \\\hline0 & 0 & 0 \\0 & 1 & 1 \\1 & 0 & 1 \\1 & 1 & 0 \\\end{array}Here, with `0`, we denote `False`, and with `1` we denote `True` in boolean terms. Interactive Demo 3.3: Solving XORHere we use an open source and famous visualization widget developed by the Tensorflow team available [here](https://github.com/tensorflow/playground).* Play with the widget and observe that you cannot solve the continuous XOR dataset.* Now add one hidden layer with three units, play with the widget, and set weights by hand to solve this dataset perfectly.For the second part, you should set the weights by clicking on the connections and either type the value or use the up and down keys to change it by one increment. You could also do the same for the biases by clicking on the tiny square to each neuron's bottom left.Even though there are infinitely many solutions, a neat solution when $f(x)$ is ReLU is: \begin{equation} y = f(x_1)+f(x_2)-f(x_1+x_2)\end{equation}Try to set the weights and biases to implement this function after you have played around enough :)
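To see why this formula solves the *continuous* XOR problem, here is a small sketch (assuming $f$ is ReLU) that evaluates $y = f(x_1)+f(x_2)-f(x_1+x_2)$ on one point from each quadrant; the output is positive exactly when the two inputs have opposite signs:
```python
import torch
import torch.nn.functional as F

def xor_score(x1, x2):
    # y = f(x1) + f(x2) - f(x1 + x2), with f = ReLU
    x1, x2 = torch.tensor(float(x1)), torch.tensor(float(x2))
    return (F.relu(x1) + F.relu(x2) - F.relu(x1 + x2)).item()

# one sample point per quadrant: (+,+), (-,-), (+,-), (-,+)
for x1, x2 in [(1.0, 2.0), (-1.0, -2.0), (1.0, -2.0), (-1.0, 2.0)]:
    print(f"x1={x1:+.1f}, x2={x2:+.1f} -> y={xor_score(x1, x2):.1f}")
# prints 0.0 for the same-sign pairs and 1.0 for the opposite-sign pairs
```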
###Code
# @markdown ###Play with the parameters to solve XOR
from IPython.display import HTML
HTML('<iframe width="1020" height="660" src="https://playground.arashash.com/#activation=relu&batchSize=10&dataset=xor®Dataset=reg-plane&learningRate=0.03®ularizationRate=0&noise=0&networkShape=&seed=0.91390&showTestData=false&discretize=false&percTrainData=90&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false" allowfullscreen></iframe>')
# @markdown Do you think we can solve the discrete XOR (only 4 possibilities) with only 2 hidden units?
w1_min_xor = 'Select' #@param ['Select', 'Yes', 'No']
if w1_min_xor == 'No':
print("Correct!")
else:
print("How about giving it another try?")
###Output
_____no_output_____
###Markdown
--- Section 4: Ethics And Course Info
###Code
# @title Video 15: Ethics
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Hw41197oB", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Kt6JLi3rUFU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Video 16: Be a group
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1j44y1272h", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Sfp6--d_H1A", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Video 17: Syllabus
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1iB4y1N7uQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"cDvAqG_hAvQ", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Meet our lecturers:Week 1: the building blocks* [Konrad Kording](https://kordinglab.com)* [Andrew Saxe](https://www.saxelab.org/)* [Surya Ganguli](https://ganguli-gang.stanford.edu/)* [Ioannis Mitliagkas](http://mitliagkas.github.io/)* [Lyle Ungar](https://www.cis.upenn.edu/~ungar/)Week 2: making things work* [Alona Fyshe](https://webdocs.cs.ualberta.ca/~alona/)* [Alexander Ecker](https://eckerlab.org/)* [James Evans](https://sociology.uchicago.edu/directory/james-evans)* [He He](https://hhexiy.github.io/)* [Vikash Gilja](https://tnel.ucsd.edu/bio) and [Akash Srivastava](https://akashgit.github.io/)Week 3: more magic* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)* [Jane Wang](http://www.janexwang.com/) and [Feryal Behbahani](https://feryal.github.io/)* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)* [Josh Vogelstein](https://jovo.me/) and [Vincenzo Lomonaco](https://www.vincenzolomonaco.com/)Now, go to the [visualization of ICLR papers](https://iclr.cc/virtual/2021/paper_vis.html). Read a few abstracts. Look at the various clusters. Where do you see yourself in this map? --- Submit to Airtable
###Code
# @title Video 18: Submission info
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1e44y127ti", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JwTn7ej2dq8", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
This is Darryl, the Deep Learning Dapper Lion, and he's here to teach you about content submission to airtable. At the end of each tutorial there will be an Airtable Submission Cell. Run the cell to generate the airtable submission button and click on it to submit your information to airtable.If it is the last tutorial of the day, your button will look like this and take you to the end-of-day survey; otherwise it will look like this: It is critical that you push the submit button for every tutorial you run. Even if you don't finish the tutorial, still submit!Submitting is the only way we can verify that you attempted each tutorial, which is critical for the award of your completion certificate at the end of the course.Finally, we try to keep the airtable code as hidden as possible, but if you ever see any calls to `atform` such as `atform.add_event()`, just know that is for saving airtable information only. It will not affect the code that is being run around it in any way, so please do not modify, comment out, or worry about any of those lines of code.
###Code
# @title Airtable Submission Link
from IPython import display
display.HTML(
f"""
<div>
<a href= "{atform.url()}" target="_blank">
<img src="https://github.com/NeuromatchAcademy/course-content-dl/blob/main/tutorials/static/SurveyButton.png?raw=1"
alt="button link to survey" style="width:410px"></a>
</div>""" )
###Output
_____no_output_____
###Markdown
--- Bonus - 60 years of Machine Learning Research in one Plotby [Hendrik Strobelt](http://hendrik.strobelt.com) (MIT-IBM Watson AI Lab) with support from Benjamin Hoover.In this notebook we visualize a subset* of 3,300 articles retrieved from the AllenAI [S2ORC dataset](https://github.com/allenai/s2orc). We represent each paper by a position that is the output of a dimensionality reduction method applied to a vector representation of each paper. The vector representation is the output of a neural network.*The selection is strongly biased by the keywords and methodology we used to filter. Please see the details section to learn about what we did.
###Code
# @title Import `altair` and load the data
!pip install altair vega_datasets --quiet
import altair as alt  # altair is used for defining data visualizations
# Source data files
# Position data file maps ID to x,y positions
POS_FILE = 'http://gltr.io/temp/ml_regexv1_cs_ma_citation+_99perc.pos_umap_cosine_100_d0.1.json'
# Metadata file maps ID to title, abstract, author,....
META_FILE = 'http://gltr.io/temp/ml_regexv1_cs_ma_citation+_99perc_clean.csv'
# data loading and wrangling
def load_data():
positions = pd.read_json(POS_FILE)
positions[['x', 'y']] = positions['pos'].to_list()
meta = pd.read_csv(META_FILE)
return positions.merge(meta, left_on='id', right_on='paper_id')
# load data
data = load_data()
# @title Define Visualization using Altair
YEAR_PERIOD = "quinquennial" # @param
selection = alt.selection_multi(fields=[YEAR_PERIOD], bind='legend')
data[YEAR_PERIOD] = (data["year"] / 5.0).apply(np.floor) * 5
chart = alt.Chart(data[["x", "y", "authors", "title", YEAR_PERIOD, "citation_count"]], width=800,
height=800).mark_circle(radius=2, opacity=0.2).encode(
alt.Color(YEAR_PERIOD+':O',
scale=alt.Scale(scheme='viridis', reverse=False, clamp=True, domain=list(range(1955,2020,5))),
# legend=alt.Legend(title='Total Records')
),
alt.Size('citation_count',
scale=alt.Scale(type="pow", exponent=1, range=[15, 300])
),
alt.X('x:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
alt.Y('y:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
tooltip=['title', 'authors'],
# size='citation_count',
# color="decade:O",
opacity=alt.condition(selection, alt.value(.8), alt.value(0.2)),
).add_selection(
selection
).interactive()
###Output
_____no_output_____
###Markdown
Let's look at the visualization. Each dot represents one paper. Close dots mean that the respective papers are more closely related than distant ones. The color indicates the 5-year period of when the paper was published. The dot size indicates the citation count (within the S2ORC corpus) as of July 2020. The view is **interactive** and allows for three main interactions. Try them and play around.1. hover over a dot to see a tooltip (title, author)2. select a year in the legend (right) to filter dots3. zoom in/out with scroll -- double click resets view
###Code
chart
###Output
_____no_output_____
###Markdown
QuestionsBy playing around, can you find some answers to the following questions?1. Can you find topical clusters? What cluster might occur because of a filtering error?2. Can you see a temporal trend in the data and clusters?3. Can you determine when deep learning methods started booming?4. Can you find the key papers that were written before the DL "winter" that define milestones for a cluster? (tip: look for large dots of different color) MethodsHere is what we did:1. Filtering of all papers that fulfilled the criteria: - are categorized as `Computer Science` or `Mathematics` - one of the following keywords appearing in title or abstract: `"machine learning|artificial intelligence|neural network|(machine|computer) vision|perceptron|network architecture| RNN | CNN | LSTM | BLEU | MNIST | CIFAR |reinforcement learning|gradient descent| Imagenet "`2. per year, remove all papers that are below the 99th percentile of citation count in that year3. embed each paper by using abstract+title in the SPECTER model4. project based on embedding using UMAP5. visualize using Altair Find Authors
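As an illustrative sketch of the projection step (step 4) only: it assumes the `umap-learn` package and a placeholder matrix standing in for the real SPECTER embeddings, and the parameter values mirror those in the position file name (`n_neighbors=100`, `min_dist=0.1`, cosine metric):
```python
# Sketch of the UMAP projection step (assumes `pip install umap-learn`)
import numpy as np
import umap

# Placeholder for the real (n_papers, d) matrix of SPECTER paper embeddings
embeddings = np.random.rand(1000, 768)

reducer = umap.UMAP(n_neighbors=100, min_dist=0.1, metric='cosine')
positions = reducer.fit_transform(embeddings)  # (n_papers, 2): one x, y per paper
print(positions.shape)
```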
###Code
# @title Edit the `AUTHOR_FILTER` variable to full text search for authors.
AUTHOR_FILTER = "Rush " # @param space at the end means "word border"
### Don't ignore case when searching...
FLAGS = 0
### uncomment to ignore case
# FLAGS = re.IGNORECASE
## --- FILTER CODE.. make it your own ---
import re
data['issel'] = data['authors'].str.contains(AUTHOR_FILTER, na=False, flags=FLAGS, )
if data['issel'].mean()<0.0000000001:
print('No match found')
## --- FROM HERE ON VIS CODE ---
alt.Chart(data[["x", "y", "authors", "title", YEAR_PERIOD, "citation_count", "issel"]], width=800,
height=800) \
.mark_circle(stroke="black", strokeOpacity=1).encode(
alt.Color(YEAR_PERIOD+':O',
scale=alt.Scale(scheme='viridis', reverse=False),
# legend=alt.Legend(title='Total Records')
),
alt.Size('citation_count',
scale=alt.Scale(type="pow", exponent=1, range=[15, 300])
),
alt.StrokeWidth('issel:Q', scale=alt.Scale(type="linear", domain=[0,1], range=[0, 2]), legend=None),
alt.Opacity('issel:Q', scale=alt.Scale(type="linear", domain=[0,1], range=[.2, 1]), legend=None),
alt.X('x:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
alt.Y('y:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
tooltip=['title', 'authors'],
).interactive()
###Output
_____no_output_____
###Markdown
Tutorial 1: PyTorch**Week 1, Day 1: Basics and PyTorch****By Neuromatch Academy**__Content creators:__ Shubh Pachchigar, Vladimir Haltakov, Matthew Sargent, Konrad Kording__Content reviewers:__ Deepak Raya, Siwei Bai, Kelson Shilling-Scrivo__Content editors:__ Anoop Kulkarni, Spiros Chavlis__Production editors:__ Arush Tagade, Spiros Chavlis **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial ObjectivesThen we have a few specific objectives for this tutorial:* Learn about PyTorch and tensors* Tensor Manipulations* Data Loading* GPUs and Cuda Tensors* Train NaiveNet* Get to know your pod* Start thinking about the course as a whole
###Code
# @title Tutorial slides
# @markdown These are the slides for the videos in this tutorial today
# @markdown If you want to locally download the slides, click [here](https://osf.io/wcjrv/download)
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/wcjrv/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
--- Setup Throughout your Neuromatch tutorials, most (probably all!) notebooks contain setup cells. These cells will import the required Python packages (e.g., PyTorch, NumPy); set global or environment variables, and load in helper functions for things like plotting. In some tutorials, you will notice that we install some dependencies even if they are preinstalled on google colab or kaggle. This happens because we have added automation to our repository through [GitHub Actions](https://docs.github.com/en/actions/learn-github-actions/introduction-to-github-actions).Be sure to run all of the cells in the setup section. Feel free to expand them and have a look at what you are loading in, but you should be able to fulfill the learning objectives of every tutorial without having to look at these cells.If you start building your own projects built on this code base we highly recommend looking at them in more detail.
###Code
# @title Install dependencies
!pip install pandas --quiet
!pip install git+https://github.com/NeuromatchAcademy/evaltools --quiet
from evaltools.airtable import AirtableForm
# Imports
import time
import torch
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from torch import nn
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
# @title Figure Settings
import ipywidgets as widgets
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/content-creation/main/nma.mplstyle")
# @title Helper Functions
atform = AirtableForm('appn7VdPRseSoMXEG','W1D1_T1','https://portal.neuromatchacademy.org/api/redirect/to/97e94a29-0b3a-4e16-9a8d-f6838a5bd83d')
def checkExercise1(A, B, C, D):
"""
Helper function for checking exercise.
Args:
A: torch.Tensor
B: torch.Tensor
C: torch.Tensor
D: torch.Tensor
Returns:
Nothing.
"""
errors = []
# TODO better errors and error handling
if not torch.equal(A.to(int),torch.ones(20, 21).to(int)):
errors.append(f"Got: {A} \n Expected: {torch.ones(20, 21)} (shape: {torch.ones(20, 21).shape})")
if not np.array_equal( B.numpy(),np.vander([1, 2, 3], 4)):
errors.append("B is not a tensor containing the elements of Z ")
if C.shape != (20, 21):
errors.append("C is not the correct shape ")
if not torch.equal(D, torch.arange(4, 41, step=2)):
errors.append("D does not contain the correct elements")
if errors == []:
print("All correct!")
else:
[print(e) for e in errors]
def timeFun(f, dim, iterations, device='cpu'):
iterations = iterations
t_total = 0
for _ in range(iterations):
start = time.time()
f(dim, device)
end = time.time()
t_total += end - start
if device == 'cpu':
print(f"time taken for {iterations} iterations of {f.__name__}({dim}, {device}): {t_total:.5f}")
else:
print(f"time taken for {iterations} iterations of {f.__name__}({dim}, {device}): {t_total:.5f}")
###Output
_____no_output_____
###Markdown
**Important note: Google Colab users***Scratch Code Cells*If you want to quickly try out something or take a look at the data you can use scratch code cells. They allow you to run Python code, but will not mess up the structure of your notebook.To open a new scratch cell go to *Insert* → *Scratch code cell*. Section 1: Welcome to Neuromatch Deep learning course*Time estimate: ~25mins*
###Code
# @title Video 1: Welcome and History
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Av411n7oL", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"ca21SNqt78I", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing
atform.add_event('Video 1: Welcome and History')
display(out)
###Output
_____no_output_____
###Markdown
This will be an intensive 3-week adventure. We will all learn Deep Learning. In a group. Groups need standards. Read our [Code of Conduct](https://docs.google.com/document/d/1eHKIkaNbAlbx_92tLQelXnicKXEcvFzlyzzeWjEtifM/edit?usp=sharing).
###Code
# @title Video 2: Why DL is cool
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1gf4y1j7UZ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"l-K6495BN-4", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 2: Why DL is cool')
display(out)
###Output
_____no_output_____
###Markdown
**Describe what you hope to get out of this course in about 100 words.** --- Section 2: The Basics of PyTorch*Time estimate: ~2 hours 05 mins* PyTorch is a Python-based scientific computing package targeted at two sets of audiences:- A replacement for NumPy to use the power of GPUs- A deep learning platform that provides significant flexibility and speedAt its core, PyTorch provides a few key features:- A multidimensional [Tensor](https://pytorch.org/docs/stable/tensors.html) object, similar to [NumPy Array](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html) but with GPU acceleration.- An optimized **autograd** engine for automatically computing derivatives (see the short sketch below).- A clean, modular API for building and deploying **deep learning models**.You can find more information about PyTorch in the appendix. Section 2.1: Creating Tensors
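A minimal sketch of the autograd engine mentioned above (for illustration only; training with gradients is covered in detail later in the course):
```python
import torch

# autograd sketch: compute dy/dx for y = x**2 + 3x at x = 2
x = torch.tensor(2.0, requires_grad=True)
y = x**2 + 3 * x
y.backward()   # populates x.grad with dy/dx
print(x.grad)  # tensor(7.) since dy/dx = 2x + 3 = 7 at x = 2
```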
###Code
# @title Video 3: Making Tensors
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Rw411d7Uy", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"jGKd_4tPGrw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 3: Making Tensors')
display(out)
###Output
_____no_output_____
###Markdown
There are various ways of creating tensors, and when doing any real deep learning project we will usually have to do so. **Construct tensors directly:**---
###Code
# we can construct a tensor directly from some common python iterables,
# such as list and tuple nested iterables can also be handled as long as the
# dimensions make sense
# tensor from a list
a = torch.tensor([0, 1, 2])
#tensor from a tuple of tuples
b = ((1.0, 1.1), (1.2, 1.3))
b = torch.tensor(b)
# tensor from a numpy array
c = np.ones([2, 3])
c = torch.tensor(c)
print(f"Tensor a: {a}")
print(f"Tensor b: {b}")
print(f"Tensor c: {c}")
###Output
_____no_output_____
###Markdown
**Some common tensor constructors:**---
###Code
# the numerical arguments we pass to these constructors
# determine the shape of the output tensor
x = torch.ones(5, 3)
y = torch.zeros(2)
z = torch.empty(1, 1, 5)
print(f"Tensor x: {x}")
print(f"Tensor y: {y}")
print(f"Tensor z: {z}")
###Output
_____no_output_____
###Markdown
Notice that ```.empty()``` does not return zeros, but seemingly random small numbers. Unlike ```.zeros()```, which initialises the elements of the tensor with zeros, ```.empty()``` just allocates the memory. It is hence a bit faster if you are looking to just create a tensor. **Creating random tensors and tensors like other tensors:**---
###Code
# there are also constructors for random numbers
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
# there are also constructors that allow us to construct
# a tensor according to the above constructors, but with
# dimensions equal to another tensor
c = torch.zeros_like(a)
d = torch.rand_like(c)
print(f"Tensor a: {a}")
print(f"Tensor b: {b}")
print(f"Tensor c: {c}")
print(f"Tensor d: {d}")
###Output
_____no_output_____
###Markdown
*Reproducibility*: - PyTorch random number generator: You can use `torch.manual_seed()` to seed the RNG for all devices (both CPU and CUDA)
```python
import torch
torch.manual_seed(0)
```
- For custom operators, you might need to set the python seed as well:
```python
import random
random.seed(0)
```
- Random number generators in other libraries:
```python
import numpy as np
np.random.seed(0)
```
Here, we define for you a function called `set_seed` that does the job for you!
###Code
def set_seed(seed=None, seed_torch=True):
"""
Function that controls randomness. NumPy and random modules must be imported.
Args:
seed : Integer
A non-negative integer that defines the random state. Default is `None`.
seed_torch : Boolean
If `True` sets the random seed for pytorch tensors, so pytorch module
must be imported. Default is `True`.
Returns:
Nothing.
"""
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
###Output
_____no_output_____
###Markdown
Now, let's use the `set_seed` function in the previous example. Execute the cell multiple times to verify that the numbers printed are always the same.
###Code
def simplefun(seed=True, my_seed=None):
if seed:
set_seed(seed=my_seed)
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
print("Tensor a: ", a)
print("Tensor b: ", b)
simplefun(seed=True, my_seed=0) # Turn `seed` to `False` or change `my_seed`
###Output
_____no_output_____
###Markdown
**Numpy-like number ranges:**---The ```.arange()``` and ```.linspace()``` behave how you would expect them to if you are familiar with numpy.
###Code
a = torch.arange(0, 10, step=1)
b = np.arange(0, 10, step=1)
c = torch.linspace(0, 5, steps=11)
d = np.linspace(0, 5, num=11)
print(f"Tensor a: {a}\n")
print(f"Numpy array b: {b}\n")
print(f"Tensor c: {c}\n")
print(f"Numpy array d: {d}\n")
###Output
_____no_output_____
###Markdown
Coding Exercise 2.1: Creating TensorsBelow you will find some incomplete code. Fill in the missing code to construct the specified tensors.We want the tensors: $A:$ 20 by 21 tensor consisting of ones$B:$ a tensor with elements equal to the elements of numpy array $Z$$C:$ a tensor with the same number of elements as $A$ but with values $\sim U(0,1)$$D:$ a 1D tensor containing the even numbers between 4 and 40 inclusive.
###Code
def tensor_creation(Z):
"""A function that creates various tensors.
Args:
Z (numpy.ndarray): An array of shape
Returns:
A : 20 by 21 tensor consisting of ones
B : a tensor with elements equal to the elements of numpy array Z
C : a tensor with the same number of elements as A but with values ∼U(0,1)
D : a 1D tensor containing the even numbers between 4 and 40 inclusive.
"""
#################################################
## TODO for students: fill in the missing code
## from the first expression
raise NotImplementedError("Student exercise: say what they should have done")
#################################################
A = ...
B = ...
C = ...
D = ...
return A, B, C, D
# add timing to airtable
atform.add_event('Coding Exercise 2.1: Creating Tensors')
# numpy array to copy later
Z = np.vander([1, 2, 3], 4)
# Uncomment below to check your function!
# A, B, C, D = tensor_creation(Z)
# checkExercise1(A, B, C, D)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_ad4f6c0f.py) ```All correct!``` Section 2.2: Operations in PyTorch**Tensor-Tensor operations**We can perform operations on tensors using methods under ```torch.```
###Code
# @title Video 4: Tensor Operators
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1G44y127As", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"R1R8VoYXBVA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 4: Tensor Operators')
display(out)
###Output
_____no_output_____
###Markdown
**Tensor-Tensor operations**We can perform operations on tensors using methods under ```torch.```
###Code
a = torch.ones(5, 3)
b = torch.rand(5, 3)
c = torch.empty(5, 3)
d = torch.empty(5, 3)
# this only works if c and d already exist
torch.add(a, b, out=c)
#Pointwise Multiplication of a and b
torch.multiply(a, b, out=d)
print(c)
print(d)
###Output
_____no_output_____
###Markdown
However, in PyTorch most common Python operators are overridden. The common standard arithmetic operators (+, -, *, /, and **) have all been lifted to elementwise operations
###Code
x = torch.tensor([1, 2, 4, 8])
y = torch.tensor([1, 2, 3, 4])
x + y, x - y, x * y, x / y, x**y # The ** operator is exponentiation
###Output
_____no_output_____
###Markdown
**Tensor Methods** Tensors also have a number of common arithmetic operations built in. A full list of **all** methods can be found in the appendix (there are a lot!) All of these operations should have similar syntax to their numpy equivalents.(Feel free to skip if you already know this!)
###Code
x = torch.rand(3, 3)
print(x)
print("\n")
# sum() - note the axis is the axis you move across when summing
print(f"Sum of every element of x: {x.sum()}")
print(f"Sum of the columns of x: {x.sum(axis=0)}")
print(f"Sum of the rows of x: {x.sum(axis=1)}")
print("\n")
print(f"Mean value of all elements of x {x.mean()}")
print(f"Mean values of the columns of x {x.mean(axis=0)}")
print(f"Mean values of the rows of x {x.mean(axis=1)}")
###Output
_____no_output_____
###Markdown
**Matrix Operations**The ```@``` symbol is overridden to represent matrix multiplication. You can also use ```torch.matmul()``` to multiply tensors. For dot multiplication, you can use ```torch.dot()```, or manipulate the axes of your tensors and do matrix multiplication (we will cover that in the next section). Transposes of 2D tensors are obtained using ```torch.t()``` or ```Tensor.T```. Note the lack of brackets for ```Tensor.T``` - it is an attribute, not a method. Coding Exercise 2.2 : Simple tensor operationsBelow are two expressions involving operations on matrices. $$ \textbf{A} = \begin{bmatrix}2 &4 \\5 & 7 \end{bmatrix} \begin{bmatrix} 1 &1 \\2 & 3\end{bmatrix} + \begin{bmatrix}10 & 10 \\ 12 & 1 \end{bmatrix} $$and$$ b = \begin{bmatrix} 3 \\ 5 \\ 7\end{bmatrix} \cdot \begin{bmatrix} 2 \\ 4 \\ 8\end{bmatrix}$$The code block below that computes these expressions using PyTorch is incomplete - fill in the missing lines.
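As a small illustrative sketch (using different numbers than the exercise below), these operators can be combined like this:
```python
import torch

M = torch.tensor([[1., 2.], [3., 4.]])
N = torch.tensor([[0., 1.], [1., 0.]])

print(M @ N)               # matrix multiplication with the @ operator
print(torch.matmul(M, N))  # same result with torch.matmul()
print(M.T)                 # transpose via the .T attribute (same as torch.t(M))

v = torch.tensor([1., 2., 3.])
w = torch.tensor([4., 5., 6.])
print(torch.dot(v, w))     # dot product: 1*4 + 2*5 + 3*6 = 32
```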
###Code
def simple_operations(a1: torch.Tensor, a2: torch.Tensor, a3: torch.Tensor):
################################################
  ## TODO for students: complete the first computation using the argument matrices
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
################################################
# multiplication of tensor a1 with tensor a2 and then add it with tensor a3
answer = ...
return answer
# add timing to airtable
atform.add_event('Coding Exercise 2.2 : Simple tensor operations-simple_operations')
# Computing expression 1:
# init our tensors
a1 = torch.tensor([[2, 4], [5, 7]])
a2 = torch.tensor([[1, 1], [2, 3]])
a3 = torch.tensor([[10, 10], [12, 1]])
## uncomment to test your function
# A = simple_operations(a1, a2, a3)
# print(A)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_5562ea1d.py) ```tensor([[20, 24], [31, 27]])```
###Code
def dot_product(b1: torch.Tensor, b2: torch.Tensor):
###############################################
  ## TODO for students: complete the second computation (the dot product) using the argument tensors
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
###############################################
# Use torch.dot() to compute the dot product of two tensors
product = ...
return product
# add timing to airtable
atform.add_event('Coding Exercise 2.2 : Simple tensor operations-dot_product')
# Computing expression 2:
b1 = torch.tensor([3, 5, 7])
b2 = torch.tensor([2, 4, 8])
## Uncomment to test your function
# b = dot_product(b1, b2)
# print(b)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_00491ea4.py) ```tensor(82)``` Section 2.3 Manipulating Tensors in Pytorch
###Code
# @title Video 5: Tensor Indexing
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1BM4y1K7pD", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"0d0KSJ3lJbg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 5: Tensor Indexing')
display(out)
###Output
_____no_output_____
###Markdown
**Indexing**Just as in numpy, elements in a tensor can be accessed by index. As in any numpy array, the first element has index 0 and ranges are specified to include the first but before the last element. We can access elements according to their relative position to the end of the list by using negative indices. Indexing is also referred to as slicing.For example, [-1] selects the last element; [1:3] selects the second and the third elements, and [:-2] will select all elements excluding the last and second-to-last elements.
###Code
x = torch.arange(0, 10)
print(x)
print(x[-1])
print(x[1:3])
print(x[:-2])
###Output
_____no_output_____
###Markdown
When we have multidimensional tensors, indexing rules work the same way as numpy.
###Code
# make a 5D tensor
x = torch.rand(1, 2, 3, 4, 5)
print(f" shape of x[0]:{x[0].shape}")
print(f" shape of x[0][0]:{x[0][0].shape}")
print(f" shape of x[0][0][0]:{x[0][0][0].shape}")
###Output
_____no_output_____
###Markdown
**Flatten and reshape**There are various methods for reshaping tensors. It is common to have to express 2D data in 1D format. Similarly, it is also common to have to reshape a 1D tensor into a 2D tensor. We can achieve this with the ```.flatten()``` and ```.reshape()``` methods.
###Code
z = torch.arange(12).reshape(6, 2)
print(f"Original z: \n {z}")
# 2D -> 1D
z = z.flatten()
print(f"Flattened z: \n {z}")
# and back to 2D
z = z.reshape(3, 4)
print(f"Reshaped (3x4) z: \n {z}")
###Output
_____no_output_____
###Markdown
You will also see the ```.view()``` methods used a lot to reshape tensors. There is a subtle difference between ```.view()``` and ```.reshape()```, though for now we will just use ```.reshape()```. The documentation can be found in the appendix. **Squeezing tensors**When processing batches of data, you will quite often be left with singleton dimensions. e.g. [1,10] or [256, 1, 3]. This dimension can quite easily mess up your matrix operations if you don't plan on it being there...In order to compress tensors along their singleton dimensions we can use the ```.squeeze()``` method. We can use the ```.unsqueeze()``` method to do the opposite.
###Code
x = torch.randn(1, 10)
# printing the zeroth element of the tensor will not give us the first number!
print(x.shape)
print(f"x[0]: {x[0]}")
###Output
_____no_output_____
###Markdown
Because of that pesky singleton dimension, x[0] gave us the first row instead!
###Code
# lets get rid of that singleton dimension and see what happens now
x = x.squeeze(0)
print(x.shape)
print(f"x[0]: {x[0]}")
# adding singleton dimensions works in a similar way, and is often used when tensors
# being added need the same number of dimensions
y = torch.randn(5, 5)
print(f"shape of y: {y.shape}")
# lets insert a singleton dimension
y = y.unsqueeze(1)
print(f"shape of y: {y.shape}")
###Output
_____no_output_____
###Markdown
**Permutation**Sometimes our dimensions will be in the wrong order! For example, we may be dealing with RGB images with dim [3x48x64], but our pipeline expects the colour dimension to be the last dimension i.e. [48x64x3]. To get around this we can use ```.permute()```
###Code
# `x` has dimensions [color,image_height,image_width]
x = torch.rand(3, 48, 64)
# we want to permute our tensor to be [ image_height , image_width , color ]
x = x.permute(1, 2, 0)
# permute(1,2,0) means:
# the 0th dim of my new tensor = the 1st dim of my old tensor
# the 1st dim of my new tensor = the 2nd
# the 2nd dim of my new tensor = the 0th
print(x.shape)
###Output
_____no_output_____
###Markdown
You may also see ```.transpose()``` used. This works in a similar way to permute, but can only swap two dimensions at once (see the short illustration just after this cell). **Concatenation** In this example, we concatenate two matrices along rows (axis 0, the first element of the shape) vs. columns (axis 1, the second element of the shape). We can see that the first output tensor’s axis-0 length ( 6 ) is the sum of the two input tensors’ axis-0 lengths ( 3+3 ); while the second output tensor’s axis-1 length ( 8 ) is the sum of the two input tensors’ axis-1 lengths ( 4+4 ).
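To illustrate the `.transpose()` note above before moving on to concatenation (a minimal sketch, assuming only that `torch` is imported):

```python
import torch

x = torch.rand(2, 3, 4)
print(x.transpose(0, 2).shape)   # swaps dims 0 and 2 -> torch.Size([4, 3, 2])
print(x.permute(2, 1, 0).shape)  # the equivalent permute call -> torch.Size([4, 3, 2])
```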
###Code
# Create two tensors of the same shape
x = torch.arange(12, dtype=torch.float32).reshape((3, 4))
y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])
#concatenate them along rows
cat_rows = torch.cat((x, y), dim=0)
# concatenate along columns
cat_cols = torch.cat((x, y), dim=1)
# printing outputs
print('Concatenated by rows: shape{} \n {}'.format(list(cat_rows.shape), cat_rows))
print('\n Concatenated by columns: shape{} \n {}'.format(list(cat_cols.shape), cat_cols))
###Output
_____no_output_____
###Markdown
**Conversion to Other Python Objects**Converting a tensor to a NumPy array, or vice versa, is easy. When converting a NumPy array back to a tensor with `torch.tensor()` (as below), the result does not share memory with the source array. This minor inconvenience is actually quite important: when you perform operations on the CPU or on GPUs, you do not want to halt computation, waiting to see whether the NumPy package of Python might want to be doing something else with the same chunk of memory.When converting to a numpy array, the information being tracked by the tensor will be lost, i.e., the computational graph. This will be covered in detail when you are introduced to autograd tomorrow!
###Code
x = torch.randn(5)
print(f"x: {x} | x type: {x.type()}")
y = x.numpy()
print(f"y: {y} | y type: {type(y)}")
z = torch.tensor(y)
print(f"z: {z} | z type: {z.type()}")
###Output
_____no_output_____
###Markdown
To convert a size-1 tensor to a Python scalar, we can invoke the `item()` method or Python’s built-in conversion functions.
###Code
a = torch.tensor([3.5])
a, a.item(), float(a), int(a)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.3: Manipulating TensorsUsing a combination of the methods discussed above, complete the functions below. **Function A** This function takes in two 2D tensors $A$ and $B$ and returns the column sum of $A$ multiplied by the sum of all the elements of $B$, i.e., a scalar, e.g.,:$ A = \begin{bmatrix}1 & 1 \\1 & 1 \end{bmatrix} \,$and$ B = \begin{bmatrix}1 & 2 & 3\\1 & 2 & 3 \end{bmatrix} \,$so$ \, Out = \begin{bmatrix} 2 & 2 \\\end{bmatrix} \cdot 12 = \begin{bmatrix}24 & 24\\\end{bmatrix}$**Function B** This function takes in a square matrix $C$ and returns a 2D tensor consisting of a flattened $C$ with the index of each element appended to this tensor in the row dimension, e.g.,:$ C = \begin{bmatrix}2 & 3 \\-1 & 10 \end{bmatrix} \,$so$ \, Out = \begin{bmatrix}0 & 2 \\1 & 3 \\2 & -1 \\3 & 10\end{bmatrix}$**Hint:** pay close attention to singleton dimensions**Function C**This function takes in two 2D tensors $D$ and $E$. If the dimensions allow it, this function returns the elementwise sum of $E$ reshaped to the shape of $D$, and $D$; else this function returns a 1D tensor that is the concatenation of the two tensors, e.g.,:$ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix} \,$and $ E = \begin{bmatrix}2 & 3 & 0 & 2 \\\end{bmatrix} \, $so$ \, Out = \begin{bmatrix}3 & 2 \\-1 & 5 \end{bmatrix}$$ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix}$and$ \, E = \begin{bmatrix}2 & 3 & 0 \\\end{bmatrix} \,$so$ \, Out = \begin{bmatrix}1 & -1 & -1 & 3 & 2 & 3 & 0 \end{bmatrix}$**Hint:** `torch.numel()` is an easy way of finding the number of elements in a tensor
###Code
def functionA(my_tensor1, my_tensor2):
"""
This function takes in two 2D tensors `my_tensor1` and `my_tensor2`
and returns the column sum of
`my_tensor1` multiplied by the sum of all the elements of `my_tensor2`,
i.e., a scalar.
Args:
my_tensor1: torch.Tensor
my_tensor2: torch.Tensor
Returns:
output: torch.Tensor
The multiplication of the column sum of `my_tensor1` by the sum of
`my_tensor2`.
"""
################################################
## TODO for students: complete functionA
raise NotImplementedError("Student exercise: complete function A")
################################################
# TODO multiplication the sum of the tensors
output = ...
return output
def functionB(my_tensor):
"""
This function takes in a square matrix `my_tensor` and returns a 2D tensor
consisting of a flattened `my_tensor` with the index of each element
appended to this tensor in the row dimension.
Args:
my_tensor: torch.Tensor
Returns:
output: torch.Tensor
Concatenated tensor.
"""
################################################
## TODO for students: complete functionB
raise NotImplementedError("Student exercise: complete function B")
################################################
# TODO flatten the tensor `my_tensor`
my_tensor = ...
# TODO create the idx tensor to be concatenated to `my_tensor`
idx_tensor = ...
# TODO concatenate the two tensors
output = ...
return output
def functionC(my_tensor1, my_tensor2):
"""
This function takes in two 2D tensors `my_tensor1` and `my_tensor2`.
If the dimensions allow it, it returns the
elementwise sum of `my_tensor2` reshaped to the shape of `my_tensor1`, and `my_tensor1`;
else this function returns a 1D tensor that is the concatenation of the
two tensors.
Args:
my_tensor1: torch.Tensor
my_tensor2: torch.Tensor
Returns:
output: torch.Tensor
Concatenated tensor.
"""
################################################
## TODO for students: complete functionC
raise NotImplementedError("Student exercise: complete function C")
################################################
# TODO check we can reshape `my_tensor2` into the shape of `my_tensor1`
if ...:
# TODO reshape `my_tensor2` into the shape of `my_tensor1`
my_tensor2 = ...
# TODO sum the two tensors
output = ...
else:
# TODO flatten both tensors
my_tensor1 = ...
my_tensor2 = ...
# TODO concatenate the two tensors in the correct dimension
output = ...
return output
# add timing to airtable
atform.add_event('Coding Exercise 2.3: Manipulating Tensors')
## Implement the functions above and then uncomment the following lines to test your code
# print(functionA(torch.tensor([[1, 1], [1, 1]]), torch.tensor([[1, 2, 3], [1, 2, 3]])))
# print(functionB(torch.tensor([[2, 3], [-1, 10]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0, 2]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0]])))
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_ea1718cb.py) ```tensor([24, 24])tensor([[ 0, 2], [ 1, 3], [ 2, -1], [ 3, 10]])tensor([[ 3, 2], [-1, 5]])tensor([ 1, -1, -1, 3, 2, 3, 0])``` Section 2.4: GPUs
###Code
# @title Video 6: GPU vs CPU
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1nM4y1K7qx", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"9Mc9GFUtILY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 6: GPU vs CPU')
display(out)
###Output
_____no_output_____
###Markdown
By default, when we create a tensor it will *not* live on the GPU!
###Code
x = torch.randn(10)
print(x.device)
###Output
_____no_output_____
###Markdown
When using Colab notebooks, you will not have access to a GPU by default. In order to start using GPUs we need to request one. We can do this by going to the Runtime tab at the top of the page. By following Runtime -> Change runtime type and selecting "GPU" from the Hardware Accelerator dropdown list, we can start playing with sending tensors to GPUs.Once you have done this, your runtime will restart and you will need to rerun the first setup cell to reimport PyTorch. Then proceed to the next cell.(For more information on the GPU usage policy, see the appendix.) **Now we have a GPU** The cell below should return True.
###Code
print(torch.cuda.is_available())
###Output
_____no_output_____
###Markdown
CUDA is an API developed by Nvidia for interfacing with GPUs. PyTorch provides us with a layer of abstraction, and allows us to launch CUDA kernels using pure Python.In short, we get the power of parallelising our tensor computations on GPUs, whilst only writing (relatively) simple Python!Here, we define the function `set_device`, which returns the device used in the notebook, i.e., `cpu` or `cuda`. Unless otherwise specified, we use this function at the top of every tutorial, and we store the device variable like so:

```python
DEVICE = set_device()
```

Let's define the function using the PyTorch package `torch.cuda`, which is lazily initialized, so we can always import it, and use `is_available()` to determine if our system supports CUDA.
###Code
def set_device():
device = "cuda" if torch.cuda.is_available() else "cpu"
if device != "cuda":
print("GPU is not enabled in this notebook. \n"
"If you want to enable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `GPU` from the dropdown menu")
else:
print("GPU is enabled in this notebook. \n"
"If you want to disable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `None` from the dropdown menu")
return device
###Output
_____no_output_____
###Markdown
Let's make some CUDA tensors!
###Code
# common device agnostic way of writing code that can run on cpu OR gpu
# that we provide for you in each of the tutorials
DEVICE = set_device()
# we can specify a device when we first create our tensor
x = torch.randn(2, 2, device=DEVICE)
print(x.dtype)
print(x.device)
# we can also use the .to() method to change the device a tensor lives on
y = torch.randn(2, 2)
print(f"y before calling to() | device: {y.device} | dtype: {y.type()}")
y = y.to(DEVICE)
print(f"y after calling to() | device: {y.device} | dtype: {y.type()}")
###Output
_____no_output_____
###Markdown
**Operations between cpu tensors and cuda tensors**Note that the type of the tensor changed after calling ```.to()```. What happens if we try to perform operations on tensors that live on different devices?
###Code
x = torch.tensor([0, 1, 2], device=DEVICE)
y = torch.tensor([3, 4, 5], device="cpu")
# Uncomment the following line and run this cell
# z = x + y
###Output
_____no_output_____
###Markdown
We cannot combine cuda tensors and cpu tensors in this fashion. If we want to compute an operation that combines tensors on different devices, we need to move them first! We can use the `.to()` method as before, or the `.cpu()` and `.cuda()` methods. Note that calling `.cuda()` will throw an error if CUDA is not enabled on your machine.Generally in this course, all deep learning is done on the GPU and other computation is done on the CPU, so sometimes we have to pass tensors back and forth between devices, as you will see us do below.
###Code
x = torch.tensor([0, 1, 2], device=DEVICE)
y = torch.tensor([3, 4, 5], device="cpu")
z = torch.tensor([6, 7, 8], device=DEVICE)
# moving to cpu
x = x.to("cpu") # alternatively, you can use x = x.cpu()
print(x + y)
# moving to gpu
y = y.to(DEVICE) # alternatively, you can use y = y.cuda()
print(y + z)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.4: Just how much faster are GPUs?Below is a simple function `simpleFun`. Complete this function, such that it performs the operations:- elementwise multiplication- matrix multiplicationThe operations should be performed on either the CPU or the GPU, as specified by the parameter `device`. We will use the helper function `timeFun(f, dim, iterations, device)`.
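The real helper is provided by the notebook; purely as an illustrative assumption, a minimal sketch of such a timing helper might look like the code below (note the `torch.cuda.synchronize()` call, which waits for queued GPU kernels to finish before the clock stops):

```python
import time
import torch

def timeFun(f, dim, iterations, device='cpu'):
  # accumulate wall-clock time over `iterations` calls of f(dim, device)
  t_total = 0.0
  for _ in range(iterations):
    start = time.time()
    f(dim, device)
    if device != 'cpu':
      # GPU kernels launch asynchronously; wait for them before stopping the clock
      torch.cuda.synchronize()
    t_total += time.time() - start
  print(f"time taken for {iterations} iterations of "
        f"{f.__name__}({dim}, {device}): {t_total:.5f}")
```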
###Code
dim = 10000
iterations = 1
def simpleFun(dim, device):
"""
Args:
dim: integer
device: "cpu" or "cuda"
Returns:
Nothing.
"""
###############################################
## TODO for students: recreate the function, but
## ensure all computations happens on the `device`
raise NotImplementedError("Student exercise: fill in the missing code to create the tensors")
###############################################
# 2D tensor filled with uniform random numbers in [0,1), dim x dim
x = ...
# 2D tensor filled with uniform random numbers in [0,1), dim x dim
y = ...
# 2D tensor filled with the scalar value 2, dim x dim
z = ...
# elementwise multiplication of x and y
a = ...
# matrix multiplication of x and y
b = ...
del x
del y
del z
del a
del b
## TODO: Implement the function above and uncomment the following lines to test your code
# timeFun(f=simpleFun, dim=dim, iterations=iterations)
# timeFun(f=simpleFun, dim=dim, iterations=iterations, device=DEVICE)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_232a94a4.py) Sample output (depends on your hardware)

```
time taken for 1 iterations of simpleFun(10000, cpu): 23.74070
time taken for 1 iterations of simpleFun(10000, cuda): 0.87535
```

**Discuss!**Try reducing the dimensions of the tensors and increasing the iterations. You can get to a point where the CPU-only function is faster than the GPU function. Why might this be? Section 2.5: Datasets and Dataloaders
###Code
# @title Video 7: Getting Data
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1744y127SQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"LSkjPM1gFu0", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 7: Getting Data')
display(out)
###Output
_____no_output_____
###Markdown
When training neural network models you will be working with large amounts of data. Fortunately, PyTorch offers some great tools that help you organize and manipulate your data samples.
###Code
# Import dataset and dataloaders related packages
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
from torchvision.transforms import Compose, Grayscale
###Output
_____no_output_____
###Markdown
**Datasets**The `torchvision` package gives you easy access to many of the publicly available datasets. Let's load the [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset, which contains color images of 10 different classes, like vehicles and animals.Creating an object of type `datasets.CIFAR10` will automatically download and load all images from the dataset. The resulting data structure can be treated as a list containing data samples and their corresponding labels.
###Code
# Download and load the images from the CIFAR10 dataset
cifar10_data = datasets.CIFAR10(
root="data", # path where the images will be stored
download=True, # all images should be downloaded
transform=ToTensor() # transform the images to tensors
)
# Print the number of samples in the loaded dataset
print(f"Number of samples: {len(cifar10_data)}")
print(f"Class names: {cifar10_data.classes}")
###Output
_____no_output_____
###Markdown
We have 50000 samples loaded. Now let's take a look at one of them in detail. Each sample consists of an image and its corresponding label.
###Code
# Choose a random sample
random.seed(2021)
image, label = cifar10_data[random.randint(0, len(cifar10_data) - 1)]
print(f"Label: {cifar10_data.classes[label]}")
print(f"Image size: {image.shape}")
###Output
_____no_output_____
###Markdown
Color images are modeled as 3 dimensional tensors. The first dimension corresponds to the channels (C) of the image (in this case we have RGB images). The second dimension is the height (H) of the image and the third is the width (W). We can denote this image format as C × H × W. Coding Exercise 2.5: Display an image from the datasetLet's try to display the image using `matplotlib`. The code below will not work, because `imshow` expects to have the image in a different format - $H \times W \times C$.You need to reorder the dimensions of the tensor using the `permute` method of the tensor. PyTorch `torch.permute(*dims)` rearranges the original tensor according to the desired ordering and returns a new multidimensional rotated tensor. The size of the returned tensor remains the same as that of the original.**Code hint:**

```python
# create a tensor of size 2 x 4
input_var = torch.randn(2, 4)
# print its size and the tensor
print(input_var.size())
print(input_var)
# dimensions permuted
input_var = input_var.permute(1, 0)
# print its size and the permuted tensor
print(input_var.size())
print(input_var)
```
###Code
# TODO: Uncomment the following line to see the error that arises from the current image format
# plt.imshow(image)
# TODO: Comment the above line and fix this code by reordering the tensor dimensions
# plt.imshow(image.permute(...))
# plt.show()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_b04bd357.py)*Example output:*
###Code
#@title Video 8: Train and Test
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1rV411H7s5", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JokSIuPs-ys", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 8: Train and Test')
display(out)
###Output
_____no_output_____
###Markdown
**Training and Test Datasets**When loading a dataset, you can specify if you want to load the training or the test samples using the `train` argument. We can load the training and test datasets separately. For simplicity, today we will not use both datasets separately, but this topic will be addressed in the coming days.
###Code
# Load the training samples
training_data = datasets.CIFAR10(
root="data",
train=True,
download=True,
transform=ToTensor()
)
# Load the test samples
test_data = datasets.CIFAR10(
root="data",
train=False,
download=True,
transform=ToTensor()
)
# @title Video 9: Data Augmentation - Transformations
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV19B4y1N77t", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"sjegA9OBUPw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 9: Data Augmentation - Transformations')
display(out)
###Output
_____no_output_____
###Markdown
**Dataloader**Another important concept is the `Dataloader`. It is a wrapper around the `Dataset` that splits it into minibatches (important for training the neural network) and makes the data iterable. The `shuffle` argument is used to shuffle the order of the samples across the minibatches.
###Code
# Create dataloaders with
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
*Reproducibility:* DataLoader will reseed workers following the "Randomness in multi-process data loading" algorithm. Use `worker_init_fn()` and a `generator` to preserve reproducibility:

```python
def seed_worker(worker_id):
    worker_seed = torch.initial_seed() % 2**32
    numpy.random.seed(worker_seed)
    random.seed(worker_seed)

g_seed = torch.Generator()
g_seed.manual_seed(my_seed)

DataLoader(
    train_dataset,
    batch_size=batch_size,
    num_workers=num_workers,
    worker_init_fn=seed_worker,
    generator=g_seed
    )
```

**Note:** For the `seed_worker` to have an effect, `num_workers` should be 2 or more. We can now query the next batch from the data loader and inspect it. For this we need to convert the dataloader object to a Python iterator using the function `iter` and then we can query the next batch using the function `next`.We can now see that we have a 4D tensor. This is because we have 64 images in the batch ($B$) and each image has 3 dimensions: channels ($C$), height ($H$) and width ($W$). So, the size of the 4D tensor is $B \times C \times H \times W$.
###Code
# Load the next batch
batch_images, batch_labels = next(iter(train_dataloader))
print('Batch size:', batch_images.shape)
# Display the first image from the batch
plt.imshow(batch_images[0].permute(1, 2, 0))
plt.show()
###Output
_____no_output_____
###Markdown
**Transformations**Another useful feature when loading a dataset is applying transformations on the data - color conversions, normalization, cropping, rotation, etc. There are many predefined transformations in the `torchvision.transforms` package and you can also combine them using the `Compose` transform. Check out the [pytorch documentation](https://pytorch.org/vision/stable/transforms.html) for details. Coding Exercise 2.6: Load the CIFAR10 dataset as grayscale imagesThe goal of this exercise is to load the images from the CIFAR10 dataset as grayscale images. Note that we rerun the `set_seed` function to ensure reproducibility.
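For illustration only (this is not the exercise solution, and the specific transforms here are arbitrary examples), a composed transform pipeline could look like this:

```python
from torchvision.transforms import Compose, Resize, ToTensor

# apply the transforms in order: first resize the PIL image, then convert it to a tensor
example_transform = Compose([
    Resize((16, 16)),
    ToTensor()
])
```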
###Code
def my_data_load():
###############################################
## TODO for students: load the CIFAR10 data,
## but as grayscale images and not as RGB colored.
raise NotImplementedError("Student exercise: fill in the missing code to load the data")
###############################################
## TODO Load the CIFAR10 data using a transform that converts the images to grayscale tensors
data = datasets.CIFAR10(...,
transform=...)
# Display a random grayscale image
image, label = data[random.randint(0, len(data) - 1)]
plt.imshow(image.squeeze(), cmap="gray")
plt.show()
return data
set_seed(seed=2021)
## After implementing the above code, uncomment the following lines to test your code
# data = my_data_load()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_6052d728.py)*Example output:* --- Section 3: Neural Networks*Time estimate: ~1 hour 30 mins (excluding video)* Now it's time for you to create your first neural network using PyTorch. This section will walk you through the process of:- Creating a simple neural network model- Training the network- Visualizing the results of the network- Tweaking the network
###Code
# @title Video 10: CSV Files
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1xy4y1T7kv", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JrC_UAJWYKU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 10: CSV Files')
display(out)
###Output
_____no_output_____
###Markdown
Section 3.1: Data LoadingFirst we need some sample data to train our network on. You can use the function below to generate an example dataset consisting of 2D points along two interleaving half circles. The data will be stored in a file called `sample_data.csv`. You can inspect the file directly in Colab by going to Files on the left side and opening the CSV file.
###Code
# @title Generate sample data
# @markdown we used `scikit-learn` module
from sklearn.datasets import make_moons
# Create a dataset of 256 points with a little noise
X, y = make_moons(256, noise=0.1)
# Store the data as a Pandas data frame and save it to a CSV file
df = pd.DataFrame(dict(x0=X[:,0], x1=X[:,1], y=y))
df.to_csv('sample_data.csv')
###Output
_____no_output_____
###Markdown
Now we can load the data from the CSV file using the Pandas library. Pandas provides many functions for reading files in various formats. When loading data from a CSV file, we can reference the columns directly by their names.
###Code
# Load the data from the CSV file in a Pandas DataFrame
data = pd.read_csv("sample_data.csv")
# Create a 2D numpy array from the x0 and x1 columns
X_orig = data[["x0", "x1"]].to_numpy()
# Create a 1D numpy array from the y column
y_orig = data["y"].to_numpy()
# Print the sizes of the generated 2D points X and the corresponding labels Y
print(f"Size X:{X_orig.shape}")
print(f"Size y:{y_orig.shape}")
# Visualize the dataset. The color of the points is determined by the labels `y_orig`.
plt.scatter(X_orig[:, 0], X_orig[:, 1], s=40, c=y_orig)
plt.show()
###Output
_____no_output_____
###Markdown
**Prepare Data for PyTorch**Now let's prepare the data in a format suitable for PyTorch - convert everything into tensors.
###Code
# Initialize the device variable
DEVICE = set_device()
# Convert the 2D points to a float32 tensor
X = torch.tensor(X_orig, dtype=torch.float32)
# Upload the tensor to the device
X = X.to(DEVICE)
print(f"Size X:{X.shape}")
# Convert the labels to a long integer tensor
y = torch.from_numpy(y_orig).type(torch.LongTensor)
# Upload the tensor to the device
y = y.to(DEVICE)
print(f"Size y:{y.shape}")
###Output
_____no_output_____
###Markdown
Section 3.2: Create a Simple Neural Network
###Code
# @title Video 11: Generating the Neural Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1fK4y1M74a", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"PwSzRohUvck", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 11: Generating the Neural Network')
display(out)
###Output
_____no_output_____
###Markdown
For this example we want to have a simple neural network consisting of 3 layers:- 1 input layer of size 2 (our points have 2 coordinates)- 1 hidden layer of size 16 (you can play with different numbers here)- 1 output layer of size 2 (we want to have the scores for the two classes)During the course you will deal with different kinds of neural networks. On Day 2 we will focus on linear networks, but you will work with some more complicated architectures in the next days. The example here is meant to demonstrate the process of creating and training a neural network end-to-end.**Programming the Network**PyTorch provides a base class for all neural network modules called [`nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html). You need to inherit from `nn.Module` and implement some important methods:`__init__`In the `__init__` method you need to define the structure of your network. Here you will specify what layers the network will consist of, what activation functions will be used, etc.`forward`All neural network modules need to implement the `forward` method. It specifies the computations the network needs to do when data is passed through it.`predict`This is not an obligatory method of a neural network module, but it is a good practice if you want to quickly get the most likely label from the network. It calls the `forward` method and chooses the label with the highest score.`train`This is also not an obligatory method, but it is a good practice to have. The method will be used to train the network parameters and will be implemented later in the notebook.> Note that you can use the `__call__` method of a module directly and it will invoke the `forward` method: `net()` does the same as `net.forward()`.
###Code
# Inherit from nn.Module - the base class for neural network modules provided by Pytorch
class NaiveNet(nn.Module):
# Define the structure of your network
def __init__(self):
super(NaiveNet, self).__init__()
# The network is defined as a sequence of operations
self.layers = nn.Sequential(
nn.Linear(2, 16), # Transformation from the input to the hidden layer
nn.ReLU(), # Activation function (ReLU) is a non-linearity which is widely used because it reduces computation. The function returns 0 if it receives any
# negative input, but for any positive value x, it returns that value back.
nn.Linear(16, 2), # Transformation from the hidden to the output layer
)
# Specify the computations performed on the data
def forward(self, x):
# Pass the data through the layers
return self.layers(x)
# Choose the most likely label predicted by the network
def predict(self, x):
# Pass the data through the networks
output = self.forward(x)
# Choose the label with the highest score
return torch.argmax(output, 1)
# Train the neural network (will be implemented later)
def train(self, X, y):
pass
###Output
_____no_output_____
###Markdown
**Check that your network works**Create an instance of your model and visualize it
###Code
# Create new NaiveNet and transfer it to the device
model = NaiveNet().to(DEVICE)
# Print the structure of the network
print(model)
###Output
_____no_output_____
###Markdown
Coding Exercise 3.2: Classify some samplesNow let's pass some of the points of our dataset through the network and see if it works. You should not expect the network to actually classify the points correctly, because it has not been trained yet. The goal here is just to get some experience with the data structures that are passed to the forward and predict methods and their results.
###Code
## Get the samples
# X_samples = ...
# print("Sample input:\n", X_samples)
## Do a forward pass of the network
# output = ...
# print("\nNetwork output:\n", output)
## Predict the label of each point
# y_predicted = ...
# print("\nPredicted labels:\n", y_predicted)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_af8ae0ff.py) ```Sample input: tensor([[ 0.9066, 0.5052], [-0.2024, 1.1226], [ 1.0685, 0.2809], [ 0.6720, 0.5097], [ 0.8548, 0.5122]], device='cuda:0')Network output: tensor([[ 0.1543, -0.8018], [ 2.2077, -2.9859], [-0.5745, -0.0195], [ 0.1924, -0.8367], [ 0.1818, -0.8301]], device='cuda:0', grad_fn=)Predicted labels: tensor([0, 0, 1, 0, 0], device='cuda:0')``` Section 3.3: Train Your Neural Network
###Code
# @title Video 12: Train the Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1v54y1n7CS", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"4MIqnE4XPaA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 12: Train the Network')
display(out)
###Output
_____no_output_____
###Markdown
Now it is time to train your network on your dataset. Don't worry if you don't fully understand everything yet - we will cover training in much more detail in the next days. For now, the goal is just to see your network in action!You will usually implement the `train` method directly when implementing your class `NaiveNet`. Here, we will implement it as a function outside of the class in order to have it in a separate cell.
###Code
# @title Helper function to plot the decision boundary
# Code adapted from this notebook: https://jonchar.net/notebooks/Artificial-Neural-Network-with-Keras/
from pathlib import Path
def plot_decision_boundary(model, X, y, device):
# Transfer the data to the CPU
X = X.cpu().numpy()
y = y.cpu().numpy()
# Check if the frames folder exists and create it if needed
frames_path = Path("frames")
if not frames_path.exists():
frames_path.mkdir()
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Predict the function value for the whole grid
grid_points = np.c_[xx.ravel(), yy.ravel()]
grid_points = torch.from_numpy(grid_points).type(torch.FloatTensor)
Z = model.predict(grid_points.to(device)).cpu().numpy()
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.binary)
# Implement the train function given a training dataset X and corresponding labels y
def train(model, X, y):
# The Cross Entropy Loss is suitable for classification problems
loss_function = nn.CrossEntropyLoss()
# Create an optimizer (Stochastic Gradient Descent) that will be used to train the network
learning_rate = 1e-2
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
# Number of epochs
epochs = 15000
# List of losses for visualization
losses = []
for i in range(epochs):
# Pass the data through the network and compute the loss
# We'll use the whole dataset during the training instead of using batches
# in order to keep the code simple for now.
y_logits = model.forward(X)
loss = loss_function(y_logits, y)
# Clear the previous gradients and compute the new ones
optimizer.zero_grad()
loss.backward()
# Adapt the weights of the network
optimizer.step()
# Store the loss
losses.append(loss.item())
# Print the results at every 1000th epoch
if i % 1000 == 0:
print(f"Epoch {i} loss is {loss.item()}")
plot_decision_boundary(model, X, y, DEVICE)
plt.savefig('frames/{:05d}.png'.format(i))
return losses
# Create a new network instance a train it
model = NaiveNet().to(DEVICE)
losses = train(model, X, y)
###Output
_____no_output_____
###Markdown
**Plot the loss during training**Plot the loss during the training to see how it reduces and converges.
###Code
plt.plot(np.linspace(1, len(losses), len(losses)), losses)
plt.xlabel("Epoch")
plt.ylabel("Loss")
# @title Visualize the training process
# @markdown ### Execute this cell!
!pip install imageio --quiet
!pip install pathlib --quiet
import imageio
from IPython.core.interactiveshell import InteractiveShell
from IPython.display import Image, display
from pathlib import Path
InteractiveShell.ast_node_interactivity = "all"
# Make a list with all images
images = []
for i in range(10):
filename = "frames/0"+str(i)+"000.png"
images.append(imageio.imread(filename))
# Save the gif
imageio.mimsave('frames/movie.gif', images)
gifPath = Path("frames/movie.gif")
with open(gifPath,'rb') as f:
display(Image(data=f.read(), format='png'))
# @title Video 13: Play with it
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Cq4y1W7BH", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"_GGkapdOdSY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 13: Play with it')
display(out)
###Output
_____no_output_____
###Markdown
Exercise 3.3: Tweak your NetworkYou can now play around with the network a little bit to get a feeling for what different parameters are doing. Here are some ideas of what you could try:- Increase or decrease the number of epochs for training- Increase or decrease the size of the hidden layer- Add one additional hidden layer (a possible variant is sketched below)Can you get the network to better fit the data?
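As one possible sketch of such a tweak (the layer sizes are arbitrary choices, not a recommended answer), a deeper version of the network could look like this:

```python
import torch
import torch.nn as nn

class DeeperNet(nn.Module):
  def __init__(self):
    super(DeeperNet, self).__init__()
    self.layers = nn.Sequential(
        nn.Linear(2, 32),   # a wider first hidden layer
        nn.ReLU(),
        nn.Linear(32, 16),  # an additional hidden layer
        nn.ReLU(),
        nn.Linear(16, 2),   # output layer: scores for the two classes
    )

  def forward(self, x):
    return self.layers(x)

  def predict(self, x):
    return torch.argmax(self.forward(x), 1)

# reuse the existing training loop, e.g.:
# model = DeeperNet().to(DEVICE)
# losses = train(model, X, y)
```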
###Code
# @title Video 14: XOR Widget
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1mB4y1N7QS", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"oTr1nE2rCWg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 14: XOR Widget')
display(out)
###Output
_____no_output_____
###Markdown
The Exclusive OR (XOR) logical operation gives a true (`1`) output when the number of true inputs is odd. That is, a true output results if one, and only one, of the inputs to the gate is true. If both inputs are false (`0`) or both are true, a false output results. Mathematically speaking, XOR represents the inequality function, i.e., the output is true if the inputs are not alike; otherwise, the output is false.In case of two inputs ($X$ and $Y$) the following truth table applies:\begin{array}{ccc}X & Y & \text{XOR} \\\hline0 & 0 & 0 \\0 & 1 & 1 \\1 & 0 & 1 \\1 & 1 & 0 \\\end{array}Here, with `0`, we denote `False`, and with `1` we denote `True` in boolean terms. Interactive Demo 3.3: Solving XORHere we use an open source and famous visualization widget developed by the Tensorflow team, available [here](https://github.com/tensorflow/playground).* Play with the widget and observe that you cannot solve the continuous XOR dataset.* Now add one hidden layer with three units, play with the widget, and set weights by hand to solve this dataset perfectly.For the second part, you should set the weights by clicking on the connections and either typing the value or using the up and down keys to change it by one increment. You could also do the same for the biases by clicking on the tiny square at each neuron's bottom left.Even though there are infinitely many solutions, a neat solution when $f(x)$ is ReLU is: \begin{equation} y = f(x_1)+f(x_2)-f(x_1+x_2)\end{equation}Try to set the weights and biases to implement this function after you have played enough :)
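As a quick numerical sanity check of that identity (a small illustrative sketch, separate from the widget): with $f$ = ReLU, the expression is positive exactly when $x_1$ and $x_2$ have opposite signs (the continuous analogue of XOR) and zero otherwise:

```python
import torch

def xor_like(x1, x2):
  # y = relu(x1) + relu(x2) - relu(x1 + x2)
  f = torch.relu
  return f(x1) + f(x2) - f(x1 + x2)

for x1, x2 in [(2., 1.), (2., -1.), (-2., 1.), (-2., -1.)]:
  out = xor_like(torch.tensor(x1), torch.tensor(x2)).item()
  print(f"x1={x1:+.0f}, x2={x2:+.0f} -> {out}")
# prints 0.0, 1.0, 1.0, 0.0: positive only when the signs differ
```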
###Code
# @markdown ###Play with the parameters to solve XOR
from IPython.display import HTML
HTML('<iframe width="1020" height="660" src="https://playground.arashash.com/#activation=relu&batchSize=10&dataset=xor®Dataset=reg-plane&learningRate=0.03®ularizationRate=0&noise=0&networkShape=&seed=0.91390&showTestData=false&discretize=false&percTrainData=90&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false" allowfullscreen></iframe>')
# @markdown Do you think we can solve the discrete XOR (only 4 possibilities) with only 2 hidden units?
w1_min_xor = 'Select' #@param ['Select', 'Yes', 'No']
if w1_min_xor == 'No':
print("Correct!")
else:
print("How about giving it another try?")
###Output
_____no_output_____
###Markdown
--- Section 4: Ethics And Course Info
###Code
# @title Video 15: Ethics
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Hw41197oB", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Kt6JLi3rUFU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Video 16: Be a group
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1j44y1272h", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Sfp6--d_H1A", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Video 17: Syllabus
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1iB4y1N7uQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"cDvAqG_hAvQ", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Meet our lecturers:Week 1: the building blocks* [Konrad Kording](https://kordinglab.com)* [Andrew Saxe](https://www.saxelab.org/)* [Surya Ganguli](https://ganguli-gang.stanford.edu/)* [Ioannis Mitliagkas](http://mitliagkas.github.io/)* [Lyle Ungar](https://www.cis.upenn.edu/~ungar/)Week 2: making things work* [Alona Fyshe](https://webdocs.cs.ualberta.ca/~alona/)* [Alexander Ecker](https://eckerlab.org/)* [James Evans](https://sociology.uchicago.edu/directory/james-evans)* [He He](https://hhexiy.github.io/)* [Vikash Gilja](https://tnel.ucsd.edu/bio) and [Akash Srivastava](https://akashgit.github.io/)Week 3: more magic* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)* [Jane Wang](http://www.janexwang.com/) and [Feryal Behbahani](https://feryal.github.io/)* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)* [Josh Vogelstein](https://jovo.me/) and [Vincenzo Lomonaco](https://www.vincenzolomonaco.com/)Now, go to the [visualization of ICLR papers](https://iclr.cc/virtual/2021/paper_vis.html). Read a few abstracts. Look at the various clusters. Where do you see yourself in this map? --- Submit to Airtable
###Code
# @title Video 18: Submission info
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1e44y127ti", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JwTn7ej2dq8", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
This is Darryl, the Deep Learning Dapper Lion, and he's here to teach you about content submission to Airtable. At the end of each tutorial there will be an Airtable Submission Cell. Run the cell to generate the Airtable submission button and click on it to submit your information to Airtable.If it is the last tutorial of the day, your button will look like this and take you to the end-of-day survey: otherwise it will look like this: It is critical that you push the submit button for every tutorial you run. Even if you don't finish the tutorial, still submit!Submitting is the only way we can verify that you attempted each tutorial, which is critical for us to be able to track your progress. TL;DR: Basic tutorial workflow1. Work through the tutorial, answering Think! questions and code exercises2. At the end of each tutorial (even if the tutorial is incomplete), run the Airtable submission code cell3. Push the submission button4. If it is the last tutorial of the day, the submission button will also take you to the end-of-day survey on a new page. Complete that and submit it. Submission FAQs: 1. What if I want to change my answers to previous discussion questions? > You are free to change and resubmit any of the answers and Think! questions as many times as you like. However, please only run the Airtable submission code and click on the link once you are ready to submit.2. Okay, but what if I submitted my Airtable anyway and really want to resubmit?> After making changes, you can re-run the Airtable submission code cell. This will result in a second submission from you for the data. This will make Darryl sad as it will be more work for him to clean up the data later. 3. HELP! I accidentally ran the code to generate the Airtable submission button before I was ready to submit! What do I do?> If you run the code to generate the link, anything that happens afterwards will not be captured. Complete the tutorial and make sure to re-run the Airtable submission again when you are finished, before pressing the submission button. 4. What if I want to work on this on my own later, should I wait to submit until I'm finished?> Please submit wherever you are at the end of the day. It's great that you want to keep working on this, but it's important for us to see the places where we tried things that didn't quite work out, so we can fix them for next year. Finally, we try to keep the Airtable code as hidden as possible, but if you ever see any calls to `atform` such as `atform.add_event()` in the coding exercises, just know that this is for saving Airtable information only. It will not affect the code that is being run around it in any way, so please do not modify, comment out, or worry about any of those lines of code.Now, let's try submitting today's course to Airtable by running the next cell and clicking the button when it appears.
###Code
# @title Airtable Submission Link
from IPython import display as IPyDisplay
IPyDisplay.HTML(
f"""
<div>
<a href= "{atform.url()}" target="_blank">
<img src="https://github.com/NeuromatchAcademy/course-content-dl/blob/main/tutorials/static/SurveyButton.png?raw=1"
alt="button link to survey" style="width:410px"></a>
</div>""" )
###Output
_____no_output_____
###Markdown
--- Bonus - 60 years of Machine Learning Research in one Plotby [Hendrik Strobelt](http://hendrik.strobelt.com) (MIT-IBM Watson AI Lab) with support from Benjamin Hoover.In this notebook we visualize a subset* of 3,300 articles retrieved from the AllenAI [S2ORC dataset](https://github.com/allenai/s2orc). We represent each paper by a position that is the output of a dimensionality reduction method applied to a vector representation of each paper. The vector representation is the output of a neural network.*The selection is heavily biased by the keywords and methodology we used to filter. Please see the details section to learn about what we did.
###Code
# @title Import `altair` and load the data
!pip install altair vega_datasets --quiet
import requests
import altair as alt # altair is defining data visualizations
# Source data files
# Position data file maps ID to x,y positions
# original link: http://gltr.io/temp/ml_regexv1_cs_ma_citation+_99perc.pos_umap_cosine_100_d0.1.json
POS_FILE = 'https://osf.io/qyrfn/download'
# original link: http://gltr.io/temp/ml_regexv1_cs_ma_citation+_99perc_clean.csv
# Metadata file maps ID to title, abstract, author,....
META_FILE = 'https://osf.io/vfdu6/download'
# data loading and wrangling
def load_data():
positions = pd.read_json(POS_FILE)
positions[['x', 'y']] = positions['pos'].to_list()
meta = pd.read_csv(META_FILE)
return positions.merge(meta, left_on='id', right_on='paper_id')
# load data
data = load_data()
# @title Define Visualization using Altair
YEAR_PERIOD = "quinquennial" # @param
selection = alt.selection_multi(fields=[YEAR_PERIOD], bind='legend')
data[YEAR_PERIOD] = (data["year"] / 5.0).apply(np.floor) * 5
chart = alt.Chart(data[["x", "y", "authors", "title", YEAR_PERIOD, "citation_count"]], width=800,
height=800).mark_circle(radius=2, opacity=0.2).encode(
alt.Color(YEAR_PERIOD+':O',
scale=alt.Scale(scheme='viridis', reverse=False, clamp=True, domain=list(range(1955,2020,5))),
# legend=alt.Legend(title='Total Records')
),
alt.Size('citation_count',
scale=alt.Scale(type="pow", exponent=1, range=[15, 300])
),
alt.X('x:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
alt.Y('y:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
tooltip=['title', 'authors'],
# size='citation_count',
# color="decade:O",
opacity=alt.condition(selection, alt.value(.8), alt.value(0.2)),
).add_selection(
selection
).interactive()
###Output
_____no_output_____
###Markdown
Let's look at the visualization. Each dot represents one paper. Close dots mean that the respective papers are more closely related than distant ones. The color indicates the 5-year period of when the paper was published. The dot size indicates the citation count (within the S2ORC corpus) as of July 2020. The view is **interactive** and allows for three main interactions. Try them and play around.1. hover over a dot to see a tooltip (title, author)2. select a year in the legend (right) to filter dots3. zoom in/out with scroll -- double click resets view
###Code
chart
###Output
_____no_output_____
###Markdown
Questions By playing around, can you find some answers to the following questions? 1. Can you find topical clusters? What cluster might occur because of a filtering error? 2. Can you see a temporal trend in the data and clusters? 3. Can you determine when deep learning methods started booming? 4. Can you find the key papers that were written before the DL "winter" that define milestones for a cluster? (tip: look for large dots of a different color) Methods Here is what we did: 1. Filter all papers that fulfilled the criteria: - are categorized as `Computer Science` or `Mathematics` - one of the following keywords appearing in title or abstract: `"machine learning|artificial intelligence|neural network|(machine|computer) vision|perceptron|network architecture| RNN | CNN | LSTM | BLEU | MNIST | CIFAR |reinforcement learning|gradient descent| Imagenet "` 2. per year, remove all papers that are below the 99th percentile of citation count in that year (a pandas sketch of this step is shown below) 3. embed each paper using abstract+title with the SPECTER model 4. project the embeddings to 2D using UMAP 5. visualize using Altair Find Authors
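Before moving on to the author search below, here is a minimal sketch of how step 2 of the Methods above (keeping only papers at or above the 99th percentile of citation count within each year) might look in pandas. The dataframe and the `year` / `citation_count` column names are assumptions based on the metadata loaded earlier; this is an illustration, not the exact pipeline code.

```python
import pandas as pd

def keep_top_percentile(df: pd.DataFrame, percentile: float = 0.99) -> pd.DataFrame:
    # Per-year citation threshold (e.g., the 99th percentile within each publication year)
    thresholds = df.groupby("year")["citation_count"].transform(lambda s: s.quantile(percentile))
    # Keep only the papers at or above their year's threshold
    return df[df["citation_count"] >= thresholds]

# Example usage on the metadata loaded above (assumed to have `year` and `citation_count` columns):
# top_papers = keep_top_percentile(data, percentile=0.99)
```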
###Code
# @title Edit the `AUTHOR_FILTER` variable to full text search for authors.
AUTHOR_FILTER = "Rush " # @param space at the end means "word border"
### Don't ignore case when searching...
FLAGS = 0
### uncomment to ignore case
# FLAGS = re.IGNORECASE
## --- FILTER CODE.. make it your own ---
import re
data['issel'] = data['authors'].str.contains(AUTHOR_FILTER, na=False, flags=FLAGS, )
if data['issel'].mean()<0.0000000001:
print('No match found')
## --- FROM HERE ON VIS CODE ---
alt.Chart(data[["x", "y", "authors", "title", YEAR_PERIOD, "citation_count", "issel"]], width=800,
height=800) \
.mark_circle(stroke="black", strokeOpacity=1).encode(
alt.Color(YEAR_PERIOD+':O',
scale=alt.Scale(scheme='viridis', reverse=False),
# legend=alt.Legend(title='Total Records')
),
alt.Size('citation_count',
scale=alt.Scale(type="pow", exponent=1, range=[15, 300])
),
alt.StrokeWidth('issel:Q', scale=alt.Scale(type="linear", domain=[0,1], range=[0, 2]), legend=None),
alt.Opacity('issel:Q', scale=alt.Scale(type="linear", domain=[0,1], range=[.2, 1]), legend=None),
alt.X('x:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
alt.Y('y:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
tooltip=['title', 'authors'],
).interactive()
###Output
_____no_output_____
###Markdown
Tutorial 1: PyTorch**Week 1, Day 1: Basics and PyTorch****By Neuromatch Academy**__Content creators:__ Shubh Pachchigar, Vladimir Haltakov, Matthew Sargent, Konrad Kording__Content reviewers:__ Deepak Raya, Siwei Bai, Kelson Shilling-Scrivo__Content editors:__ Anoop Kulkarni, Spiros Chavlis__Production editors:__ Arush Tagade, Spiros Chavlis **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial Objectives We have a few specific objectives for this tutorial:* Learn about PyTorch and tensors* Tensor Manipulations* Data Loading* GPUs and Cuda Tensors* Train NaiveNet* Get to know your pod* Start thinking about the course as a whole
###Code
# @title Tutorial slides
# @markdown These are the slides for the videos in this tutorial today
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/wcjrv/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
--- Setup Throughout your Neuromatch tutorials, most (probably all!) notebooks contain setup cells. These cells import the required Python packages (e.g., PyTorch, NumPy), set global or environment variables, and load in helper functions for things like plotting. In some tutorials, you will notice that we install some dependencies even if they are preinstalled on Google Colab or Kaggle. This happens because we have added automation to our repository through [GitHub Actions](https://docs.github.com/en/actions/learn-github-actions/introduction-to-github-actions). Be sure to run all of the cells in the setup section. Feel free to expand them and have a look at what you are loading in, but you should be able to fulfill the learning objectives of every tutorial without having to look at these cells. If you start building your own projects on this code base, we highly recommend looking at these cells in more detail.
###Code
# @title Install dependencies
!pip install pandas --quiet
!pip install git+https://github.com/NeuromatchAcademy/evaltools --quiet
from evaltools.airtable import AirtableForm
# Imports
import time
import torch
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from torch import nn
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
# @title Figure Settings
import ipywidgets as widgets
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/content-creation/main/nma.mplstyle")
# @title Helper Functions
atform = AirtableForm('appn7VdPRseSoMXEG','W1D1_T1','https://portal.neuromatchacademy.org/api/redirect/to/97e94a29-0b3a-4e16-9a8d-f6838a5bd83d')
def checkExercise1(A, B, C, D):
"""
Helper function for checking exercise.
Args:
A: torch.Tensor
B: torch.Tensor
C: torch.Tensor
D: torch.Tensor
Returns:
Nothing.
"""
errors = []
# TODO better errors and error handling
if not torch.equal(A.to(int),torch.ones(20, 21).to(int)):
errors.append(f"Got: {A} \n Expected: {torch.ones(20, 21)} (shape: {torch.ones(20, 21).shape})")
if not np.array_equal( B.numpy(),np.vander([1, 2, 3], 4)):
errors.append("B is not a tensor containing the elements of Z ")
if C.shape != (20, 21):
errors.append("C is not the correct shape ")
if not torch.equal(D, torch.arange(4, 41, step=2)):
errors.append("D does not contain the correct elements")
if errors == []:
print("All correct!")
else:
[print(e) for e in errors]
def timeFun(f, dim, iterations, device='cpu'):
  t_total = 0
  for _ in range(iterations):
    start = time.time()
    f(dim, device)
    end = time.time()
    t_total += end - start
  print(f"time taken for {iterations} iterations of {f.__name__}({dim}, {device}): {t_total:.5f}")
###Output
_____no_output_____
###Markdown
**Important note: Google Colab users***Scratch Code Cells*If you want to quickly try out something or take a look at the data, you can use scratch code cells. They allow you to run Python code, but will not mess up the structure of your notebook. To open a new scratch cell, go to *Insert* → *Scratch code cell*. Section 1: Welcome to Neuromatch Deep learning course*Time estimate: ~25mins*
###Code
# @title Video 1: Welcome and History
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Av411n7oL", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"ca21SNqt78I", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing
atform.add_event('Video 1: Welcome and History')
display(out)
###Output
_____no_output_____
###Markdown
*This will be an intensive 3 week adventure. We will all learn Deep Learning. In a group. Groups need standards. Read our [Code of Conduct](https://docs.google.com/document/d/1eHKIkaNbAlbx_92tLQelXnicKXEcvFzlyzzeWjEtifM/edit?usp=sharing).
###Code
# @title Video 2: Why DL is cool
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1gf4y1j7UZ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"l-K6495BN-4", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 2: Why DL is cool')
display(out)
###Output
_____no_output_____
###Markdown
**Describe what you hope to get out of this course in about 100 words.** --- Section 2: The Basics of PyTorch*Time estimate: ~2 hours 05 mins* PyTorch is a Python-based scientific computing package targeted at two sets of audiences:- A replacement for NumPy to use the power of GPUs- A deep learning platform that provides significant flexibility and speed. At its core, PyTorch provides a few key features:- A multidimensional [Tensor](https://pytorch.org/docs/stable/tensors.html) object, similar to [NumPy Array](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html) but with GPU acceleration.- An optimized **autograd** engine for automatically computing derivatives.- A clean, modular API for building and deploying **deep learning models**.You can find more information about PyTorch in the appendix. Section 2.1: Creating Tensors
###Code
# @title Video 3: Making Tensors
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Rw411d7Uy", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"jGKd_4tPGrw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 3: Making Tensors')
display(out)
###Output
_____no_output_____
###Markdown
There are various ways of creating tensors, and when doing any real deep learning project we will usually have to do so. **Construct tensors directly:**---
###Code
# we can construct a tensor directly from some common python iterables,
# such as list and tuple; nested iterables can also be handled as long as the
# dimensions make sense
# tensor from a list
a = torch.tensor([0, 1, 2])
#tensor from a tuple of tuples
b = ((1.0, 1.1), (1.2, 1.3))
b = torch.tensor(b)
# tensor from a numpy array
c = np.ones([2, 3])
c = torch.tensor(c)
print(f"Tensor a: {a}")
print(f"Tensor b: {b}")
print(f"Tensor c: {c}")
###Output
_____no_output_____
###Markdown
**Some common tensor constructors:**---
###Code
# the numerical arguments we pass to these constructors
# determine the shape of the output tensor
x = torch.ones(5, 3)
y = torch.zeros(2)
z = torch.empty(1, 1, 5)
print(f"Tensor x: {x}")
print(f"Tensor y: {y}")
print(f"Tensor z: {z}")
###Output
_____no_output_____
###Markdown
Notice that ```.empty()``` does not return zeros, but seemingly random small numbers. Unlike ```.zeros()```, which initialises the elements of the tensor with zeros, ```.empty()``` just allocates the memory. It is hence a bit faster if you are looking to just create a tensor. **Creating random tensors and tensors like other tensors:**---
###Code
# there are also constructors for random numbers
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
# there are also constructors that allow us to construct
# a tensor according to the above constructors, but with
# dimensions equal to another tensor
c = torch.zeros_like(a)
d = torch.rand_like(c)
print(f"Tensor a: {a}")
print(f"Tensor b: {b}")
print(f"Tensor c: {c}")
print(f"Tensor d: {d}")
###Output
_____no_output_____
###Markdown
*Reproducibility*:
- PyTorch random number generator: You can use `torch.manual_seed()` to seed the RNG for all devices (both CPU and CUDA):
```python
import torch
torch.manual_seed(0)
```
- For custom operators, you might need to set the Python seed as well:
```python
import random
random.seed(0)
```
- Random number generators in other libraries:
```python
import numpy as np
np.random.seed(0)
```
Here, we define for you a function called `set_seed` that does the job for you!
###Code
def set_seed(seed=None, seed_torch=True):
"""
Function that controls randomness. NumPy and random modules must be imported.
Args:
seed : Integer
A non-negative integer that defines the random state. Default is `None`.
seed_torch : Boolean
If `True` sets the random seed for pytorch tensors, so pytorch module
must be imported. Default is `True`.
Returns:
Nothing.
"""
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
###Output
_____no_output_____
###Markdown
Now, let's use the `set_seed` function in the previous example. Execute the cell multiple times to verify that the numbers printed are always the same.
###Code
def simplefun(seed=True, my_seed=None):
if seed:
set_seed(seed=my_seed)
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
print("Tensor a: ", a)
print("Tensor b: ", b)
simplefun(seed=True, my_seed=0) # Turn `seed` to `False` or change `my_seed`
###Output
_____no_output_____
###Markdown
**Numpy-like number ranges:**---The ```.arange()``` and ```.linspace()``` methods behave as you would expect if you are familiar with numpy.
###Code
a = torch.arange(0, 10, step=1)
b = np.arange(0, 10, step=1)
c = torch.linspace(0, 5, steps=11)
d = np.linspace(0, 5, num=11)
print(f"Tensor a: {a}\n")
print(f"Numpy array b: {b}\n")
print(f"Tensor c: {c}\n")
print(f"Numpy array d: {d}\n")
###Output
_____no_output_____
###Markdown
Coding Exercise 2.1: Creating TensorsBelow you will find some incomplete code. Fill in the missing code to construct the specified tensors.We want the tensors: $A:$ 20 by 21 tensor consisting of ones$B:$ a tensor with elements equal to the elements of numpy array $Z$$C:$ a tensor with the same number of elements as $A$ but with values $\sim U(0,1)$$D:$ a 1D tensor containing the even numbers between 4 and 40 inclusive.
###Code
def tensor_creation(Z):
"""A function that creates various tensors.
Args:
Z (numpy.ndarray): An array of shape
Returns:
A : 20 by 21 tensor consisting of ones
B : a tensor with elements equal to the elements of numpy array Z
C : a tensor with the same number of elements as A but with values ∼U(0,1)
D : a 1D tensor containing the even numbers between 4 and 40 inclusive.
"""
#################################################
## TODO for students: fill in the missing code
## from the first expression
raise NotImplementedError("Student exercise: say what they should have done")
#################################################
A = ...
B = ...
C = ...
D = ...
return A, B, C, D
# add timing to airtable
atform.add_event('Coding Exercise 2.1: Creating Tensors')
# numpy array to copy later
Z = np.vander([1, 2, 3], 4)
# Uncomment below to check your function!
# A, B, C, D = tensor_creation(Z)
# checkExercise1(A, B, C, D)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_ad4f6c0f.py) ```All correct!``` Section 2.2: Operations in PyTorch**Tensor-Tensor operations**We can perform operations on tensors using methods under ```torch.```
###Code
# @title Video 4: Tensor Operators
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1G44y127As", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"R1R8VoYXBVA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 4: Tensor Operators')
display(out)
###Output
_____no_output_____
###Markdown
**Tensor-Tensor operations**We can perform operations on tensors using methods under ```torch.```
###Code
a = torch.ones(5, 3)
b = torch.rand(5, 3)
c = torch.empty(5, 3)
d = torch.empty(5, 3)
# this only works if c and d already exist
torch.add(a, b, out=c)
#Pointwise Multiplication of a and b
torch.multiply(a, b, out=d)
print(c)
print(d)
###Output
_____no_output_____
###Markdown
However, in PyTorch most common Python operators are overridden. The common standard arithmetic operators (+, -, *, /, and **) have all been lifted to elementwise operations.
###Code
x = torch.tensor([1, 2, 4, 8])
y = torch.tensor([1, 2, 3, 4])
x + y, x - y, x * y, x / y, x**y # The ** operator is exponentiation
###Output
_____no_output_____
###Markdown
**Tensor Methods** Tensors also have a number of common arithmetic operations built in. A full list of **all** methods can be found in the appendix (there are a lot!) All of these operations should have similar syntax to their numpy equivalents.(Feel free to skip if you already know this!)
###Code
x = torch.rand(3, 3)
print(x)
print("\n")
# sum() - note the axis is the axis you move across when summing
print(f"Sum of every element of x: {x.sum()}")
print(f"Sum of the columns of x: {x.sum(axis=0)}")
print(f"Sum of the rows of x: {x.sum(axis=1)}")
print("\n")
print(f"Mean value of all elements of x {x.mean()}")
print(f"Mean values of the columns of x {x.mean(axis=0)}")
print(f"Mean values of the rows of x {x.mean(axis=1)}")
###Output
_____no_output_____
###Markdown
**Matrix Operations**The ```@``` symbol is overridden to represent matrix multiplication. You can also use ```torch.matmul()``` to multiply tensors. For the dot product, you can use ```torch.dot()```, or manipulate the axes of your tensors and do matrix multiplication (we will cover that in the next section). Transposes of 2D tensors are obtained using ```torch.t()``` or ```Tensor.T```. Note the lack of brackets for ```Tensor.T``` - it is an attribute, not a method. Coding Exercise 2.2 : Simple tensor operationsBelow are two expressions involving operations on matrices. $$ \textbf{A} = \begin{bmatrix}2 &4 \\5 & 7 \end{bmatrix} \begin{bmatrix} 1 &1 \\2 & 3\end{bmatrix} + \begin{bmatrix}10 & 10 \\ 12 & 1 \end{bmatrix} $$and$$ b = \begin{bmatrix} 3 \\ 5 \\ 7\end{bmatrix} \cdot \begin{bmatrix} 2 \\ 4 \\ 8\end{bmatrix}$$The code block below, which computes these expressions using PyTorch, is incomplete - fill in the missing lines.
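Before the exercise, here is a quick, self-contained illustration of the operators described above (```@```, ```torch.matmul()```, ```torch.dot()```, and transposes) on small toy tensors that are unrelated to the exercise matrices:

```python
import torch

M = torch.tensor([[1., 2.], [3., 4.]])
N = torch.tensor([[0., 1.], [1., 0.]])
v = torch.tensor([1., 2., 3.])
w = torch.tensor([4., 5., 6.])

print(M @ N)               # matrix multiplication with the @ operator
print(torch.matmul(M, N))  # same result using torch.matmul
print(torch.dot(v, w))     # dot product of two 1D tensors -> tensor(32.)
print(M.T)                 # transpose as an attribute
print(torch.t(M))          # transpose as a function
```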
###Code
def simple_operations(a1: torch.Tensor, a2: torch.Tensor, a3: torch.Tensor):
################################################
  ## TODO for students: complete the first computation using the argument matrices
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
################################################
# multiplication of tensor a1 with tensor a2 and then add it with tensor a3
answer = ...
return answer
# add timing to airtable
atform.add_event('Coding Exercise 2.2 : Simple tensor operations-simple_operations')
# Computing expression 1:
# init our tensors
a1 = torch.tensor([[2, 4], [5, 7]])
a2 = torch.tensor([[1, 1], [2, 3]])
a3 = torch.tensor([[10, 10], [12, 1]])
## uncomment to test your function
# A = simple_operations(a1, a2, a3)
# print(A)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_5562ea1d.py) ```tensor([[20, 24], [31, 27]])```
###Code
def dot_product(b1: torch.Tensor, b2: torch.Tensor):
###############################################
  ## TODO for students: complete the second computation using the argument tensors
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
###############################################
# Use torch.dot() to compute the dot product of two tensors
product = ...
return product
# add timing to airtable
atform.add_event('Coding Exercise 2.2 : Simple tensor operations-dot_product')
# Computing expression 2:
b1 = torch.tensor([3, 5, 7])
b2 = torch.tensor([2, 4, 8])
## Uncomment to test your function
# b = dot_product(b1, b2)
# print(b)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_00491ea4.py) ```tensor(82)``` Section 2.3 Manipulating Tensors in Pytorch
###Code
# @title Video 5: Tensor Indexing
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1BM4y1K7pD", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"0d0KSJ3lJbg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 5: Tensor Indexing')
display(out)
###Output
_____no_output_____
###Markdown
**Indexing**Just as in numpy, elements in a tensor can be accessed by index. As in any numpy array, the first element has index 0 and ranges include the first element but exclude the last. We can access elements according to their relative position to the end of the list by using negative indices. Range-based indexing is also referred to as slicing. For example, [-1] selects the last element; [1:3] selects the second and the third elements, and [:-2] will select all elements excluding the last and second-to-last elements.
###Code
x = torch.arange(0, 10)
print(x)
print(x[-1])
print(x[1:3])
print(x[:-2])
###Output
_____no_output_____
###Markdown
When we have multidimensional tensors, indexing rules work the same way as numpy.
###Code
# make a 5D tensor
x = torch.rand(1, 2, 3, 4, 5)
print(f" shape of x[0]:{x[0].shape}")
print(f" shape of x[0][0]:{x[0][0].shape}")
print(f" shape of x[0][0][0]:{x[0][0][0].shape}")
###Output
_____no_output_____
###Markdown
**Flatten and reshape**There are various methods for reshaping tensors. It is common to have to express 2D data in 1D format. Similarly, it is also common to have to reshape a 1D tensor into a 2D tensor. We can achieve this with the ```.flatten()``` and ```.reshape()``` methods.
###Code
z = torch.arange(12).reshape(6, 2)
print(f"Original z: \n {z}")
# 2D -> 1D
z = z.flatten()
print(f"Flattened z: \n {z}")
# and back to 2D
z = z.reshape(3, 4)
print(f"Reshaped (3x4) z: \n {z}")
###Output
_____no_output_____
###Markdown
You will also see the ```.view()``` method used a lot to reshape tensors. There is a subtle difference between ```.view()``` and ```.reshape()```, though for now we will just use ```.reshape()```. The documentation can be found in the appendix. **Squeezing tensors**When processing batches of data, you will quite often be left with singleton dimensions, e.g., [1, 10] or [256, 1, 3]. These dimensions can quite easily mess up your matrix operations if you don't plan on them being there...In order to compress tensors along their singleton dimensions we can use the ```.squeeze()``` method. We can use the ```.unsqueeze()``` method to do the opposite.
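As a brief aside before the squeezing example below, here is a minimal sketch of the subtle difference mentioned above: ```.view()``` requires the tensor's memory layout to be compatible (contiguous) and never copies data, while ```.reshape()``` will silently copy the data when needed.

```python
import torch

t = torch.arange(6).reshape(2, 3)
u = t.T  # transposing makes the underlying memory non-contiguous

print(u.reshape(6))           # works: reshape copies the data when necessary
try:
    print(u.view(6))          # fails: view cannot reinterpret non-contiguous memory
except RuntimeError as err:
    print(f"view failed: {err}")

print(u.contiguous().view(6))  # works after making a contiguous copy
```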
###Code
x = torch.randn(1, 10)
# printing the zeroth element of the tensor will not give us the first number!
print(x.shape)
print(f"x[0]: {x[0]}")
###Output
_____no_output_____
###Markdown
Because of that pesky singleton dimension, x[0] gave us the first row instead!
###Code
# lets get rid of that singleton dimension and see what happens now
x = x.squeeze(0)
print(x.shape)
print(f"x[0]: {x[0]}")
# adding singleton dimensions works a similar way, and is often used when tensors
# being added need same number of dimensions
y = torch.randn(5, 5)
print(f"shape of y: {y.shape}")
# lets insert a singleton dimension
y = y.unsqueeze(1)
print(f"shape of y: {y.shape}")
###Output
_____no_output_____
###Markdown
**Permutation**Sometimes our dimensions will be in the wrong order! For example, we may be dealing with RGB images with dim [3x48x64], but our pipeline expects the colour dimension to be the last dimension i.e. [48x64x3]. To get around this we can use ```.permute()```
###Code
# `x` has dimensions [color,image_height,image_width]
x = torch.rand(3, 48, 64)
# we want to permute our tensor to be [ image_height , image_width , color ]
x = x.permute(1, 2, 0)
# permute(1,2,0) means:
# the 0th dim of my new tensor = the 1st dim of my old tensor
# the 1st dim of my new tensor = the 2nd
# the 2nd dim of my new tensor = the 0th
print(x.shape)
###Output
_____no_output_____
###Markdown
You may also see ```.transpose()``` used. This works in a similar way to permute, but can only swap two dimensions at once. **Concatenation** In this example, we concatenate two matrices along rows (axis 0, the first element of the shape) vs. columns (axis 1, the second element of the shape). We can see that the first output tensor's axis-0 length (6) is the sum of the two input tensors' axis-0 lengths (3 + 3), while the second output tensor's axis-1 length (8) is the sum of the two input tensors' axis-1 lengths (4 + 4).
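Before the concatenation example below, a quick sketch of ```.transpose()```, which swaps exactly two dimensions of a tensor (here the channel and width dimensions of a toy image-like tensor):

```python
import torch

x = torch.rand(3, 48, 64)  # [channels, height, width]
y = x.transpose(0, 2)      # swap dims 0 and 2 -> [width, height, channels]
print(y.shape)             # torch.Size([64, 48, 3])
```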
###Code
# Create two tensors of the same shape
x = torch.arange(12, dtype=torch.float32).reshape((3, 4))
y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])
#concatenate them along rows
cat_rows = torch.cat((x, y), dim=0)
# concatenate along columns
cat_cols = torch.cat((x, y), dim=1)
# printing outputs
print('Concatenated by rows: shape{} \n {}'.format(list(cat_rows.shape), cat_rows))
print('\n Concatenated by colums: shape{} \n {}'.format(list(cat_cols.shape), cat_cols))
###Output
_____no_output_____
###Markdown
**Conversion to Other Python Objects**Converting a tensor to a NumPy array, or vice versa, is easy. The converted result does not share memory. This minor inconvenience is actually quite important: when you perform operations on the CPU or on GPUs, you do not want to halt computation waiting to see whether the NumPy package of Python might want to be doing something else with the same chunk of memory. When converting to a numpy array, the information being tracked by the tensor will be lost, i.e., the computational graph. This will be covered in detail when you are introduced to autograd tomorrow!
###Code
x = torch.randn(5)
print(f"x: {x} | x type: {x.type()}")
y = x.numpy()
print(f"y: {y} | y type: {type(y)}")
z = torch.tensor(y)
print(f"z: {z} | z type: {z.type()}")
###Output
_____no_output_____
###Markdown
To convert a size-1 tensor to a Python scalar, we can invoke the item function or Python’s built-in functions.
###Code
a = torch.tensor([3.5])
a, a.item(), float(a), int(a)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.3: Manipulating TensorsUsing a combination of the methods discussed above, complete the functions below. **Function A** This function takes in two 2D tensors $A$ and $B$ and returns the column sum of $A$ multiplied by the sum of all the elements of $B$, i.e., a scalar, e.g.,:$ A = \begin{bmatrix}1 & 1 \\1 & 1 \end{bmatrix} \,$and$ B = \begin{bmatrix}1 & 2 & 3\\1 & 2 & 3 \end{bmatrix} \,$so$ \, Out = \begin{bmatrix} 2 & 2 \\\end{bmatrix} \cdot 12 = \begin{bmatrix}24 & 24\\\end{bmatrix}$**Function B** This function takes in a square matrix $C$ and returns a 2D tensor consisting of a flattened $C$ with the index of each element appended to this tensor in the row dimension, e.g.,:$ C = \begin{bmatrix}2 & 3 \\-1 & 10 \end{bmatrix} \,$so$ \, Out = \begin{bmatrix}0 & 2 \\1 & 3 \\2 & -1 \\3 & 10\end{bmatrix}$**Hint:** pay close attention to singleton dimensions**Function C**This function takes in two 2D tensors $D$ and $E$. If the dimensions allow it, this function returns the elementwise sum of $D$-shaped $E$, and $D$; else this function returns a 1D tensor that is the concatenation of the two tensors, e.g.,:$ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix} \,$and $ E = \begin{bmatrix}2 & 3 & 0 & 2 \\\end{bmatrix} \, $so$ \, Out = \begin{bmatrix}3 & 2 \\-1 & 5 \end{bmatrix}$$ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix}$and$ \, E = \begin{bmatrix}2 & 3 & 0 \\\end{bmatrix} \,$so$ \, Out = \begin{bmatrix}1 & -1 & -1 & 3 & 2 & 3 & 0 \end{bmatrix}$**Hint:** `torch.numel()` is an easy way of finding the number of elements in a tensor
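The hint above mentions ```torch.numel()```; here is a one-line illustration on a tensor unrelated to the exercise:

```python
import torch
print(torch.numel(torch.ones(3, 4)))  # 12 elements in a 3 x 4 tensor
```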
###Code
def functionA(my_tensor1, my_tensor2):
"""
This function takes in two 2D tensors `my_tensor1` and `my_tensor2`
and returns the column sum of
  `my_tensor1` multiplied by the sum of all the elements of `my_tensor2`,
i.e., a scalar.
Args:
my_tensor1: torch.Tensor
my_tensor2: torch.Tensor
  Returns:
output: torch.Tensor
The multiplication of the column sum of `my_tensor1` by the sum of
`my_tensor2`.
"""
################################################
## TODO for students: complete functionA
raise NotImplementedError("Student exercise: complete function A")
################################################
# TODO multiplication the sum of the tensors
output = ...
return output
def functionB(my_tensor):
"""
This function takes in a square matrix `my_tensor` and returns a 2D tensor
consisting of a flattened `my_tensor` with the index of each element
appended to this tensor in the row dimension.
Args:
my_tensor: torch.Tensor
  Returns:
output: torch.Tensor
Concatenated tensor.
"""
################################################
## TODO for students: complete functionB
raise NotImplementedError("Student exercise: complete function B")
################################################
# TODO flatten the tensor `my_tensor`
my_tensor = ...
# TODO create the idx tensor to be concatenated to `my_tensor`
idx_tensor = ...
# TODO concatenate the two tensors
output = ...
return output
def functionC(my_tensor1, my_tensor2):
"""
This function takes in two 2D tensors `my_tensor1` and `my_tensor2`.
If the dimensions allow it, it returns the
  elementwise sum of `my_tensor1`-shaped `my_tensor2`, and `my_tensor1`;
else this function returns a 1D tensor that is the concatenation of the
two tensors.
Args:
my_tensor1: torch.Tensor
my_tensor2: torch.Tensor
  Returns:
output: torch.Tensor
Concatenated tensor.
"""
################################################
  ## TODO for students: complete functionC
raise NotImplementedError("Student exercise: complete function C")
################################################
# TODO check we can reshape `my_tensor2` into the shape of `my_tensor1`
if ...:
# TODO reshape `my_tensor2` into the shape of `my_tensor1`
my_tensor2 = ...
# TODO sum the two tensors
output = ...
else:
# TODO flatten both tensors
my_tensor1 = ...
my_tensor2 = ...
# TODO concatenate the two tensors in the correct dimension
output = ...
return output
# add timing to airtable
atform.add_event('Coding Exercise 2.3: Manipulating Tensors')
## Implement the functions above and then uncomment the following lines to test your code
# print(functionA(torch.tensor([[1, 1], [1, 1]]), torch.tensor([[1, 2, 3], [1, 2, 3]])))
# print(functionB(torch.tensor([[2, 3], [-1, 10]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0, 2]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0]])))
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_ea1718cb.py)
```
tensor([24, 24])
tensor([[ 0,  2],
        [ 1,  3],
        [ 2, -1],
        [ 3, 10]])
tensor([[ 3,  2],
        [-1,  5]])
tensor([ 1, -1, -1,  3,  2,  3,  0])
```
Section 2.4: GPUs
###Code
# @title Video 6: GPU vs CPU
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1nM4y1K7qx", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"9Mc9GFUtILY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 6: GPU vs CPU')
display(out)
###Output
_____no_output_____
###Markdown
By default, when we create a tensor it will *not* live on the GPU!
###Code
x = torch.randn(10)
print(x.device)
###Output
_____no_output_____
###Markdown
When using Colab notebooks, you will not have access to a GPU by default. In order to start using GPUs we need to request one. We can do this by going to the runtime tab at the top of the page. By following Runtime -> Change runtime type and selecting "GPU" from the Hardware Accelerator dropdown list, we can start playing with sending tensors to GPUs. Once you have done this, your runtime will restart and you will need to rerun the first setup cell to reimport PyTorch. Then proceed to the next cell. (For more information on the GPU usage policy, see the appendix.) **Now we have a GPU** The cell below should return True.
###Code
print(torch.cuda.is_available())
###Output
_____no_output_____
###Markdown
CUDA is an API developed by Nvidia for interfacing with GPUs. PyTorch provides us with a layer of abstraction, and allows us to launch CUDA kernels using pure Python. In short, we get the power of parallelising our tensor computations on GPUs, whilst only writing (relatively) simple Python! Here, we define the function `set_device`, which returns the device to use in the notebook, i.e., `cpu` or `cuda`. Unless otherwise specified, we use this function at the top of every tutorial, and we store the device in a variable such as
```python
DEVICE = set_device()
```
Let's define the function using the PyTorch package `torch.cuda`, which is lazily initialized, so we can always import it, and use `is_available()` to determine if our system supports CUDA.
###Code
def set_device():
device = "cuda" if torch.cuda.is_available() else "cpu"
if device != "cuda":
print("GPU is not enabled in this notebook. \n"
"If you want to enable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `GPU` from the dropdown menu")
else:
print("GPU is enabled in this notebook. \n"
"If you want to disable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `None` from the dropdown menu")
return device
###Output
_____no_output_____
###Markdown
Let's make some CUDA tensors!
###Code
# common device agnostic way of writing code that can run on cpu OR gpu
# that we provide for you in each of the tutorials
DEVICE = set_device()
# we can specify a device when we first create our tensor
x = torch.randn(2, 2, device=DEVICE)
print(x.dtype)
print(x.device)
# we can also use the .to() method to change the device a tensor lives on
y = torch.randn(2, 2)
print(f"y before calling to() | device: {y.device} | dtype: {y.type()}")
y = y.to(DEVICE)
print(f"y after calling to() | device: {y.device} | dtype: {y.type()}")
###Output
_____no_output_____
###Markdown
**Operations between cpu tensors and cuda tensors**Note that the type of the tensor changed after calling ```.to()```. What happens if we try to perform operations on tensors that live on different devices?
###Code
x = torch.tensor([0, 1, 2], device=DEVICE)
y = torch.tensor([3, 4, 5], device="cpu")
# Uncomment the following line and run this cell
# z = x + y
###Output
_____no_output_____
###Markdown
We cannot combine cuda tensors and cpu tensors in this fashion. If we want to compute an operation that combines tensors on different devices, we need to move them first! We can use the `.to()` method as before, or the `.cpu()` and `.cuda()` methods. Note that using the `.cuda()` method will throw an error if CUDA is not enabled on your machine. Generally in this course, all deep learning is done on the GPU and most other computation is done on the CPU, so sometimes we have to pass things back and forth; you'll see us do this below.
###Code
x = torch.tensor([0, 1, 2], device=DEVICE)
y = torch.tensor([3, 4, 5], device="cpu")
z = torch.tensor([6, 7, 8], device=DEVICE)
# moving to cpu
x = x.to("cpu") # alternatively, you can use x = x.cpu()
print(x + y)
# moving to gpu
y = y.to(DEVICE) # alternatively, you can use y = y.cuda()
print(y + z)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.4: Just how much faster are GPUs?Below is a simple function `simpleFun`. Complete this function, such that it performs the operations:- elementwise multiplication- matrix multiplication. The operations should be able to be performed on either the CPU or GPU, as specified by the parameter `device`. We will use the helper function `timeFun(f, dim, iterations, device)`.
###Code
dim = 10000
iterations = 1
def simpleFun(dim, device):
"""
Args:
dim: integer
device: "cpu" or "cuda"
Returns:
Nothing.
"""
###############################################
## TODO for students: recreate the function, but
## ensure all computations happens on the `device`
raise NotImplementedError("Student exercise: fill in the missing code to create the tensors")
###############################################
# 2D tensor filled with uniform random numbers in [0,1), dim x dim
x = ...
# 2D tensor filled with uniform random numbers in [0,1), dim x dim
y = ...
# 2D tensor filled with the scalar value 2, dim x dim
z = ...
# elementwise multiplication of x and y
a = ...
# matrix multiplication of x and y
b = ...
del x
del y
del z
del a
del b
## TODO: Implement the function above and uncomment the following lines to test your code
# timeFun(f=simpleFun, dim=dim, iterations=iterations)
# timeFun(f=simpleFun, dim=dim, iterations=iterations, device=DEVICE)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_232a94a4.py) Sample output (depends on your hardware)
```
time taken for 1 iterations of simpleFun(10000, cpu): 23.74070
time taken for 1 iterations of simpleFun(10000, cuda): 0.87535
```
**Discuss!**Try reducing the dimensions of the tensors and increasing the iterations. You can get to a point where the CPU-only version is faster than the GPU version. Why might this be? Section 2.5: Datasets and Dataloaders
###Code
# @title Video 7: Getting Data
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1744y127SQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"LSkjPM1gFu0", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 7: Getting Data')
display(out)
###Output
_____no_output_____
###Markdown
When training neural network models you will be working with large amounts of data. Fortunately, PyTorch offers some great tools that help you organize and manipulate your data samples.
###Code
# Import dataset and dataloaders related packages
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
from torchvision.transforms import Compose, Grayscale
###Output
_____no_output_____
###Markdown
**Datasets**The `torchvision` package gives you easy access to many of the publicly available datasets. Let's load the [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset, which contains color images of 10 different classes, like vehicles and animals. Creating an object of type `datasets.CIFAR10` will automatically download and load all images from the dataset. The resulting data structure can be treated as a list containing data samples and their corresponding labels.
###Code
# Download and load the images from the CIFAR10 dataset
cifar10_data = datasets.CIFAR10(
root="data", # path where the images will be stored
download=True, # all images should be downloaded
transform=ToTensor() # transform the images to tensors
)
# Print the number of samples in the loaded dataset
print(f"Number of samples: {len(cifar10_data)}")
print(f"Class names: {cifar10_data.classes}")
###Output
_____no_output_____
###Markdown
We have 50000 samples loaded. Now let's take a look at one of them in detail. Each sample consists of an image and its corresponding label.
###Code
# Choose a random sample
random.seed(2021)
image, label = cifar10_data[random.randint(0, len(cifar10_data))]
print(f"Label: {cifar10_data.classes[label]}")
print(f"Image size: {image.shape}")
###Output
_____no_output_____
###Markdown
Color images are modeled as 3 dimensional tensors. The first dimension corresponds to the channels (C) of the image (in this case we have RGB images). The second dimension is the height (H) of the image and the third is the width (W). We can denote this image format as C × H × W. Coding Exercise 2.5: Display an image from the datasetLet's try to display the image using `matplotlib`. The code below will not work, because `imshow` expects to have the image in a different format - $H \times W \times C$. You need to reorder the dimensions of the tensor using the `permute` method of the tensor. PyTorch `torch.permute(*dims)` rearranges the original tensor according to the desired ordering and returns a new multidimensional rotated tensor. The size of the returned tensor remains the same as that of the original.**Code hint:**
```python
# create a tensor of size 2 x 4
input_var = torch.randn(2, 4)

# print its size and the tensor
print(input_var.size())
print(input_var)

# dimensions permuted
input_var = input_var.permute(1, 0)

# print its size and the permuted tensor
print(input_var.size())
print(input_var)
```
###Code
# TODO: Uncomment the following line to see the error that arises from the current image format
# plt.imshow(image)
# TODO: Comment the above line and fix this code by reordering the tensor dimensions
# plt.imshow(image.permute(...))
# plt.show()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_b04bd357.py)*Example output:*
###Code
#@title Video 8: Train and Test
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1rV411H7s5", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JokSIuPs-ys", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 8: Train and Test')
display(out)
###Output
_____no_output_____
###Markdown
**Training and Test Datasets**When loading a dataset, you can specify if you want to load the training or the test samples using the `train` argument. We can load the training and test datasets separately. For simplicity, today we will not use both datasets separately, but this topic will be addressed in the coming days.
###Code
# Load the training samples
training_data = datasets.CIFAR10(
root="data",
train=True,
download=True,
transform=ToTensor()
)
# Load the test samples
test_data = datasets.CIFAR10(
root="data",
train=False,
download=True,
transform=ToTensor()
)
# @title Video 9: Data Augmentation - Transformations
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV19B4y1N77t", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"sjegA9OBUPw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 9: Data Augmentation - Transformations')
display(out)
###Output
_____no_output_____
###Markdown
**DataLoader**Another important concept is the `DataLoader`. It is a wrapper around the `Dataset` that splits it into minibatches (important for training the neural network) and makes the data iterable. The `shuffle` argument is used to shuffle the order of the samples across the minibatches.
###Code
# Create dataloaders with
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
*Reproducibility:* The DataLoader will reseed workers following the Randomness in multi-process data loading algorithm. Use `worker_init_fn()` and a `generator` to preserve reproducibility:
```python
def seed_worker(worker_id):
    worker_seed = torch.initial_seed() % 2**32
    numpy.random.seed(worker_seed)
    random.seed(worker_seed)

g_seed = torch.Generator()
g_seed.manual_seed(my_seed)

DataLoader(
    train_dataset,
    batch_size=batch_size,
    num_workers=num_workers,
    worker_init_fn=seed_worker,
    generator=g_seed
    )
```
**Note:** For the `seed_worker` to have an effect, `num_workers` should be 2 or more. We can now query the next batch from the data loader and inspect it. For this we need to convert the dataloader object to a Python iterator using the function `iter` and then we can query the next batch using the function `next`. We can now see that we have a 4D tensor. This is because we have 64 images in the batch ($B$) and each image has 3 dimensions: channels ($C$), height ($H$) and width ($W$). So, the size of the 4D tensor is $B \times C \times H \times W$.
###Code
# Load the next batch
batch_images, batch_labels = next(iter(train_dataloader))
print('Batch size:', batch_images.shape)
# Display the first image from the batch
plt.imshow(batch_images[0].permute(1, 2, 0))
plt.show()
###Output
_____no_output_____
###Markdown
**Transformations**Another useful feature when loading a dataset is applying transformations on the data - color conversions, normalization, cropping, rotation, etc. There are many predefined transformations in the `torchvision.transforms` package and you can also combine them using the `Compose` transform. Check out the [PyTorch documentation](https://pytorch.org/vision/stable/transforms.html) for details. Coding Exercise 2.6: Load the CIFAR10 dataset as grayscale imagesThe goal of this exercise is to load the images from the CIFAR10 dataset as grayscale images. Note that we rerun the `set_seed` function to ensure reproducibility.
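Before attempting the exercise, here is a minimal sketch of combining transformations with `Compose`. The specific transforms chosen here (a random horizontal flip followed by conversion to a tensor) are just an example and deliberately not the ones needed for the exercise.

```python
from torchvision import datasets
from torchvision.transforms import Compose, RandomHorizontalFlip, ToTensor

# Chain several transforms: they are applied in order to every loaded image
augment = Compose([
    RandomHorizontalFlip(p=0.5),  # randomly flip images left-right
    ToTensor(),                   # convert the PIL image to a C x H x W tensor
])

augmented_data = datasets.CIFAR10(root="data", download=True, transform=augment)
image, label = augmented_data[0]
print(image.shape)  # torch.Size([3, 32, 32])
```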
###Code
def my_data_load():
###############################################
## TODO for students: load the CIFAR10 data,
## but as grayscale images and not as RGB colored.
raise NotImplementedError("Student exercise: fill in the missing code to load the data")
###############################################
## TODO Load the CIFAR10 data using a transform that converts the images to grayscale tensors
data = datasets.CIFAR10(...,
transform=...)
# Display a random grayscale image
image, label = data[random.randint(0, len(data))]
plt.imshow(image.squeeze(), cmap="gray")
plt.show()
return data
set_seed(seed=2021)
## After implementing the above code, uncomment the following lines to test your code
# data = my_data_load()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_6052d728.py)*Example output:* --- Section 3: Neural Networks*Time estimate: ~1 hour 30 mins (excluding video)* Now it's time for you to create your first neural network using PyTorch. This section will walk you through the process of:- Creating a simple neural network model- Training the network- Visualizing the results of the network- Tweaking the network
###Code
# @title Video 10: CSV Files
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1xy4y1T7kv", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JrC_UAJWYKU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 10: CSV Files')
display(out)
###Output
_____no_output_____
###Markdown
Section 3.1: Data LoadingFirst we need some sample data to train our network on. You can use the function below to generate an example dataset consisting of 2D points along two interleaving half circles. The data will be stored in a file called `sample_data.csv`. You can inspect the file directly in Colab by going to Files on the left side and opening the CSV file.
###Code
# @title Generate sample data
# @markdown we used `scikit-learn` module
from sklearn.datasets import make_moons
# Create a dataset of 256 points with a little noise
X, y = make_moons(256, noise=0.1)
# Store the data as a Pandas data frame and save it to a CSV file
df = pd.DataFrame(dict(x0=X[:,0], x1=X[:,1], y=y))
df.to_csv('sample_data.csv')
###Output
_____no_output_____
###Markdown
Now we can load the data from the CSV file using the Pandas library. Pandas provides many functions for reading files in various formats. When loading data from a CSV file, we can reference the columns directly by their names.
###Code
# Load the data from the CSV file in a Pandas DataFrame
data = pd.read_csv("sample_data.csv")
# Create a 2D numpy array from the x0 and x1 columns
X_orig = data[["x0", "x1"]].to_numpy()
# Create a 1D numpy array from the y column
y_orig = data["y"].to_numpy()
# Print the sizes of the generated 2D points X and the corresponding labels Y
print(f"Size X:{X_orig.shape}")
print(f"Size y:{y_orig.shape}")
# Visualize the dataset. The color of the points is determined by the labels `y_orig`.
plt.scatter(X_orig[:, 0], X_orig[:, 1], s=40, c=y_orig)
plt.show()
###Output
_____no_output_____
###Markdown
**Prepare Data for PyTorch**Now let's prepare the data in a format suitable for PyTorch - convert everything into tensors.
###Code
# Initialize the device variable
DEVICE = set_device()
# Convert the 2D points to a float32 tensor
X = torch.tensor(X_orig, dtype=torch.float32)
# Upload the tensor to the device
X = X.to(DEVICE)
print(f"Size X:{X.shape}")
# Convert the labels to a long integer tensor
y = torch.from_numpy(y_orig).type(torch.LongTensor)
# Upload the tensor to the device
y = y.to(DEVICE)
print(f"Size y:{y.shape}")
###Output
_____no_output_____
###Markdown
Section 3.2: Create a Simple Neural Network
###Code
# @title Video 11: Generating the Neural Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1fK4y1M74a", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"PwSzRohUvck", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 11: Generating the Neural Network')
display(out)
###Output
_____no_output_____
###Markdown
For this example we want to have a simple neural network consisting of 3 layers:- 1 input layer of size 2 (our points have 2 coordinates)- 1 hidden layer of size 16 (you can play with different numbers here)- 1 output layer of size 2 (we want to have the scores for the two classes). During the course you will deal with different kinds of neural networks. On Day 2 we will focus on linear networks, but you will work with some more complicated architectures in the next days. The example here is meant to demonstrate the process of creating and training a neural network end-to-end.**Programming the Network**PyTorch provides a base class for all neural network modules called [`nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html). You need to inherit from `nn.Module` and implement some important methods:`__init__`In the `__init__` method you need to define the structure of your network. Here you will specify what layers the network will consist of, what activation functions will be used, etc.`forward`All neural network modules need to implement the `forward` method. It specifies the computations the network needs to do when data is passed through it.`predict`This is not an obligatory method of a neural network module, but it is a good practice if you want to quickly get the most likely label from the network. It calls the `forward` method and chooses the label with the highest score.`train`This is also not an obligatory method, but it is a good practice to have. The method will be used to train the network parameters and will be implemented later in the notebook.> Note that you can use the `__call__` method of a module directly and it will invoke the `forward` method: `net()` does the same as `net.forward()`.
###Code
# Inherit from nn.Module - the base class for neural network modules provided by Pytorch
class NaiveNet(nn.Module):
# Define the structure of your network
def __init__(self):
super(NaiveNet, self).__init__()
# The network is defined as a sequence of operations
self.layers = nn.Sequential(
nn.Linear(2, 16), # Transformation from the input to the hidden layer
nn.ReLU(), # Activation function (ReLU) is a widely used non-linearity, partly because it is cheap to compute.
# It returns 0 for any negative input and returns any positive input unchanged.
nn.Linear(16, 2), # Transformation from the hidden to the output layer
)
# Specify the computations performed on the data
def forward(self, x):
# Pass the data through the layers
return self.layers(x)
# Choose the most likely label predicted by the network
def predict(self, x):
# Pass the data through the network
output = self.forward(x)
# Choose the label with the highest score
return torch.argmax(output, 1)
# Train the neural network (will be implemented later)
def train(self, X, y):
pass
###Output
_____no_output_____
###Markdown
**Check that your network works**Create an instance of your model and visualize it
###Code
# Create new NaiveNet and transfer it to the device
model = NaiveNet().to(DEVICE)
# Print the structure of the network
print(model)
###Output
_____no_output_____
###Markdown
Coding Exercise 3.2: Classify some samplesNow let's pass some of the points of our dataset through the network and see if it works. You should not expect the network to actually classify the points correctly, because it has not been trained yet. The goal here is just to get some experience with the data structures that are passed to the forward and predict methods and their results.
###Code
## Get the samples
# X_samples = ...
# print("Sample input:\n", X_samples)
## Do a forward pass of the network
# output = ...
# print("\nNetwork output:\n", output)
## Predict the label of each point
# y_predicted = ...
# print("\nPredicted labels:\n", y_predicted)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_af8ae0ff.py) ```Sample input: tensor([[ 0.9066, 0.5052], [-0.2024, 1.1226], [ 1.0685, 0.2809], [ 0.6720, 0.5097], [ 0.8548, 0.5122]], device='cuda:0')Network output: tensor([[ 0.1543, -0.8018], [ 2.2077, -2.9859], [-0.5745, -0.0195], [ 0.1924, -0.8367], [ 0.1818, -0.8301]], device='cuda:0', grad_fn=)Predicted labels: tensor([0, 0, 1, 0, 0], device='cuda:0')``` Section 3.3: Train Your Neural Network
###Code
# @title Video 12: Train the Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1v54y1n7CS", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"4MIqnE4XPaA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 12: Train the Network')
display(out)
###Output
_____no_output_____
###Markdown
Now it is time to train your network on your dataset. Don't worry if you don't fully understand everything yet - we will cover training in much more detail in the next days. For now, the goal is just to see your network in action!You will usually implement the `train` method directly when implementing your class `NaiveNet`. Here, we will implement it as a function outside of the class in order to have it in a separate cell.
###Code
# @title Helper function to plot the decision boundary
# Code adapted from this notebook: https://jonchar.net/notebooks/Artificial-Neural-Network-with-Keras/
from pathlib import Path
def plot_decision_boundary(model, X, y, device):
# Transfer the data to the CPU
X = X.cpu().numpy()
y = y.cpu().numpy()
# Check if the frames folder exists and create it if needed
frames_path = Path("frames")
if not frames_path.exists():
frames_path.mkdir()
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Predict the function value for the whole grid
grid_points = np.c_[xx.ravel(), yy.ravel()]
grid_points = torch.from_numpy(grid_points).type(torch.FloatTensor)
Z = model.predict(grid_points.to(device)).cpu().numpy()
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.binary)
# Implement the train function given a training dataset X and corresponding labels y
def train(model, X, y):
# The Cross Entropy Loss is suitable for classification problems
loss_function = nn.CrossEntropyLoss()
# Create an optimizer (Stochastic Gradient Descent) that will be used to train the network
learning_rate = 1e-2
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
# Number of epochs
epochs = 15000
# List of losses for visualization
losses = []
for i in range(epochs):
# Pass the data through the network and compute the loss
# We'll use the whole dataset during the training instead of using batches
# in order to keep the code simple for now.
y_logits = model.forward(X)
loss = loss_function(y_logits, y)
# Clear the previous gradients and compute the new ones
optimizer.zero_grad()
loss.backward()
# Adapt the weights of the network
optimizer.step()
# Store the loss
losses.append(loss.item())
# Print the results at every 1000th epoch
if i % 1000 == 0:
print(f"Epoch {i} loss is {loss.item()}")
plot_decision_boundary(model, X, y, DEVICE)
plt.savefig('frames/{:05d}.png'.format(i))
return losses
# Create a new network instance and train it
model = NaiveNet().to(DEVICE)
losses = train(model, X, y)
###Output
_____no_output_____
###Markdown
**Plot the loss during training**Plot the loss during the training to see how it reduces and converges.
###Code
plt.plot(np.linspace(1, len(losses), len(losses)), losses)
plt.xlabel("Epoch")
plt.ylabel("Loss")
# @title Visualize the training process
# @markdown ### Execute this cell!
!pip install imageio --quiet
!pip install pathlib --quiet
import imageio
from IPython.core.interactiveshell import InteractiveShell
from IPython.display import Image, display
from pathlib import Path
InteractiveShell.ast_node_interactivity = "all"
# Make a list with all images
images = []
for i in range(10):
filename = "frames/0"+str(i)+"000.png"
images.append(imageio.imread(filename))
# Save the gif
imageio.mimsave('frames/movie.gif', images)
gifPath = Path("frames/movie.gif")
with open(gifPath,'rb') as f:
display(Image(data=f.read(), format='png'))
# @title Video 13: Play with it
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Cq4y1W7BH", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"_GGkapdOdSY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 13: Play with it')
display(out)
###Output
_____no_output_____
###Markdown
Exercise 3.3: Tweak your NetworkYou can now play around with the network a little bit to get a feeling for what the different parameters are doing. Here are some ideas for what you could try:- Increase or decrease the number of epochs for training- Increase or decrease the size of the hidden layer- Add one additional hidden layer (a sketch is given below)Can you get the network to better fit the data?
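For example, a minimal sketch of a deeper variant with one extra hidden layer (the layer sizes below are arbitrary choices, not a reference solution):
```python
class DeeperNet(nn.Module):
  def __init__(self):
    super(DeeperNet, self).__init__()
    self.layers = nn.Sequential(
        nn.Linear(2, 32),   # input -> first hidden layer
        nn.ReLU(),
        nn.Linear(32, 32),  # the additional hidden layer
        nn.ReLU(),
        nn.Linear(32, 2),   # hidden -> output scores
    )

  def forward(self, x):
    return self.layers(x)

  def predict(self, x):
    return torch.argmax(self.forward(x), 1)

# deeper_model = DeeperNet().to(DEVICE)
# losses = train(deeper_model, X, y)
```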
###Code
# @title Video 14: XOR Widget
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1mB4y1N7QS", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"oTr1nE2rCWg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 14: XOR Widget')
display(out)
###Output
_____no_output_____
###Markdown
The exclusive OR (XOR) logical operation gives a true (`1`) output when the number of true inputs is odd. That is, a true output results if one, and only one, of the inputs to the gate is true. If both inputs are false (`0`) or both are true (`1`), a false output results. Mathematically speaking, XOR represents the inequality function, i.e., the output is true if the inputs are not alike; otherwise, the output is false.In case of two inputs ($X$ and $Y$) the following truth table applies:\begin{array}{ccc}X & Y & \text{XOR} \\\hline0 & 0 & 0 \\0 & 1 & 1 \\1 & 0 & 1 \\1 & 1 & 0 \\\end{array}Here, with `0` we denote `False`, and with `1` we denote `True` in boolean terms. Interactive Demo 3.3: Solving XORHere we use a famous open-source visualization widget developed by the TensorFlow team, available [here](https://github.com/tensorflow/playground).* Play with the widget and observe that you cannot solve the continuous XOR dataset.* Now add one hidden layer with three units, play with the widget, and set the weights by hand to solve this dataset perfectly.For the second part, you should set the weights by clicking on the connections and either typing the value or using the up and down keys to change it by one increment. You can do the same for the biases by clicking on the tiny square at each neuron's bottom left.Even though there are infinitely many solutions, a neat solution when $f(x)$ is ReLU is: \begin{equation} y = f(x_1)+f(x_2)-f(x_1+x_2)\end{equation}Try to set the weights and biases to implement this function after you have played around enough :)
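A minimal numerical check of this identity on the four discrete XOR inputs, using `torch.relu` as $f$ (just a sanity check, not the widget solution itself):
```python
f = torch.relu
for x1, x2 in [(0., 0.), (0., 1.), (1., 0.), (1., 1.)]:
  x1, x2 = torch.tensor(x1), torch.tensor(x2)
  y = f(x1) + f(x2) - f(x1 + x2)
  print(int(x1), int(x2), '->', int(y))   # reproduces the XOR truth table
```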
###Code
# @markdown ###Play with the parameters to solve XOR
from IPython.display import HTML
HTML('<iframe width="1020" height="660" src="https://playground.arashash.com/#activation=relu&batchSize=10&dataset=xor®Dataset=reg-plane&learningRate=0.03®ularizationRate=0&noise=0&networkShape=&seed=0.91390&showTestData=false&discretize=false&percTrainData=90&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false" allowfullscreen></iframe>')
# @markdown Do you think we can solve the discrete XOR (only 4 possibilities) with only 2 hidden units?
w1_min_xor = 'Select' #@param ['Select', 'Yes', 'No']
if w1_min_xor == 'No':
print("Correct!")
else:
print("How about giving it another try?")
###Output
_____no_output_____
###Markdown
--- Section 4: Ethics And Course Info
###Code
# @title Video 15: Ethics
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Hw41197oB", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Kt6JLi3rUFU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Video 16: Be a group
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1j44y1272h", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Sfp6--d_H1A", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Video 17: Syllabus
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1iB4y1N7uQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"cDvAqG_hAvQ", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Meet our lecturers:Week 1: the building blocks* [Konrad Kording](https://kordinglab.com)* [Andrew Saxe](https://www.saxelab.org/)* [Surya Ganguli](https://ganguli-gang.stanford.edu/)* [Ioannis Mitliagkas](http://mitliagkas.github.io/)* [Lyle Ungar](https://www.cis.upenn.edu/~ungar/)Week 2: making things work* [Alona Fyshe](https://webdocs.cs.ualberta.ca/~alona/)* [Alexander Ecker](https://eckerlab.org/)* [James Evans](https://sociology.uchicago.edu/directory/james-evans)* [He He](https://hhexiy.github.io/)* [Vikash Gilja](https://tnel.ucsd.edu/bio) and [Akash Srivastava](https://akashgit.github.io/)Week 3: more magic* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)* [Jane Wang](http://www.janexwang.com/) and [Feryal Behbahani](https://feryal.github.io/)* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)* [Josh Vogelstein](https://jovo.me/) and [Vincenzo Lomonaco](https://www.vincenzolomonaco.com/)Now, go to the [visualization of ICLR papers](https://iclr.cc/virtual/2021/paper_vis.html). Read a few abstracts. Look at the various clusters. Where do you see yourself in this map? --- Submit to Airtable
###Code
# @title Video 18: Submission info
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1e44y127ti", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JwTn7ej2dq8", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
This is Darryl, the Deep Learning Dapper Lion, and he's here to teach you about content submission to Airtable. At the end of each tutorial there will be an Airtable Submission Cell. Run the cell to generate the Airtable submission button and click on it to submit your information to Airtable. If it is the last tutorial of the day, your button will look like this and take you to the end-of-day survey: otherwise it will look like this: It is critical that you push the submit button for every tutorial you run. Even if you don't finish the tutorial, still submit!Submitting is the only way we can verify that you attempted each tutorial, which is critical for us to be able to track your progress. TL;DR: Basic tutorial workflow1. Work through the tutorial, answering Think! questions and code exercises2. At the end of each tutorial (even if it is incomplete), run the Airtable submission code cell3. Push the submission button4. If it is the last tutorial of the day, the submission button will also take you to the end-of-day survey on a new page. Complete that and submit it. Submission FAQs: 1. What if I want to change my answers to previous discussion questions? > You are free to change and resubmit any of the answers and Think! questions as many times as you like. However, please only run the Airtable submission code and click on the link once you are ready to submit.2. Okay, but what if I submitted my Airtable anyway and really want to resubmit?> After making changes, you can re-run the Airtable submission code cell. This will result in a second submission from you for the data. This will make Darryl sad, as it will be more work for him to clean up the data later. 3. HELP! I accidentally ran the code to generate the Airtable submission button before I was ready to submit! What do I do?> If you run the code to generate the link, anything that happens afterwards will not be captured. Complete the tutorial and make sure to re-run the Airtable submission cell when you are finished, before pressing the submission button. 4. What if I want to work on this on my own later, should I wait to submit until I'm finished?> Please submit wherever you are at the end of the day. It's great that you want to keep working on this, but it's important for us to see the places where we tried things that didn't quite work out, so we can fix them for next year. Finally, we try to keep the Airtable code as hidden as possible, but if you ever see any calls to `atform`, such as `atform.add_event()`, in the coding exercises, just know that they are for saving Airtable information only. They will not affect the code that is being run around them in any way, so please do not modify, comment out, or worry about any of those lines of code.Now, let's try submitting today's course to Airtable by running the next cell and clicking the button when it appears.
###Code
# @title Airtable Submission Link
from IPython import display as IPyDisplay
IPyDisplay.HTML(
f"""
<div>
<a href= "{atform.url()}" target="_blank">
<img src="https://github.com/NeuromatchAcademy/course-content-dl/blob/main/tutorials/static/SurveyButton.png?raw=1"
alt="button link to survey" style="width:410px"></a>
</div>""" )
###Output
_____no_output_____
###Markdown
--- Bonus - 60 years of Machine Learning Research in one Plotby [Hendrik Strobelt](http://hendrik.strobelt.com) (MIT-IBM Watson AI Lab) with support from Benjamin Hoover.In this notebook we visualize a subset* of 3,300 articles retrieved from the AllenAI [S2ORC dataset](https://github.com/allenai/s2orc). We represent each paper by a position that is the output of a dimensionality reduction method applied to a vector representation of that paper. The vector representation is the output of a neural network.*The selection is heavily biased by the keywords and methodology we used to filter. Please see the details section to learn about what we did.
###Code
# @title Import `altair` and load the data
!pip install altair vega_datasets --quiet
import requests
import altair as alt # altair is defining data visualizations
# Source data files
# Position data file maps ID to x,y positions
# original link: http://gltr.io/temp/ml_regexv1_cs_ma_citation+_99perc.pos_umap_cosine_100_d0.1.json
POS_FILE = 'https://osf.io/qyrfn/download'
# original link: http://gltr.io/temp/ml_regexv1_cs_ma_citation+_99perc_clean.csv
# Metadata file maps ID to title, abstract, author,....
META_FILE = 'https://osf.io/vfdu6/download'
# data loading and wrangling
def load_data():
positions = pd.read_json(POS_FILE)
positions[['x', 'y']] = positions['pos'].to_list()
meta = pd.read_csv(META_FILE)
return positions.merge(meta, left_on='id', right_on='paper_id')
# load data
data = load_data()
# @title Define Visualization using Altair
YEAR_PERIOD = "quinquennial" # @param
selection = alt.selection_multi(fields=[YEAR_PERIOD], bind='legend')
data[YEAR_PERIOD] = (data["year"] / 5.0).apply(np.floor) * 5
chart = alt.Chart(data[["x", "y", "authors", "title", YEAR_PERIOD, "citation_count"]], width=800,
height=800).mark_circle(radius=2, opacity=0.2).encode(
alt.Color(YEAR_PERIOD+':O',
scale=alt.Scale(scheme='viridis', reverse=False, clamp=True, domain=list(range(1955,2020,5))),
# legend=alt.Legend(title='Total Records')
),
alt.Size('citation_count',
scale=alt.Scale(type="pow", exponent=1, range=[15, 300])
),
alt.X('x:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
alt.Y('y:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
tooltip=['title', 'authors'],
# size='citation_count',
# color="decade:O",
opacity=alt.condition(selection, alt.value(.8), alt.value(0.2)),
).add_selection(
selection
).interactive()
###Output
_____no_output_____
###Markdown
Let's look at the visualization. Each dot represents one paper. Dots that are close together mean that the respective papers are more closely related than distant ones. The color indicates the 5-year period in which the paper was published. The dot size indicates the citation count (within the S2ORC corpus) as of July 2020. The view is **interactive** and allows for three main interactions. Try them and play around.1. Hover over a dot to see a tooltip (title, author)2. Select a year in the legend (right) to filter dots3. Zoom in/out with scroll -- double click resets the view
###Code
chart
###Output
_____no_output_____
###Markdown
QuestionsBy playing around, can you find some answers to the following questions?1. Can you find topical clusters? What cluster might occur because of a filtering error?2. Can you see a temporal trend in the data and clusters?3. Can you determine when deep learning methods started booming?4. Can you find the key papers that were written before the DL "winter" that define milestones for a cluster? (tip: look for large dots of a different color) MethodsHere is what we did:1. Filtering of all papers that fulfilled the criteria: - are categorized as `Computer Science` or `Mathematics` - one of the following keywords appears in the title or abstract: `"machine learning|artificial intelligence|neural network|(machine|computer) vision|perceptron|network architecture| RNN | CNN | LSTM | BLEU | MNIST | CIFAR |reinforcement learning|gradient descent| Imagenet "`2. Per year, remove all papers that are below the 99th percentile of citation count in that year (a small pandas sketch of this step is shown below)3. Embed each paper by feeding abstract+title to the SPECTER model4. Project based on the embedding using UMAP5. Visualize using Altair Find Authors
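A minimal pandas sketch of step 2 above, the per-year 99th-percentile filter (the toy dataframe and column names are assumptions for illustration, not the exact pipeline code):
```python
import numpy as np
import pandas as pd

# toy stand-in for the paper metadata; the real pipeline used the S2ORC dump
papers = pd.DataFrame({
    'year': np.repeat([2010, 2015], 100),
    'citation_count': np.random.randint(0, 1000, size=200),
})

# per year, keep only papers at or above the 99th percentile of citation count
threshold = papers.groupby('year')['citation_count'].transform(lambda s: s.quantile(0.99))
papers_top = papers[papers['citation_count'] >= threshold]
print(papers_top.groupby('year').size())
```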
###Code
# @title Edit the `AUTHOR_FILTER` variable to full text search for authors.
AUTHOR_FILTER = "Rush " # @param space at the end means "word border"
### Don't ignore case when searching...
FLAGS = 0
### uncomment to ignore case
# FLAGS = re.IGNORECASE
## --- FILTER CODE.. make it your own ---
import re
data['issel'] = data['authors'].str.contains(AUTHOR_FILTER, na=False, flags=FLAGS, )
if data['issel'].mean()<0.0000000001:
print('No match found')
## --- FROM HERE ON VIS CODE ---
alt.Chart(data[["x", "y", "authors", "title", YEAR_PERIOD, "citation_count", "issel"]], width=800,
height=800) \
.mark_circle(stroke="black", strokeOpacity=1).encode(
alt.Color(YEAR_PERIOD+':O',
scale=alt.Scale(scheme='viridis', reverse=False),
# legend=alt.Legend(title='Total Records')
),
alt.Size('citation_count',
scale=alt.Scale(type="pow", exponent=1, range=[15, 300])
),
alt.StrokeWidth('issel:Q', scale=alt.Scale(type="linear", domain=[0,1], range=[0, 2]), legend=None),
alt.Opacity('issel:Q', scale=alt.Scale(type="linear", domain=[0,1], range=[.2, 1]), legend=None),
alt.X('x:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
alt.Y('y:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
tooltip=['title', 'authors'],
).interactive()
###Output
_____no_output_____
###Markdown
Tutorial 1: PyTorch**Week 1, Day 1: Basics and PyTorch****By Neuromatch Academy**__Content creators:__ Shubh Pachchigar, Vladimir Haltakov, Matthew Sargent, Konrad Kording__Content reviewers:__ Kelson Shilling-Scrivo, Deepak Raya, Siwei Bai__Content editors:__ Anoop Kulkarni, Spiros Chavlis__Production editors:__ Arush Tagade, Spiros Chavlis **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial ObjectivesWe have a few specific objectives for this tutorial:* Learn about PyTorch and tensors* Tensor Manipulations* Data Loading* GPUs and Cuda Tensors* Train NaiveNet* Get to know your pod* Start thinking about the course as a whole
###Code
# @title Tutorial slides
# @markdown These are the slides for the videos in this tutorial today
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/wcjrv/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
--- Setup Throughout your Neuromatch tutorials, most (probably all!) notebooks contain setup cells. These cells import the required Python packages (e.g., PyTorch, NumPy), set global or environment variables, and load in helper functions for things like plotting. In some tutorials, you will notice that we install some dependencies even if they are preinstalled on Google Colab or Kaggle. This happens because we have added automation to our repository through [GitHub Actions](https://docs.github.com/en/actions/learn-github-actions/introduction-to-github-actions).Be sure to run all of the cells in the setup section. Feel free to expand them and have a look at what you are loading in, but you should be able to fulfill the learning objectives of every tutorial without having to look at these cells.If you start building your own projects on this code base, we highly recommend looking at these cells in more detail.
###Code
# @title Install dependencies
!pip install pandas --quiet
!pip install git+https://github.com/NeuromatchAcademy/evaltools --quiet
# Imports
import time
import torch
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from torch import nn
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
from evaltools.airtable import AirtableForm
# @title Figure Settings
import ipywidgets as widgets
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/content-creation/main/nma.mplstyle")
# @title Helper Functions
atform = AirtableForm('appn7VdPRseSoMXEG','W1D1_T1','https://portal.neuromatchacademy.org/api/redirect/to/97e94a29-0b3a-4e16-9a8d-f6838a5bd83d')
def checkExercise1(A, B, C, D):
"""
Helper function for checking exercise.
Args:
A: torch.Tensor
B: torch.Tensor
C: torch.Tensor
D: torch.Tensor
Returns:
Nothing.
"""
errors = []
# TODO better errors and error handling
if not torch.equal(A.to(int),torch.ones(20, 21).to(int)):
errors.append(f"Got: {A} \n Expected: {torch.ones(20, 21)} (shape: {torch.ones(20, 21).shape})")
if not np.array_equal( B.numpy(),np.vander([1, 2, 3], 4)):
errors.append("B is not a tensor containing the elements of Z ")
if C.shape != (20, 21):
errors.append("C is not the correct shape ")
if not torch.equal(D, torch.arange(4, 41, step=2)):
errors.append("D does not contain the correct elements")
if errors == []:
print("All correct!")
else:
[print(e) for e in errors]
def timeFun(f, dim, iterations, device='cpu'):
iterations = iterations
t_total = 0
for _ in range(iterations):
start = time.time()
f(dim, device)
end = time.time()
t_total += end - start
print(f"time taken for {iterations} iterations of {f.__name__}({dim}): {t_total:.5f}")
###Output
_____no_output_____
###Markdown
**Important note: Google Colab users***Scratch Code Cells*If you want to quickly try something out or take a look at the data, you can use scratch code cells. They allow you to run Python code, but will not mess up the structure of your notebook.To open a new scratch cell, go to *Insert* → *Scratch code cell*. Section 1: Welcome to the Neuromatch Deep Learning course
###Code
# @title Video 1: Welcome and History
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Av411n7oL", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"ca21SNqt78I", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing
atform.add_event('Video 1: Welcome and History')
display(out)
###Output
_____no_output_____
###Markdown
This will be an intensive 3-week adventure. We will all learn Deep Learning. In a group. Groups need standards. Read our [Code of Conduct](https://docs.google.com/document/d/1eHKIkaNbAlbx_92tLQelXnicKXEcvFzlyzzeWjEtifM/edit?usp=sharing).
###Code
# @title Video 2: Why DL is cool
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1gf4y1j7UZ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"l-K6495BN-4", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 2: Why DL is cool')
display(out)
###Output
_____no_output_____
###Markdown
**Describe what you hope to get out of this course in about 100 words.** --- Section 2: The Basics of PyTorch PyTorch is a Python-based scientific computing package targeted at two sets of audiences:- A replacement for NumPy to use the power of GPUs- A deep learning platform that provides significant flexibility and speedAt its core, PyTorch provides a few key features:- A multidimensional [Tensor](https://pytorch.org/docs/stable/tensors.html) object, similar to [NumPy Array](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html) but with GPU acceleration.- An optimized **autograd** engine for automatically computing derivatives.- A clean, modular API for building and deploying **deep learning models**.You can find more information about PyTorch in the appendix. Section 2.1: Creating Tensors
###Code
# @title Video 3: Making Tensors
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Rw411d7Uy", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"jGKd_4tPGrw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 3: Making Tensors')
display(out)
###Output
_____no_output_____
###Markdown
There are various ways of creating tensors, and when doing any real deep learning project we will usually have to do so. **Construct tensors directly:**---
###Code
# we can construct a tensor directly from some common python iterables,
# such as list and tuple; nested iterables can also be handled as long as
# the dimensions make sense
# tensor from a list
a = torch.tensor([0, 1, 2])
#tensor from a tuple of tuples
b = ((1.0, 1.1), (1.2, 1.3))
b = torch.tensor(b)
# tensor from a numpy array
c = np.ones([2, 3])
c = torch.tensor(c)
print(f"Tensor a: {a}")
print(f"Tensor b: {b}")
print(f"Tensor c: {c}")
###Output
_____no_output_____
###Markdown
**Some common tensor constructors:**---
###Code
# the numerical arguments we pass to these constructors
# determine the shape of the output tensor
x = torch.ones(5, 3)
y = torch.zeros(2)
z = torch.empty(1, 1, 5)
print(f"Tensor x: {x}")
print(f"Tensor y: {y}")
print(f"Tensor z: {z}")
###Output
_____no_output_____
###Markdown
Notice that ```.empty()``` does not return zeros, but seemingly random small numbers. Unlike ```.zeros()```, which initialises the elements of the tensor with zeros, ```.empty()``` just allocates the memory. It is hence a bit faster if you are looking to just create a tensor. **Creating random tensors and tensors like other tensors:**---
###Code
# there are also constructors for random numbers
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
# there are also constructors that allow us to construct
# a tensor according to the above constructors, but with
# dimensions equal to another tensor
c = torch.zeros_like(a)
d = torch.rand_like(c)
print(f"Tensor a: {a}")
print(f"Tensor b: {b}")
print(f"Tensor c: {c}")
print(f"Tensor d: {d}")
###Output
_____no_output_____
###Markdown
*Reproducibility*: - PyTorch random number generator: You can use `torch.manual_seed()` to seed the RNG for all devices (both CPU and CUDA)```pythonimport torchtorch.manual_seed(0)```- For custom operators, you might need to set python seed as well:```pythonimport randomrandom.seed(0)```- Random number generators in other libraries```pythonimport numpy as npnp.random.seed(0)``` Here, we define for you a function called `set_seed` that does the job for you!
###Code
def set_seed(seed=None, seed_torch=True):
"""
Function that controls randomness. NumPy and random modules must be imported.
Args:
seed : Integer
A non-negative integer that defines the random state. Default is `None`.
seed_torch : Boolean
If `True` sets the random seed for pytorch tensors, so pytorch module
must be imported. Default is `True`.
Returns:
Nothing.
"""
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
###Output
_____no_output_____
###Markdown
Now, let's use the `set_seed` function in the previous example. Execute the cell multiple times to verify that the numbers printed are always the same.
###Code
def simplefun(seed=True, my_seed=None):
if seed:
set_seed(seed=my_seed)
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
print("Tensor a: ", a)
print("Tensor b: ", b)
simplefun(seed=True, my_seed=0) # Turn `seed` to `False` or change `my_seed`
###Output
_____no_output_____
###Markdown
**Numpy-like number ranges:**---The ```.arange()``` and ```.linspace()``` behave how you would expect them to if you are familiar with numpy.
###Code
a = torch.arange(0, 10, step=1)
b = np.arange(0, 10, step=1)
c = torch.linspace(0, 5, steps=11)
d = np.linspace(0, 5, num=11)
print(f"Tensor a: {a}\n")
print(f"Numpy array b: {b}\n")
print(f"Tensor c: {c}\n")
print(f"Numpy array d: {d}\n")
###Output
_____no_output_____
###Markdown
Coding Exercise 2.1: Creating TensorsBelow you will find some incomplete code. Fill in the missing code to construct the specified tensors.We want the tensors: $A:$ 20 by 21 tensor consisting of ones$B:$ a tensor with elements equal to the elements of numpy array $Z$$C:$ a tensor with the same number of elements as $A$ but with values $\sim U(0,1)$$D:$ a 1D tensor containing the even numbers between 4 and 40 inclusive.
###Code
def tensor_creation(Z):
"""A function that creates various tensors.
Args:
Z (numpy.ndarray): An array of shape
Returns:
A : 20 by 21 tensor consisting of ones
B : a tensor with elements equal to the elements of numpy array Z
C : a tensor with the same number of elements as A but with values ∼U(0,1)
D : a 1D tensor containing the even numbers between 4 and 40 inclusive.
"""
#################################################
## TODO for students: fill in the missing code
## and create the required tensors
raise NotImplementedError("Student exercise: fill in the missing code to create the tensors")
#################################################
A = ...
B = ...
C = ...
D = ...
return A, B, C, D
# add timing to airtable
atform.add_event('Coding Exercise 2.1: Creating Tensors')
# numpy array to copy later
Z = np.vander([1, 2, 3], 4)
# Uncomment below to check your function!
# A, B, C, D = tensor_creation(Z)
# checkExercise1(A, B, C, D)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_ad4f6c0f.py) ```All correct!``` Section 2.2: Operations in PyTorch**Tensor-Tensor operations**We can perform operations on tensors using methods under ```torch.```
###Code
# @title Video 4: Tensor Operators
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1G44y127As", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"R1R8VoYXBVA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 4: Tensor Operators')
display(out)
###Output
_____no_output_____
###Markdown
**Tensor-Tensor operations**We can perform operations on tensors using methods under ```torch.```
###Code
a = torch.ones(5, 3)
b = torch.rand(5, 3)
c = torch.empty(5, 3)
d = torch.empty(5, 3)
# this only works if c and d already exist
torch.add(a, b, out=c)
#Pointwise Multiplication of a and b
torch.multiply(a, b, out=d)
print(c)
print(d)
###Output
_____no_output_____
###Markdown
However, in PyTorch most common Python operators are overridden.The common standard arithmetic operators (+, -, *, /, and **) have all been lifted to elementwise operations
###Code
x = torch.tensor([1, 2, 4, 8])
y = torch.tensor([1, 2, 3, 4])
x + y, x - y, x * y, x / y, x**y # The ** operator is exponentiation
###Output
_____no_output_____
###Markdown
**Tensor Methods** Tensors also have a number of common arithmetic operations built in. A full list of **all** methods can be found in the appendix (there are a lot!) All of these operations should have similar syntax to their numpy equivalents.(Feel free to skip if you already know this!)
###Code
x = torch.rand(3, 3)
print(x)
print("\n")
# sum() - note the axis is the axis you move across when summing
print(f"Sum of every element of x: {x.sum()}")
print(f"Sum of the columns of x: {x.sum(axis=0)}")
print(f"Sum of the rows of x: {x.sum(axis=1)}")
print("\n")
print(f"Mean value of all elements of x {x.mean()}")
print(f"Mean values of the columns of x {x.mean(axis=0)}")
print(f"Mean values of the rows of x {x.mean(axis=1)}")
###Output
_____no_output_____
###Markdown
**Matrix Operations**The ```@``` symbol is overridden to represent matrix multiplication. You can also use ```torch.matmul()``` to multiply tensors. For dot multiplication, you can use ```torch.dot()```, or manipulate the axes of your tensors and do matrix multiplication (we will cover that in the next section). Transposes of 2D tensors are obtained using ```torch.t()``` or ```Tensor.T```. Note the lack of brackets for ```Tensor.T``` - it is an attribute, not a method. Coding Exercise 2.2 : Simple tensor operationsBelow are two expressions involving operations on matrices. $$ \textbf{A} = \begin{bmatrix}2 &4 \\5 & 7 \end{bmatrix} \begin{bmatrix} 1 &1 \\2 & 3\end{bmatrix} + \begin{bmatrix}10 & 10 \\ 12 & 1 \end{bmatrix} $$and$$ b = \begin{bmatrix} 3 \\ 5 \\ 7\end{bmatrix} \cdot \begin{bmatrix} 2 \\ 4 \\ 8\end{bmatrix}$$The code block below, which computes these expressions using PyTorch, is incomplete - fill in the missing lines.
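Before tackling the exercise, here is a minimal sketch of the operators described above, using arbitrary example tensors (not the exercise matrices):
```python
M = torch.tensor([[1., 2.], [3., 4.]])
N = torch.tensor([[0., 1.], [1., 0.]])
v = torch.tensor([1., 2., 3.])
w = torch.tensor([4., 5., 6.])

print(M @ N)               # matrix multiplication with the @ operator
print(torch.matmul(M, N))  # the same result with torch.matmul()
print(torch.dot(v, w))     # dot product of two 1D tensors -> tensor(32.)
print(M.t())               # transpose via the method
print(M.T)                 # transpose via the attribute (note: no brackets)
```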
###Code
def simple_operations(a1: torch.Tensor, a2: torch.Tensor, a3: torch.Tensor):
################################################
## TODO for students: complete the first computation using the argument matrices
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
################################################
# multiplication of tensor a1 with tensor a2 and then add it with tensor a3
answer = ...
return answer
# add timing to airtable
atform.add_event('Coding Exercise 2.2 : Simple tensor operations-simple_operations')
# Computing expression 1:
# init our tensors
a1 = torch.tensor([[2, 4], [5, 7]])
a2 = torch.tensor([[1, 1], [2, 3]])
a3 = torch.tensor([[10, 10], [12, 1]])
## uncomment to test your function
# A = simple_operations(a1, a2, a3)
# print(A)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_5562ea1d.py) ```tensor([[20, 24], [31, 27]])```
###Code
def dot_product(b1: torch.Tensor, b2: torch.Tensor):
###############################################
## TODO for students: complete the second computation using the argument tensors
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
###############################################
# Use torch.dot() to compute the dot product of two tensors
product = ...
return product
# add timing to airtable
atform.add_event('Coding Exercise 2.2 : Simple tensor operations-dot_product')
# Computing expression 2:
b1 = torch.tensor([3, 5, 7])
b2 = torch.tensor([2, 4, 8])
## Uncomment to test your function
# b = dot_product(b1, b2)
# print(b)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_00491ea4.py) ```tensor(82)``` Section 2.3 Manipulating Tensors in Pytorch
###Code
# @title Video 5: Tensor Indexing
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1BM4y1K7pD", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"0d0KSJ3lJbg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 5: Tensor Indexing')
display(out)
###Output
_____no_output_____
###Markdown
**Indexing**Just as in numpy, elements in a tensor can be accessed by index. As in any numpy array, the first element has index 0, and ranges include the first element but exclude the last. We can access elements by their position relative to the end of the list by using negative indices. Indexing is also referred to as slicing.For example, [-1] selects the last element; [1:3] selects the second and the third elements, and [:-2] will select all elements excluding the last and second-to-last elements.
###Code
x = torch.arange(0, 10)
print(x)
print(x[-1])
print(x[1:3])
print(x[:-2])
###Output
_____no_output_____
###Markdown
When we have multidimensional tensors, indexing rules work the same way as numpy.
###Code
# make a 5D tensor
x = torch.rand(1, 2, 3, 4, 5)
print(f" shape of x[0]:{x[0].shape}")
print(f" shape of x[0][0]:{x[0][0].shape}")
print(f" shape of x[0][0][0]:{x[0][0][0].shape}")
###Output
_____no_output_____
###Markdown
**Flatten and reshape**There are various methods for reshaping tensors. It is common to have to express 2D data in 1D format. Similarly, it is also common to have to reshape a 1D tensor into a 2D tensor. We can achieve this with the ```.flatten()``` and ```.reshape()``` methods.
###Code
z = torch.arange(12).reshape(6, 2)
print(f"Original z: \n {z}")
# 2D -> 1D
z = z.flatten()
print(f"Flattened z: \n {z}")
# and back to 2D
z = z.reshape(3, 4)
print(f"Reshaped (3x4) z: \n {z}")
###Output
_____no_output_____
###Markdown
You will also see the ```.view()``` methods used a lot to reshape tensors. There is a subtle difference between ```.view()``` and ```.reshape()```, though for now we will just use ```.reshape()```. The documentation can be found in the appendix. **Squeezing tensors**When processing batches of data, you will quite often be left with singleton dimensions. e.g. [1,10] or [256, 1, 3]. This dimension can quite easily mess up your matrix operations if you don't plan on it being there...In order to compress tensors along their singleton dimensions we can use the ```.squeeze()``` method. We can use the ```.unsqueeze()``` method to do the opposite.
###Code
x = torch.randn(1, 10)
# printing the zeroth element of the tensor will not give us the first number!
print(x.shape)
print(f"x[0]: {x[0]}")
###Output
_____no_output_____
###Markdown
Because of that pesky singleton dimension, x[0] gave us the first row instead!
###Code
# lets get rid of that singleton dimension and see what happens now
x = x.squeeze(0)
print(x.shape)
print(f"x[0]: {x[0]}")
# adding singleton dimensions works a similar way, and is often used when tensors
# being added need same number of dimensions
y = torch.randn(5, 5)
print(f"shape of y: {y.shape}")
# lets insert a singleton dimension
y = y.unsqueeze(1)
print(f"shape of y: {y.shape}")
###Output
_____no_output_____
###Markdown
**Permutation**Sometimes our dimensions will be in the wrong order! For example, we may be dealing with RGB images with dim [3x48x64], but our pipeline expects the colour dimension to be the last dimension i.e. [48x64x3]. To get around this we can use ```.permute()```
###Code
# `x` has dimensions [color,image_height,image_width]
x = torch.rand(3, 48, 64)
# we want to permute our tensor to be [ image_height , image_width , color ]
x = x.permute(1, 2, 0)
# permute(1,2,0) means:
# the 0th dim of my new tensor = the 1st dim of my old tensor
# the 1st dim of my new tensor = the 2nd
# the 2nd dim of my new tensor = the 0th
print(x.shape)
###Output
_____no_output_____
###Markdown
You may also see ```.transpose()``` used. This works in a similar way to permute, but can only swap two dimensions at once (see the quick sketch below). **Concatenation** In the next example, we concatenate two matrices along rows (axis 0, the first element of the shape) vs. columns (axis 1, the second element of the shape). We can see that the first output tensor’s axis-0 length (6) is the sum of the two input tensors’ axis-0 lengths (3 + 3), while the second output tensor’s axis-1 length (8) is the sum of the two input tensors’ axis-1 lengths (4 + 4).
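Since `.transpose()` is only mentioned in passing, here is a quick sketch of it before we move on to the concatenation example in the next cell:
```python
x = torch.rand(3, 48, 64)
y = x.transpose(0, 2)   # swap dimensions 0 and 2, leaving dimension 1 alone
print(y.shape)          # torch.Size([64, 48, 3])
```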
###Code
# Create two tensors of the same shape
x = torch.arange(12, dtype=torch.float32).reshape((3, 4))
y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])
#concatenate them along rows
cat_rows = torch.cat((x, y), dim=0)
# concatenate along columns
cat_cols = torch.cat((x, y), dim=1)
# printing outputs
print('Concatenated by rows: shape{} \n {}'.format(list(cat_rows.shape), cat_rows))
print('\n Concatenated by colums: shape{} \n {}'.format(list(cat_cols.shape), cat_cols))
###Output
_____no_output_____
###Markdown
**Conversion to Other Python Objects**Converting a tensor to a NumPy array, or vice versa, is easy. Be aware of how memory is handled: for CPU tensors, `Tensor.numpy()` and `torch.from_numpy()` share the underlying memory with the source, so in-place changes to one are visible in the other, whereas `torch.tensor()` always makes a copy. Also, when converting to a NumPy array, the information being tracked by the tensor is lost, i.e., the computational graph. This will be covered in detail when you are introduced to autograd tomorrow!
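A minimal sketch of the memory-sharing behaviour described above (for CPU tensors):
```python
t = torch.ones(3)
a = t.numpy()        # `a` is a view of the same CPU memory as `t`
t.add_(1)            # in-place update of the tensor
print(a)             # [2. 2. 2.] - the NumPy array sees the change
b = torch.tensor(a)  # torch.tensor() makes a copy instead
a += 1
print(b)             # tensor([2., 2., 2.]) - the copy is unaffected
```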
###Code
x = torch.randn(5)
print(f"x: {x} | x type: {x.type()}")
y = x.numpy()
print(f"y: {y} | y type: {type(y)}")
z = torch.tensor(y)
print(f"z: {z} | z type: {z.type()}")
###Output
_____no_output_____
###Markdown
To convert a size-1 tensor to a Python scalar, we can invoke the item function or Python’s built-in functions.
###Code
a = torch.tensor([3.5])
a, a.item(), float(a), int(a)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.3: Manipulating TensorsUsing a combination of the methods discussed above, complete the functions below. **Function A** This function takes in two 2D tensors $A$ and $B$ and returns the column sum of $A$ multiplied by the sum of all the elements of $B$, i.e., a scalar, e.g.,:$ A = \begin{bmatrix}1 & 1 \\1 & 1 \end{bmatrix} \,$and$ B = \begin{bmatrix}1 & 2 & 3\\1 & 2 & 3 \end{bmatrix} \,$so$ \, Out = \begin{bmatrix} 2 & 2 \\\end{bmatrix} \cdot 12 = \begin{bmatrix}24 & 24\\\end{bmatrix}$**Function B** This function takes in a square matrix $C$ and returns a 2D tensor consisting of a flattened $C$ with the index of each element appended to this tensor in the row dimension, e.g.,:$ C = \begin{bmatrix}2 & 3 \\-1 & 10 \end{bmatrix} \,$so$ \, Out = \begin{bmatrix}0 & 2 \\1 & 3 \\2 & -1 \\3 & 10\end{bmatrix}$**Hint:** pay close attention to singleton dimensions**Function C**This function takes in two 2D tensors $D$ and $E$. If the dimensions allow it, this function returns the elementwise sum of $D$-shaped $E$, and $D$; else this function returns a 1D tensor that is the concatenation of the two tensors, e.g.,:$ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix} \,$and $ E = \begin{bmatrix}2 & 3 & 0 & 2 \\\end{bmatrix} \, $so$ \, Out = \begin{bmatrix}3 & 2 \\-1 & 5 \end{bmatrix}$$ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix}$and$ \, E = \begin{bmatrix}2 & 3 & 0 \\\end{bmatrix} \,$so$ \, Out = \begin{bmatrix}1 & -1 & -1 & 3 & 2 & 3 & 0 \end{bmatrix}$**Hint:** `torch.numel()` is an easy way of finding the number of elements in a tensor
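Just to illustrate the `torch.numel()` hint (this is not part of the solution):
```python
x = torch.rand(2, 3, 4)
print(torch.numel(x))  # 24 elements in total
print(x.numel())       # the same thing, called as a method
```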
###Code
def functionA(my_tensor1, my_tensor2):
"""
This function takes in two 2D tensors `my_tensor1` and `my_tensor2`
and returns the column sum of
`my_tensor1` multiplied by the sum of all the elements of `my_tensor2`,
i.e., a scalar.
Args:
my_tensor1: torch.Tensor
my_tensor2: torch.Tensor
Returns:
output: torch.Tensor
The multiplication of the column sum of `my_tensor1` by the sum of
`my_tensor2`.
"""
################################################
## TODO for students: complete functionA
raise NotImplementedError("Student exercise: complete function A")
################################################
# TODO multiply the column sum of `my_tensor1` by the sum of `my_tensor2`
output = ...
return output
def functionB(my_tensor):
"""
This function takes in a square matrix `my_tensor` and returns a 2D tensor
consisting of a flattened `my_tensor` with the index of each element
appended to this tensor in the row dimension.
Args:
my_tensor: torch.Tensor
Returns:
output: torch.Tensor
Concatenated tensor.
"""
################################################
## TODO for students: complete functionB
raise NotImplementedError("Student exercise: complete function B")
################################################
# TODO flatten the tensor `my_tensor`
my_tensor = ...
# TODO create the idx tensor to be concatenated to `my_tensor`
idx_tensor = ...
# TODO concatenate the two tensors
output = ...
return output
def functionC(my_tensor1, my_tensor2):
"""
This function takes in two 2D tensors `my_tensor1` and `my_tensor2`.
If the dimensions allow it, it returns the
elementwise sum of `my_tensor1`-shaped `my_tensor2`, and `my_tensor1`;
else this function returns a 1D tensor that is the concatenation of the
two tensors.
Args:
my_tensor1: torch.Tensor
my_tensor2: torch.Tensor
Returns:
output: torch.Tensor
Concatenated tensor.
"""
################################################
## TODO for students: complete functionC
raise NotImplementedError("Student exercise: complete function C")
################################################
# TODO check we can reshape `my_tensor2` into the shape of `my_tensor1`
if ...:
# TODO reshape `my_tensor2` into the shape of `my_tensor1`
my_tensor2 = ...
# TODO sum the two tensors
output = ...
else:
# TODO flatten both tensors
my_tensor1 = ...
my_tensor2 = ...
# TODO concatenate the two tensors in the correct dimension
output = ...
return output
# add timing to airtable
atform.add_event('Coding Exercise 2.3: Manipulating Tensors')
## Implement the functions above and then uncomment the following lines to test your code
# print(functionA(torch.tensor([[1, 1], [1, 1]]), torch.tensor([[1, 2, 3], [1, 2, 3]])))
# print(functionB(torch.tensor([[2, 3], [-1, 10]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0, 2]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0]])))
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_ea1718cb.py) ```tensor([24, 24])tensor([[ 0, 2], [ 1, 3], [ 2, -1], [ 3, 10]])tensor([[ 3, 2], [-1, 5]])tensor([ 1, -1, -1, 3, 2, 3, 0])``` Section 2.4: GPUs
###Code
# @title Video 6: GPU vs CPU
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1nM4y1K7qx", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"9Mc9GFUtILY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 6: GPU vs CPU')
display(out)
###Output
_____no_output_____
###Markdown
By default, when we create a tensor it will *not* live on the GPU!
###Code
x = torch.randn(10)
print(x.device)
###Output
_____no_output_____
###Markdown
By default, Colab notebooks do not have access to a GPU. In order to start using GPUs we need to request one. We can do this by going to the Runtime tab at the top of the page: follow Runtime -> Change runtime type and select "GPU" from the Hardware Accelerator dropdown list to start playing with sending tensors to GPUs.Once you have done this, your runtime will restart and you will need to rerun the first setup cell to reimport PyTorch. Then proceed to the next cell.(For more information on the GPU usage policy, see the appendix.) **Now we have a GPU** The cell below should return True.
###Code
print(torch.cuda.is_available())
###Output
_____no_output_____
###Markdown
CUDA is an API developed by Nvidia for interfacing with GPUs. PyTorch provides us with a layer of abstraction, and allows us to launch CUDA kernels using pure Python. In short, we get the power of parallelising our tensor computations on GPUs, whilst only writing (relatively) simple Python!Here, we define the function `set_device`, which returns the device used in the notebook, i.e., `cpu` or `cuda`. Unless otherwise specified, we use this function at the top of every tutorial, and we store the device in a variable such as```pythonDEVICE = set_device()```Let's define the function using the PyTorch package `torch.cuda`, which is lazily initialized, so we can always import it, and use `is_available()` to determine if our system supports CUDA.
###Code
def set_device():
device = "cuda" if torch.cuda.is_available() else "cpu"
if device != "cuda":
print("GPU is not enabled in this notebook. \n"
"If you want to enable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `GPU` from the dropdown menu")
else:
print("GPU is enabled in this notebook. \n"
"If you want to disable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `None` from the dropdown menu")
return device
###Output
_____no_output_____
###Markdown
Let's make some CUDA tensors!
###Code
# common device agnostic way of writing code that can run on cpu OR gpu
# that we provide for you in each of the tutorials
DEVICE = set_device()
# we can specify a device when we first create our tensor
x = torch.randn(2, 2, device=DEVICE)
print(x.dtype)
print(x.device)
# we can also use the .to() method to change the device a tensor lives on
y = torch.randn(2, 2)
print(f"y before calling to() | device: {y.device} | dtype: {y.type()}")
y = y.to(DEVICE)
print(f"y after calling to() | device: {y.device} | dtype: {y.type()}")
###Output
_____no_output_____
###Markdown
**Operations between cpu tensors and cuda tensors**Note that the type of the tensor changed after calling ```.to()```. What happens if we try and perform operations on tensors on devices?
###Code
x = torch.tensor([0, 1, 2], device=DEVICE)
y = torch.tensor([3, 4, 5], device="cpu")
# Uncomment the following line and run this cell
# z = x + y
###Output
_____no_output_____
###Markdown
We cannot combine cuda tensors and cpu tensors in this fashion. If we want to compute an operation that combines tensors on different devices, we need to move them first! We can use the `.to()` method as before, or the `.cpu()` and `.cuda()` methods. Note that using `.cuda()` will throw an error if CUDA is not enabled on your machine.Generally in this course all deep learning is done on the GPU, while other computation is done on the CPU, so sometimes we have to pass things back and forth; you will see us call these methods throughout the tutorials.
###Code
x = torch.tensor([0, 1, 2], device=DEVICE)
y = torch.tensor([3, 4, 5], device="cpu")
z = torch.tensor([6, 7, 8], device=DEVICE)
# moving to cpu
x = x.to("cpu") # alternatively, you can use x = x.cpu()
print(x + y)
# moving to gpu
y = y.to(DEVICE) # alternatively, you can use y = y.cuda()
print(y + z)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.4: Just how much faster are GPUs?Below is a simple function, `simpleFun`. Complete it such that it performs its operations entirely on the chosen device, so that we can compare running it on the CPU and on the GPU. We will use the helper function `timeFun(f, dim, iterations, device)`.
###Code
dim = 10000
iterations = 1
def simpleFun(dim, device):
"""
Args:
dim: integer
device: "cpu" or "cuda:0"
Returns:
Nothing.
"""
###############################################
## TODO for students: recreate the above function, but
## ensure all computation happens on the GPU
raise NotImplementedError("Student exercise: fill in the missing code to create the tensors")
###############################################
x = ...
y = ...
z = ...
x = ...
y = ...
del x
del y
del z
## TODO: Implement the function above and uncomment the following lines to test your code
# timeFun(f=simpleFun, dim=dim, iterations=iterations, device="cpu")
# timeFun(f=simpleFun, dim=dim, iterations=iterations, device=DEVICE)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_032dcba8.py) Sample output (depends on your hardware)```time taken for 1 iterations of simpleFun(10000): 28.50481time taken for 1 iterations of simpleFunGPU(10000): 0.91102``` **Discuss!**Try reducing the dimensions of the tensors and increasing the iterations. You can get to a point where the CPU-only function is faster than the GPU function. Why might this be? Section 2.5: Datasets and Dataloaders
###Code
# @title Video 7: Getting Data
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1744y127SQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"LSkjPM1gFu0", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 7: Getting Data')
display(out)
###Output
_____no_output_____
###Markdown
When training neural network models you will be working with large amounts of data. Fortunately, PyTorch offers some great tools that help you organize and manipulate your data samples.
###Code
# Import dataset and dataloaders related packages
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
from torchvision.transforms import Compose, Grayscale
###Output
_____no_output_____
###Markdown
**Datasets**The `torchvision` package gives you easy access to many of the publicly available datasets. Let's load the [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset, which contains color images of 10 different classes, like vehicles and animals.Creating an object of type `datasets.CIFAR10` will automatically download and load all images from the dataset. The resulting data structure can be treated as a list containing data samples and their corresponding labels.
###Code
# Download and load the images from the CIFAR10 dataset
cifar10_data = datasets.CIFAR10(
root="data", # path where the images will be stored
download=True, # all images should be downloaded
transform=ToTensor() # transform the images to tensors
)
# Print the number of samples in the loaded dataset
print(f"Number of samples: {len(cifar10_data)}")
print(f"Class names: {cifar10_data.classes}")
###Output
_____no_output_____
###Markdown
We have 50000 samples loaded. Now let's take a look at one of them in detail. Each sample consists of an image and its corresponding label.
###Code
# Choose a random sample
random.seed(2021)
image, label = cifar10_data[random.randint(0, len(cifar10_data))]
print(f"Label: {cifar10_data.classes[label]}")
print(f"Image size: {image.shape}")
###Output
_____no_output_____
###Markdown
Color images are modeled as 3 dimensional tensors. The first dimension corresponds to the channels (C) of the image (in this case we have RGB images). The second dimension is the height (H) of the image and the third is the width (W). We can denote this image format as C × H × W. Coding Exercise 2.5: Display an image from the datasetLet's try to display the image using `matplotlib`. The code below will not work, because `imshow` expects to have the image in a different format - $H \times W \times C$.You need to reorder the dimensions of the tensor using the `permute` method of the tensor. PyTorch `torch.permute(*dims)` rearranges the original tensor according to the desired ordering and returns a new multidimensional rotated tensor. The size of the returned tensor remains the same as that of the original.**Code hint:**```python create a tensor of size 2 x 4input_var = torch.randn(2, 4) print its size and the tensorprint(input_var.size())print(input_var) dimensions permutedinput_var = input_var.permute(1, 0) print its size and the permuted tensorprint(input_var.size())print(input_var)```
###Code
# TODO: Uncomment the following line to see the error that arises from the current image format
# plt.imshow(image)
# TODO: Comment the above line and fix this code by reordering the tensor dimensions
# plt.imshow(image.permute(...))
# plt.show()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_b04bd357.py)*Example output:*
###Code
#@title Video 8: Train and Test
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1rV411H7s5", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JokSIuPs-ys", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 8: Train and Test')
display(out)
###Output
_____no_output_____
###Markdown
**Training and Test Datasets**When loading a dataset, you can specify if you want to load the training or the test samples using the `train` argument. We can load the training and test datasets separately. For simplicity, today we will not use both datasets separately, but this topic will be addressed in the coming days.
###Code
# Load the training samples
training_data = datasets.CIFAR10(
root="data",
train=True,
download=True,
transform=ToTensor()
)
# Load the test samples
test_data = datasets.CIFAR10(
root="data",
train=False,
download=True,
transform=ToTensor()
)
# @title Video 9: Data Augmentation - Transformations
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV19B4y1N77t", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"sjegA9OBUPw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 9: Data Augmentation - Transformations')
display(out)
###Output
_____no_output_____
###Markdown
**Dataloader**Another important concept is the `Dataloader`. It is a wrapper around the `Dataset` that splits it into minibatches (important for training the neural network) and makes the data iterable. The `shuffle` argument is used to shuffle the order of the samples across the minibatches.
###Code
# Create dataloaders with
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
*Reproducibility:* DataLoader will reseed workers following Randomness in multi-process data loading algorithm. Use `worker_init_fn()` and a `generator` to preserve reproducibility:```pythondef seed_worker(worker_id): worker_seed = torch.initial_seed() % 2**32 numpy.random.seed(worker_seed) random.seed(worker_seed)g_seed = torch.Generator()g_seed.manual_seed(my_seed)DataLoader( train_dataset, batch_size=batch_size, num_workers=num_workers, worker_init_fn=seed_worker, generator=g_seed )``` **Note:** For the `seed_worker` to have an effect, `num_workers` should be 2 or more. We can now query the next batch from the data loader and inspect it. For this we need to convert the dataloader object to a Python iterator using the function `iter` and then we can query the next batch using the function `next`.We can now see that we have a 4D tensor. This is because we have a 64 images in the batch ($B$) and each image has 3 dimensions: channels ($C$), height ($H$) and width ($W$). So, the size of the 4D tensor is $B \times C \times H \times W$.
###Code
# Load the next batch
batch_images, batch_labels = next(iter(train_dataloader))
print('Batch size:', batch_images.shape)
# Display the first image from the batch
plt.imshow(batch_images[0].permute(1, 2, 0))
plt.show()
###Output
_____no_output_____
###Markdown
**Transformations**Another useful feature when loading a dataset is applying transformations on the data - color conversions, normalization, cropping, rotation etc. There are many predefined transformations in the `torchvision.transforms` package and you can also combine them using the `Compose` transform. Check out the [PyTorch documentation](https://pytorch.org/vision/stable/transforms.html) for details. Coding Exercise 2.6: Load the CIFAR10 dataset as grayscale imagesThe goal of this exercise is to load the images from the CIFAR10 dataset as grayscale images. Note that we rerun the `set_seed` function to ensure reproducibility.
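Before tackling the exercise, here is a minimal, self-contained sketch of how a composed transform pipeline is applied to a single image. It uses `Normalize` (with made-up mean/std values) purely to illustrate `Compose`; it is not the transform you need for the exercise below.
###Code
# Minimal illustration of composing transforms (assumed example, not the exercise answer)
import numpy as np
from PIL import Image
from torchvision.transforms import Compose, ToTensor, Normalize

# A dummy 8x8 RGB image, just so this snippet is self-contained
dummy_image = Image.fromarray(np.uint8(np.random.rand(8, 8, 3) * 255))

preprocess = Compose([
    ToTensor(),  # PIL image -> float tensor in [0, 1], shape C x H x W
    Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),  # example values, not tuned for CIFAR10
])

transformed = preprocess(dummy_image)
print(transformed.shape, transformed.min().item(), transformed.max().item())
###Output
_____no_output_____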
###Code
def my_data_load():
###############################################
## TODO for students: load the CIFAR10 data using a transform
## that converts the images to grayscale tensors
raise NotImplementedError("Student exercise: fill in the missing code to load the data")
###############################################
## TODO Load the CIFAR10 data using a transform that converts the images to grayscale tensors
data = datasets.CIFAR10(...,
transform=...)
# Display a random grayscale image
image, label = data[random.randint(0, len(data))]
plt.imshow(image.squeeze(), cmap="gray")
plt.show()
return data
set_seed(seed=2021)
## After implementing the above code, uncomment the following lines to test your code
# data = my_data_load()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_6052d728.py)*Example output:* --- Section 3: Neural NetworksNow it's time for you to create your first neural network using PyTorch. This section will walk you through the process of:- Creating a simple neural network model- Training the network- Visualizing the results of the network- Tweaking the network
###Code
# @title Video 10: CSV Files
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1xy4y1T7kv", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JrC_UAJWYKU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 10: CSV Files')
display(out)
###Output
_____no_output_____
###Markdown
Section 3.1: Data LoadingFirst we need some sample data to train our network on. You can use the function below to generate an example dataset consisting of 2D points along two interleaving half circles. The data will be stored in a file called `sample_data.csv`. You can inspect the file directly in Colab by going to Files on the left side and opening the CSV file.
###Code
# @title Generate sample data
# @markdown we used `scikit-learn` module
from sklearn.datasets import make_moons
# Create a dataset of 256 points with a little noise
X, y = make_moons(256, noise=0.1)
# Store the data as a Pandas data frame and save it to a CSV file
df = pd.DataFrame(dict(x0=X[:,0], x1=X[:,1], y=y))
df.to_csv('sample_data.csv')
###Output
_____no_output_____
###Markdown
Now we can load the data from the CSV file using the Pandas library. Pandas provides many functions for reading files in various formats. When loading data from a CSV file, we can reference the columns directly by their names.
###Code
# Load the data from the CSV file in a Pandas DataFrame
data = pd.read_csv("sample_data.csv")
# Create a 2D numpy array from the x0 and x1 columns
X_orig = data[["x0", "x1"]].to_numpy()
# Create a 1D numpy array from the y column
y_orig = data["y"].to_numpy()
# Print the sizes of the generated 2D points X and the corresponding labels Y
print(f"Size X:{X_orig.shape}")
print(f"Size y:{y_orig.shape}")
# Visualize the dataset. The color of the points is determined by the labels `y_orig`.
plt.scatter(X_orig[:, 0], X_orig[:, 1], s=40, c=y_orig)
plt.show()
###Output
_____no_output_____
###Markdown
**Prepare Data for PyTorch**Now let's prepare the data in a format suitable for PyTorch - convert everything into tensors.
###Code
# Initialize the device variable
DEVICE = set_device()
# Convert the 2D points to a float32 tensor
X = torch.tensor(X_orig, dtype=torch.float32)
# Upload the tensor to the device
X = X.to(DEVICE)
print(f"Size X:{X.shape}")
# Convert the labels to a long integer tensor
y = torch.from_numpy(y_orig).type(torch.LongTensor)
# Upload the tensor to the device
y = y.to(DEVICE)
print(f"Size y:{y.shape}")
###Output
_____no_output_____
###Markdown
Section 3.2: Create a Simple Neural Network
###Code
# @title Video 11: Generating the Neural Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1fK4y1M74a", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"PwSzRohUvck", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 11: Generating the Neural Network')
display(out)
###Output
_____no_output_____
###Markdown
For this example we want to have a simple neural network consisting of 3 layers:- 1 input layer of size 2 (our points have 2 coordinates)- 1 hidden layer of size 16 (you can play with different numbers here)- 1 output layer of size 2 (we want to have the scores for the two classes)During the course you will deal with different kinds of neural networks. On Day 2 we will focus on linear networks, but you will work with some more complicated architectures in the coming days. The example here is meant to demonstrate the process of creating and training a neural network end-to-end.**Programming the Network**PyTorch provides a base class for all neural network modules called [`nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html). You need to inherit from `nn.Module` and implement some important methods:`__init__`In the `__init__` method you need to define the structure of your network. Here you will specify which layers the network will consist of, which activation functions will be used, etc.`forward`All neural network modules need to implement the `forward` method. It specifies the computations the network needs to do when data is passed through it.`predict`This is not an obligatory method of a neural network module, but it is a good practice if you want to quickly get the most likely label from the network. It calls the `forward` method and chooses the label with the highest score.`train`This is also not an obligatory method, but it is a good practice to have. The method will be used to train the network parameters and will be implemented later in the notebook.> Note that you can use the `__call__` method of a module directly and it will invoke the `forward` method: `net()` does the same as `net.forward()`.
###Code
# Inherit from nn.Module - the base class for neural network modules provided by Pytorch
class NaiveNet(nn.Module):
# Define the structure of your network
def __init__(self):
super(NaiveNet, self).__init__()
# The network is defined as a sequence of operations
self.layers = nn.Sequential(
nn.Linear(2, 16), # Transformation from the input to the hidden layer
nn.ReLU(), # Activation function (ReLU) is a non-linearity which is widely used because it is cheap to compute. The function returns 0 if it receives any
# negative input, but for any positive value x, it returns that value back.
nn.Linear(16, 2), # Transformation from the hidden to the output layer
)
# Specify the computations performed on the data
def forward(self, x):
# Pass the data through the layers
return self.layers(x)
# Choose the most likely label predicted by the network
def predict(self, x):
# Pass the data through the networks
output = self.forward(x)
# Choose the label with the highest score
return torch.argmax(output, 1)
# Train the neural network (will be implemented later)
def train(self, X, y):
pass
###Output
_____no_output_____
###Markdown
**Check that your network works**Create an instance of your model and visualize it
###Code
# Create new NaiveNet and transfer it to the device
model = NaiveNet().to(DEVICE)
# Print the structure of the network
print(model)
###Output
_____no_output_____
###Markdown
Coding Exercise 3.2: Classify some samplesNow let's pass some of the points of our dataset through the network and see if it works. You should not expect the network to actually classify the points correctly, because it has not been trained yet. The goal here is just to get some experience with the data structures that are passed to the forward and predict methods and their results.
###Code
## Get the samples
# X_samples = ...
# print("Sample input:\n", X_samples)
## Do a forward pass of the network
# output = ...
# print("\nNetwork output:\n", output)
## Predict the label of each point
# y_predicted = ...
# print("\nPredicted labels:\n", y_predicted)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_af8ae0ff.py) ```Sample input: tensor([[ 0.9066, 0.5052], [-0.2024, 1.1226], [ 1.0685, 0.2809], [ 0.6720, 0.5097], [ 0.8548, 0.5122]], device='cuda:0')Network output: tensor([[ 0.1543, -0.8018], [ 2.2077, -2.9859], [-0.5745, -0.0195], [ 0.1924, -0.8367], [ 0.1818, -0.8301]], device='cuda:0', grad_fn=)Predicted labels: tensor([0, 0, 1, 0, 0], device='cuda:0')``` Section 3.3: Train Your Neural Network
###Code
# @title Video 12: Train the Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1v54y1n7CS", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"4MIqnE4XPaA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 12: Train the Network')
display(out)
###Output
_____no_output_____
###Markdown
Now it is time to train your network on your dataset. Don't worry if you don't fully understand everything yet - we will cover training in much more detail in the coming days. For now, the goal is just to see your network in action!You will usually implement the `train` method directly when implementing your class `NaiveNet`. Here, we will implement it as a function outside of the class in order to have it in a separate cell.
###Code
# @title Helper function to plot the decision boundary
# Code adapted from this notebook: https://jonchar.net/notebooks/Artificial-Neural-Network-with-Keras/
from pathlib import Path
def plot_decision_boundary(model, X, y, device):
# Transfer the data to the CPU
X = X.cpu().numpy()
y = y.cpu().numpy()
# Check if the frames folder exists and create it if needed
frames_path = Path("frames")
if not frames_path.exists():
frames_path.mkdir()
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Predict the function value for the whole grid
grid_points = np.c_[xx.ravel(), yy.ravel()]
grid_points = torch.from_numpy(grid_points).type(torch.FloatTensor)
Z = model.predict(grid_points.to(device)).cpu().numpy()
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.binary)
# Implement the train function given a training dataset X and corresponding labels y
def train(model, X, y):
# The Cross Entropy Loss is suitable for classification problems
loss_function = nn.CrossEntropyLoss()
# Create an optimizer (Stochastic Gradient Descent) that will be used to train the network
learning_rate = 1e-2
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
# Number of epochs
epochs = 15000
# List of losses for visualization
losses = []
for i in range(epochs):
# Pass the data through the network and compute the loss
# We'll use the whole dataset during the training instead of using batches
# in order to keep the code simple for now.
y_logits = model.forward(X)
loss = loss_function(y_logits, y)
# Clear the previous gradients and compute the new ones
optimizer.zero_grad()
loss.backward()
# Adapt the weights of the network
optimizer.step()
# Store the loss
losses.append(loss.item())
# Print the results at every 1000th epoch
if i % 1000 == 0:
print(f"Epoch {i} loss is {loss.item()}")
plot_decision_boundary(model, X, y, DEVICE)
plt.savefig('frames/{:05d}.png'.format(i))
return losses
# Create a new network instance a train it
model = NaiveNet().to(DEVICE)
losses = train(model, X, y)
###Output
_____no_output_____
###Markdown
**Plot the loss during training**Plot the loss during the training to see how it reduces and converges.
###Code
plt.plot(np.linspace(1, len(losses), len(losses)), losses)
plt.xlabel("Epoch")
plt.ylabel("Loss")
# @title Visualize the training process
# @markdown ### Execute this cell!
!pip install imageio --quiet
!pip install pathlib --quiet
import imageio
from IPython.core.interactiveshell import InteractiveShell
from IPython.display import Image, display
from pathlib import Path
InteractiveShell.ast_node_interactivity = "all"
# Make a list with all images
images = []
for i in range(10):
filename = "frames/0"+str(i)+"000.png"
images.append(imageio.imread(filename))
# Save the gif
imageio.mimsave('frames/movie.gif', images)
gifPath = Path("frames/movie.gif")
with open(gifPath,'rb') as f:
display(Image(data=f.read(), format='png'))
# @title Video 13: Play with it
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Cq4y1W7BH", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"_GGkapdOdSY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 13: Play with it')
display(out)
###Output
_____no_output_____
###Markdown
Exercise 3.3: Tweak your NetworkYou can now play around with the network a little bit to get a feeling of what different parameters are doing. Here are some ideas of what you could try:- Increase or decrease the number of epochs for training- Increase or decrease the size of the hidden layer- Add one additional hidden layer (see the sketch below)Can you get the network to better fit the data?
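As a starting point for the last idea, here is a minimal sketch (one arbitrary choice, not a prescribed solution) of a `NaiveNet` variant with one additional hidden layer; the hidden sizes 16 and 8 are purely illustrative.
###Code
# A sketch of a deeper variant of NaiveNet with one extra hidden layer.
# The hidden sizes (16 and 8) are arbitrary illustrative choices.
class DeeperNaiveNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(2, 16),  # input layer -> first hidden layer
            nn.ReLU(),
            nn.Linear(16, 8),  # extra hidden layer
            nn.ReLU(),
            nn.Linear(8, 2),   # output layer with one score per class
        )

    def forward(self, x):
        return self.layers(x)

    def predict(self, x):
        return torch.argmax(self.forward(x), 1)

## Uncomment to train it exactly like the original model:
# deeper_model = DeeperNaiveNet().to(DEVICE)
# deeper_losses = train(deeper_model, X, y)
###Output
_____no_output_____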
###Code
# @title Video 14: XOR Widget
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1mB4y1N7QS", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"oTr1nE2rCWg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 14: XOR Widget')
display(out)
###Output
_____no_output_____
###Markdown
Exclusive OR (XOR) logical operation gives a true (`1`) output when the number of true inputs is odd. That is, a true output results if one, and only one, of the inputs to the gate is true. If both inputs are false (`0`) or both are true (`1`), a false output results. Mathematically speaking, XOR represents the inequality function, i.e., the output is true if the inputs are not alike; otherwise, the output is false.In case of two inputs ($X$ and $Y$) the following truth table is applied:\begin{array}{ccc}X & Y & \text{XOR} \\\hline0 & 0 & 0 \\0 & 1 & 1 \\1 & 0 & 1 \\1 & 1 & 0 \\\end{array}Here, with `0`, we denote `False`, and with `1` we denote `True` in boolean terms. Interactive Demo 3.3: Solving XORHere we use an open source and famous visualization widget developed by the TensorFlow team, available [here](https://github.com/tensorflow/playground).* Play with the widget and observe that you cannot solve the continuous XOR dataset.* Now add one hidden layer with three units, play with the widget, and set weights by hand to solve this dataset perfectly.For the second part, you should set the weights by clicking on the connections and either type the value or use the up and down keys to change it by one increment. You could also do the same for the biases by clicking on the tiny square to each neuron's bottom left.Even though there are infinitely many solutions, a neat solution when $f(x)$ is ReLU is: \begin{equation} y = f(x_1)+f(x_2)-f(x_1+x_2)\end{equation}Try to set the weights and biases to implement this function after you have played enough :) A small sketch checking this identity follows the demo below.
###Code
# @markdown ###Play with the parameters to solve XOR
from IPython.display import HTML
HTML('<iframe width="1020" height="660" src="https://playground.arashash.com/#activation=relu&batchSize=10&dataset=xor®Dataset=reg-plane&learningRate=0.03®ularizationRate=0&noise=0&networkShape=&seed=0.91390&showTestData=false&discretize=false&percTrainData=90&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false" allowfullscreen></iframe>')
# @markdown Do you think we can solve the discrete XOR (only 4 possibilities) with only 2 hidden units?
w1_min_xor = 'Select' #@param ['Select', 'Yes', 'No']
if w1_min_xor == 'No':
print("Correct!")
else:
print("How about giving it another try?")
###Output
_____no_output_____
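###Markdown
Before moving on, here is a minimal sketch that checks the identity above with $f$ = ReLU on the continuous XOR setup: one arbitrary point per quadrant, where the "XOR" class corresponds to the quadrants in which $x_1$ and $x_2$ have opposite signs.
###Code
# Check that y = relu(x1) + relu(x2) - relu(x1 + x2) is positive exactly when
# x1 and x2 have opposite signs (the "XOR" class of the continuous dataset).
import torch
import torch.nn.functional as F

# One representative point per quadrant (the values are arbitrary illustrative choices)
pts = torch.tensor([[2., 3.], [2., -3.], [-2., 3.], [-2., -3.]])
x1, x2 = pts[:, 0], pts[:, 1]
y = F.relu(x1) + F.relu(x2) - F.relu(x1 + x2)
print(y)      # tensor([0., 2., 2., 0.])
print(y > 0)  # True only where the signs of x1 and x2 differ
###Output
_____no_output_____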
###Markdown
--- Section 4: Ethics And Course Info
###Code
# @title Video 15: Ethics
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Hw41197oB", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Kt6JLi3rUFU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Video 16: Be a group
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1j44y1272h", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Sfp6--d_H1A", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Video 17: Syllabus
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1iB4y1N7uQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"cDvAqG_hAvQ", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Meet our lecturers:Week 1: the building blocks* [Konrad Kording](https://kordinglab.com)* [Andrew Saxe](https://www.saxelab.org/)* [Surya Ganguli](https://ganguli-gang.stanford.edu/)* [Ioannis Mitliagkas](http://mitliagkas.github.io/)* [Lyle Ungar](https://www.cis.upenn.edu/~ungar/)Week 2: making things work* [Alona Fyshe](https://webdocs.cs.ualberta.ca/~alona/)* [Alexander Ecker](https://eckerlab.org/)* [James Evans](https://sociology.uchicago.edu/directory/james-evans)* [He He](https://hhexiy.github.io/)* [Vikash Gilja](https://tnel.ucsd.edu/bio) and [Akash Srivastava](https://akashgit.github.io/)Week 3: more magic* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)* [Jane Wang](http://www.janexwang.com/) and [Feryal Behbahani](https://feryal.github.io/)* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)* [Josh Vogelstein](https://jovo.me/) and [Vincenzo Lamonaco](https://www.vincenzolomonaco.com/)Now, go to the [visualization of ICLR papers](https://iclr.cc/virtual/2021/paper_vis.html). Read a few abstracts. Look at the various clusters. Where do you see yourself in this map? --- Submit to Airtable
###Code
# @title Video 18: Submission info
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1e44y127ti", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JwTn7ej2dq8", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
This is Darryl, the Deep Learning Dapper Lion, and he's here to teach you about content submission to Airtable. At the end of each tutorial there will be an Airtable Submission Cell. Run the cell to generate the Airtable submission button and click on it to submit your information to Airtable.If it is the last tutorial of the day, your button will look like this and take you to the end-of-day survey: otherwise it will look like this: It is critical that you push the submit button for every tutorial you run. Even if you don't finish the tutorial, still submit!Submitting is the only way we can verify that you attempted each tutorial, which is critical for the award of your completion certificate at the end of the course.Finally, we try to keep the Airtable code as hidden as possible, but if you ever see any calls to `atform` such as `atform.add_event()` in the coding exercises, just know that is for saving Airtable information only. It will not affect the code that is being run around it in any way, so please do not modify, comment out, or worry about any of those lines of code.Now, let's try submitting today's course to Airtable by running the next cell and clicking the button when it appears.
###Code
# @title Airtable Submission Link
from IPython import display
display.HTML(
f"""
<div>
<a href= "{atform.url()}" target="_blank">
<img src="https://github.com/NeuromatchAcademy/course-content-dl/blob/main/tutorials/static/SurveyButton.png?raw=1"
alt="button link to survey" style="width:410px"></a>
</div>""" )
###Output
_____no_output_____
###Markdown
--- Bonus - 60 years of Machine Learning Research in one Plotby [Hendrik Strobelt](http://hendrik.strobelt.com) (MIT-IBM Watson AI Lab) with support from Benjamin Hoover.In this notebook we visualize a subset* of 3,300 articles retrieved from the AllenAI [S2ORC dataset](https://github.com/allenai/s2orc). We represent each paper by a position that is the output of a dimensionality reduction method applied to a vector representation of each paper. The vector representation is the output of a neural network.*The selection is heavily biased by the keywords and methodology we used to filter. Please see the details section to learn about what we did.
###Code
# @title Import `altair` and load the data
!pip install altair vega_datasets --quiet
import altair as alt # altair is defining data visualizations
# Source data files
# Position data file maps ID to x,y positions
POS_FILE = 'http://gltr.io/temp/ml_regexv1_cs_ma_citation+_99perc.pos_umap_cosine_100_d0.1.json'
# Metadata file maps ID to title, abstract, author,....
META_FILE = 'http://gltr.io/temp/ml_regexv1_cs_ma_citation+_99perc_clean.csv'
# data loading and wrangling
def load_data():
positions = pd.read_json(POS_FILE)
positions[['x', 'y']] = positions['pos'].to_list()
meta = pd.read_csv(META_FILE)
return positions.merge(meta, left_on='id', right_on='paper_id')
# load data
data = load_data()
# @title Define Visualization using Altair
YEAR_PERIOD = "quinquennial" # @param
selection = alt.selection_multi(fields=[YEAR_PERIOD], bind='legend')
data[YEAR_PERIOD] = (data["year"] / 5.0).apply(np.floor) * 5
chart = alt.Chart(data[["x", "y", "authors", "title", YEAR_PERIOD, "citation_count"]], width=800,
height=800).mark_circle(radius=2, opacity=0.2).encode(
alt.Color(YEAR_PERIOD+':O',
scale=alt.Scale(scheme='viridis', reverse=False, clamp=True, domain=list(range(1955,2020,5))),
# legend=alt.Legend(title='Total Records')
),
alt.Size('citation_count',
scale=alt.Scale(type="pow", exponent=1, range=[15, 300])
),
alt.X('x:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
alt.Y('y:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
tooltip=['title', 'authors'],
# size='citation_count',
# color="decade:O",
opacity=alt.condition(selection, alt.value(.8), alt.value(0.2)),
).add_selection(
selection
).interactive()
###Output
_____no_output_____
###Markdown
Let's look at the visualization. Each dot represents one paper. Close dots mean that the respective papers are more closely related than distant ones. The color indicates the 5-year period of when the paper was published. The dot size indicates the citation count (within the S2ORC corpus) as of July 2020. The view is **interactive** and allows for three main interactions. Try them and play around.1. hover over a dot to see a tooltip (title, author)2. select a year in the legend (right) to filter dots3. zoom in/out with scroll -- double click resets view
###Code
chart
###Output
_____no_output_____
###Markdown
QuestionsBy playing around, can you find some answers to the following questions?1. Can you find topical clusters? What cluster might occur because of a filtering error?2. Can you see a temporal trend in the data and clusters?3. Can you determine when deep learning methods started booming?4. Can you find the key papers that were written before the DL "winter" that define milestones for a cluster? (tip: look for large dots of different color) MethodsHere is what we did:1. Filtering of all papers that fulfilled the criteria: - are categorized as `Computer Science` or `Mathematics` - one of the following keywords appearing in title or abstract: `"machine learning|artificial intelligence|neural network|(machine|computer) vision|perceptron|network architecture| RNN | CNN | LSTM | BLEU | MNIST | CIFAR |reinforcement learning|gradient descent| Imagenet "`2. per year, remove all papers that are below the 99th percentile of citation count in that year3. embed each paper by using abstract+title in the SPECTER model4. project based on embedding using UMAP5. visualize using Altair Find Authors
###Code
# @title Edit the `AUTHOR_FILTER` variable to full text search for authors.
AUTHOR_FILTER = "Rush " # @param space at the end means "word border"
### Don't ignore case when searching...
FLAGS = 0
### uncomment to ignore case
# FLAGS = re.IGNORECASE
## --- FILTER CODE.. make it your own ---
import re
data['issel'] = data['authors'].str.contains(AUTHOR_FILTER, na=False, flags=FLAGS, )
if data['issel'].mean()<0.0000000001:
print('No match found')
## --- FROM HERE ON VIS CODE ---
alt.Chart(data[["x", "y", "authors", "title", YEAR_PERIOD, "citation_count", "issel"]], width=800,
height=800) \
.mark_circle(stroke="black", strokeOpacity=1).encode(
alt.Color(YEAR_PERIOD+':O',
scale=alt.Scale(scheme='viridis', reverse=False),
# legend=alt.Legend(title='Total Records')
),
alt.Size('citation_count',
scale=alt.Scale(type="pow", exponent=1, range=[15, 300])
),
alt.StrokeWidth('issel:Q', scale=alt.Scale(type="linear", domain=[0,1], range=[0, 2]), legend=None),
alt.Opacity('issel:Q', scale=alt.Scale(type="linear", domain=[0,1], range=[.2, 1]), legend=None),
alt.X('x:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
alt.Y('y:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
tooltip=['title', 'authors'],
).interactive()
###Output
_____no_output_____
###Markdown
Tutorial 1: PyTorch**Week 1, Day 1: Basics and PyTorch****By Neuromatch Academy**__Content creators:__ Shubh Pachchigar, Vladimir Haltakov, Matthew Sargent, Konrad Kording__Content reviewers:__ Kelson Shilling-Scrivo, Deepak Raya, Siwei Bai__Content editors:__ Anoop Kulkarni, Spiros Chavlis__Production editors:__ Arush Tagade, Spiros Chavlis **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial ObjectivesWe have a few specific objectives for this tutorial:* Learn about PyTorch and tensors* Tensor Manipulations* Data Loading* GPUs and Cuda Tensors* Train NaiveNet* Get to know your pod* Start thinking about the course as a whole
###Code
# @title Tutorial slides
# @markdown These are the slides for the videos in this tutorial today
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/wcjrv/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
--- Setup Throughout your Neuromatch tutorials, most (probably all!) notebooks contain setup cells. These cells will import the required Python packages (e.g., PyTorch, NumPy); set global or environment variables, and load in helper functions for things like plotting.Be sure to run all of the cells in the setup section. Feel free to expand them and have a look at what you are loading in, but you should be able to fulfill the learning objectives of every tutorial without having to look at these cells.If you start building your own projects on this code base, we highly recommend looking at these cells in more detail.
###Code
# @title Install dependencies
!pip install pandas --quiet
!pip install -U scikit-learn --quiet
# Imports
import time
import torch
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from torch import nn
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
# @title Figure Settings
import ipywidgets as widgets
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/content-creation/main/nma.mplstyle")
# @title Helper Functions
def checkExercise1(A, B, C, D):
"""
Helper function for checking exercise.
Args:
A: torch.Tensor
B: torch.Tensor
C: torch.Tensor
D: torch.Tensor
Returns:
Nothing.
"""
errors = []
#TODO better errors and error handling
if not torch.equal(A.to(int),torch.ones(20,21).to(int)):
errors.append(f"Got: {A} \n Expected: {torch.ones(20,21)} (shape: {torch.ones(20,21).shape})")
if not np.array_equal( B.numpy(),np.vander([1,2,3], 4)):
errors.append("B is not a tensor containing the elements of Z ")
if C.shape != (20,21):
errors.append("C is not the correct shape ")
if not torch.equal(D,torch.arange(4,41,step=2)):
errors.append("D does not contain the correct elements")
if errors == []:
print("All correct!")
else:
[print(e) for e in errors]
def timeFun(f, dim, iterations, device):
iterations = iterations
t_total = 0
for _ in range(iterations):
start = time.time()
f(dim, device)
end = time.time()
t_total += end - start
print(f"time taken for {iterations} iterations of {f.__name__}({dim}): {t_total:.5f}")
###Output
_____no_output_____
###Markdown
**Important note: Scratch Code Cells**If you want to quickly try out something or take a look at the data you can use scratch code cells. They allow you to run Python code, but will not mess up the structure of your notebook.To open a new scratch cell go to *Insert* → *Scratch code cell*. Section 1: Welcome to Neuromatch Deep learning course
###Code
# @title Video 1: Welcome and History
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Av411n7oL", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"ca21SNqt78I", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
This will be an intensive 3-week adventure. We will all learn Deep Learning. In a group. Groups need standards. Read our [Code of Conduct](https://docs.google.com/document/d/1eHKIkaNbAlbx_92tLQelXnicKXEcvFzlyzzeWjEtifM/edit?usp=sharing).
###Code
# @title Video 2: Why DL is cool
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1gf4y1j7UZ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"l-K6495BN-4", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
**Describe what you hope to get out of this course in about 100 words.** --- Section 2: The Basics of PyTorch PyTorch is a Python-based scientific computing package targeted at two sets of audiences:- A replacement for NumPy to use the power of GPUs- A deep learning platform that provides significant flexibility and speedAt its core, PyTorch provides a few key features:- A multidimensional [Tensor](https://pytorch.org/docs/stable/tensors.html) object, similar to [NumPy Array](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html) but with GPU acceleration.- An optimized **autograd** engine for automatically computing derivatives.- A clean, modular API for building and deploying **deep learning models**.You can find more information about PyTorch in the appendix. Section 2.1: Creating Tensors
###Code
# @title Video 3: Making Tensors
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Rw411d7Uy", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"jGKd_4tPGrw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
There are various ways of creating tensors, and when doing any real deep learning project we will usually have to do so. **Construct tensors directly:**---
###Code
# we can construct a tensor directly from some common python iterables,
# such as list and tuple; nested iterables can also be handled as long as the
# dimensions make sense
# tensor from a list
a = torch.tensor([0, 1, 2])
#tensor from a tuple of tuples
b = ((1.0, 1.1), (1.2, 1.3))
b = torch.tensor(b)
# tensor from a numpy array
c = np.ones([2, 3])
c = torch.tensor(c)
print("Tensor a:", a)
print("Tensor b:", b)
print("Tensor c:", c)
###Output
_____no_output_____
###Markdown
**Some common tensor constructors:**---
###Code
# the numerical arguments we pass to these constructors
# determine the shape of the output tensor
x = torch.ones(5, 3)
y = torch.zeros(2)
z = torch.empty(1, 1, 5)
print("Tensor x:", x)
print("Tensor y:", y)
print("Tensor z:", z)
###Output
_____no_output_____
###Markdown
Notice that ```.empty()``` does not return zeros, but seemingly random small numbers. Unlike ```.zeros()```, which initialises the elements of the tensor with zeros, ```.empty()``` just allocates the memory. It is hence a bit faster if you are looking to just create a tensor. **Creating random tensors and tensors like other tensors:**---
###Code
# there are also constructors for random numbers
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
# there are also constructors that allow us to construct
# a tensor according to the above constructors, but with
# dimensions equal to another tensor
c = torch.zeros_like(a)
d = torch.rand_like(c)
print("Tensor a: ", a)
print("Tensor b: ", b)
print("Tensor c: ", c)
print("Tensor d: ", d)
###Output
_____no_output_____
###Markdown
*Reproducibility*: - PyTorch random number generator: You can use `torch.manual_seed()` to seed the RNG for all devices (both CPU and CUDA)```pythonimport torchtorch.manual_seed(0)```- For custom operators, you might need to set python seed as well:```pythonimport randomrandom.seed(0)```- Random number generators in other libraries```pythonimport numpy as npnp.random.seed(0)``` Here, we define for you a function called `set_seed` that does the job for you!
###Code
def set_seed(seed=None, seed_torch=True):
"""
Function that controls randomness. NumPy and random modules must be imported.
Args:
seed : Integer
A non-negative integer that defines the random state. Default is `None`.
seed_torch : Boolean
If `True` sets the random seed for pytorch tensors, so pytorch module
must be imported. Default is `True`.
Returns:
Nothing.
"""
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
###Output
_____no_output_____
###Markdown
Now, let's use the `set_seed` function in the previous example. Execute the cell multiple times to verify that the numbers printed are always the same.
###Code
def simplefun(seed=True, my_seed=None):
if seed:
set_seed(seed=my_seed)
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
print("Tensor a: ", a)
print("Tensor b: ", b)
simplefun(seed=True, my_seed=0) # Turn `seed` to `False` or change `my_seed`
###Output
_____no_output_____
###Markdown
**Numpy-like number ranges:**---The ```.arange()``` and ```.linspace()``` methods behave as you would expect if you are familiar with numpy.
###Code
a = torch.arange(0, 10, step=1)
b = np.arange(0, 10, step=1)
c = torch.linspace(0, 5, steps=11)
d = np.linspace(0, 5, num=11)
print(f"Tensor a: {a}\n")
print(f"Numpy array b: {b}\n")
print(f"Tensor c: {c}\n")
print(f"Numpy array d: {d}\n")
###Output
_____no_output_____
###Markdown
Coding Exercise 2.1: Creating TensorsBelow you will find some incomplete code. Fill in the missing code to construct the specified tensors.We want the tensors: $A:$ 20 by 21 tensor consisting of ones$B:$ a tensor with elements equal to the elements of numpy array $Z$$C:$ a tensor with the same number of elements as $A$ but with values $\sim U(0,1)$$D:$ a 1D tensor containing the even numbers between 4 and 40 inclusive.
###Code
def tensor_creation(Z):
"""A function that creates various tensors.
Args:
Z (numpy.ndarray): An array of shape
Returns:
A : 20 by 21 tensor consisting of ones
B : a tensor with elements equal to the elements of numpy array Z
C : a tensor with the same number of elements as A but with values ∼U(0,1)
D : a 1D tensor containing the even numbers between 4 and 40 inclusive.
"""
#################################################
## TODO for students: fill in the missing code
## from the first expression
raise NotImplementedError("Student exercise: say what they should have done")
#################################################
A = ...
B = ...
C = ...
D = ...
return A, B, C, D
# numpy array to copy later
Z = np.vander([1, 2, 3], 4)
# Uncomment below to check your function!
# A, B, C, D = tensor_creation(Z)
# checkExercise1(A, B, C, D)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_d99622ef.py) ```All correct!``` Section 2.2: Operations in PyTorch**Tensor-Tensor operations**We can perform operations on tensors using methods under ```torch.```
###Code
# @title Video 4: Tensor Operators
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1G44y127As", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"R1R8VoYXBVA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
**Tensor-Tensor operations**We can perform operations on tensors using methods under ```torch.```
###Code
a = torch.ones(5, 3)
b = torch.rand(5, 3)
c = torch.empty(5, 3)
d = torch.empty(5, 3)
# this only works if c and d already exist
torch.add(a, b, out=c)
#Pointwise Multiplication of a and b
torch.multiply(a, b, out=d)
print(c)
print(d)
###Output
_____no_output_____
###Markdown
However, in PyTorch most common Python operators are overridden. The standard arithmetic operators (+, -, *, /, and **) have all been lifted to elementwise operations.
###Code
x = torch.tensor([1, 2, 4, 8])
y = torch.tensor([1, 2, 3, 4])
x + y, x - y, x * y, x / y, x**y # The ** operator is exponentiation
###Output
_____no_output_____
###Markdown
**Tensor Methods** Tensors also have a number of common arithmetic operations built in. A full list of **all** methods can be found in the appendix (there are a lot!). All of these operations should have similar syntax to their numpy equivalents. (Feel free to skip if you already know this!)
###Code
x = torch.rand(3, 3)
print(x)
print("\n")
# sum() - note the axis is the axis you move across when summing
print(f"Sum of every element of x: {x.sum()}")
print(f"Sum of the columns of x: {x.sum(axis=0)}")
print(f"Sum of the rows of x: {x.sum(axis=1)}")
print("\n")
print(f"Mean value of all elements of x {x.mean()}")
print(f"Mean values of the columns of x {x.mean(axis=0)}")
print(f"Mean values of the rows of x {x.mean(axis=1)}")
###Output
_____no_output_____
###Markdown
**Matrix Operations**The ```@``` symbol is overridden to represent matrix multiplication. You can also use ```torch.matmul()``` to multiply tensors. For dot products, you can use ```torch.dot()```, or manipulate the axes of your tensors and do matrix multiplication (we will cover that in the next section). Transposes of 2D tensors are obtained using ```torch.t()``` or ```Tensor.T```. Note the lack of brackets for ```Tensor.T``` - it is an attribute, not a method. Coding Exercise 2.2 : Simple tensor operationsBelow are two expressions involving operations on matrices. $$ \textbf{A} = \begin{bmatrix}2 &4 \\5 & 7 \end{bmatrix} \begin{bmatrix} 1 &1 \\2 & 3\end{bmatrix} + \begin{bmatrix}10 & 10 \\ 12 & 1 \end{bmatrix} $$and$$ b = \begin{bmatrix} 3 \\ 5 \\ 7\end{bmatrix} \cdot \begin{bmatrix} 2 \\ 4 \\ 8\end{bmatrix}$$The code block below computes these expressions using PyTorch but is incomplete - fill in the missing lines.
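Before tackling the exercise, here is a tiny illustrative sketch (our own example, not part of the exercise) of these operations in action:
###Code
M = torch.tensor([[1., 2.], [3., 4.]])
v = torch.tensor([1., 2.])
w = torch.tensor([3., 4.])
print(M @ M)               # matrix multiplication with the @ operator
print(torch.matmul(M, M))  # the equivalent function call
print(torch.dot(v, w))     # dot product of two 1D tensors
print(M.T)                 # transpose of a 2D tensor via the .T attribute
###Output
_____no_output_____
###Markdown
Now complete the exercise below.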
###Code
def simple_operations(a1: torch.Tensor, a2: torch.Tensor, a3: torch.Tensor):
################################################
  ## TODO for students: complete the first computation using the argument matrices
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
################################################
  # matrix-multiply tensor a1 with tensor a2 and then add tensor a3
answer = ...
return answer
# Computing expression 1:
# init our tensors
a1 = torch.tensor([[2, 4], [5, 7]])
a2 = torch.tensor([[1, 1], [2, 3]])
a3 = torch.tensor([[10, 10], [12, 1]])
## uncomment to test your function
# A = simple_operations(a1, a2, a3)
# print(A)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_51c270eb.py) ```tensor([[20, 24], [31, 27]])```
###Code
def dot_product(b1: torch.Tensor, b2: torch.Tensor):
###############################################
  ## TODO for students: complete the second computation using the argument tensors
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
###############################################
# Use torch.dot() to compute the dot product of two tensors
product = ...
return product
# Computing expression 2:
b1 = torch.tensor([3, 5, 7])
b2 = torch.tensor([2, 4, 8])
## Uncomment to test your function
# b = dot_product(b1, b2)
# print(b)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_2a69ad55.py) ```tensor(82)``` Section 2.3 Manipulating Tensors in Pytorch
###Code
# @title Video 5: Tensor Indexing
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1BM4y1K7pD", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"0d0KSJ3lJbg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
**Indexing**Just as in numpy, elements in a tensor can be accessed by index. As in any numpy array, the first element has index 0 and ranges are specified to include the first but before the last element. We can access elements according to their relative position to the end of the list by using negative indices. Indexing is also referred to as slicing.For example, [-1] selects the last element; [1:3] selects the second and the third elements, and [:-2] will select all elements excluding the last and second-to-last elements.
###Code
x = torch.arange(0, 10)
print(x)
print(x[-1])
print(x[1:3])
print(x[:-2])
###Output
_____no_output_____
###Markdown
When we have multidimensional tensors, indexing rules work the same way as numpy.
###Code
# make a 5D tensor
x = torch.rand(1, 2, 3, 4, 5)
print(f" shape of x[0]:{x[0].shape}")
print(f" shape of x[0][0]:{x[0][0].shape}")
print(f" shape of x[0][0][0]:{x[0][0][0].shape}")
###Output
_____no_output_____
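###Markdown
As an extra sketch (our own example, not from the original tutorial), comma-separated indexing on a 2D tensor works just like in numpy:
###Code
m = torch.arange(12).reshape(3, 4)
print(m[1, :])      # second row
print(m[:, -1])     # last column
print(m[0:2, 1:3])  # a 2x2 sub-block: rows 0-1, columns 1-2
###Output
_____no_output_____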
###Markdown
**Flatten and reshape**There are various methods for reshaping tensors. It is common to have to express 2D data in 1D format. Similarly, it is also common to have to reshape a 1D tensor into a 2D tensor. We can achieve this with the ```.flatten()``` and ```.reshape()``` methods.
###Code
z = torch.arange(12).reshape(6, 2)
print(f"Original z: \n {z}")
# 2D -> 1D
z = z.flatten()
print(f"Flattened z: \n {z}")
# and back to 2D
z = z.reshape(3, 4)
print(f"Reshaped (3x4) z: \n {z}")
###Output
_____no_output_____
###Markdown
You will also see the ```.view()``` method used a lot to reshape tensors. There is a subtle difference between ```.view()``` and ```.reshape()```, though for now we will just use ```.reshape()```. The documentation can be found in the appendix. **Squeezing tensors**When processing batches of data, you will quite often be left with singleton dimensions, e.g., [1, 10] or [256, 1, 3]. This dimension can quite easily mess up your matrix operations if you don't plan on it being there... In order to compress tensors along their singleton dimensions we can use the ```.squeeze()``` method. We can use the ```.unsqueeze()``` method to do the opposite.
###Code
x = torch.randn(1, 10)
# printing the zeroth element of the tensor will not give us the first number!
print(x.shape)
print(f"x[0]: {x[0]}")
###Output
_____no_output_____
###Markdown
Because of that pesky singleton dimension, x[0] gave us the first row instead!
###Code
# lets get rid of that singleton dimension and see what happens now
x = x.squeeze(0)
print(x.shape)
print(f"x[0]: {x[0]}")
# adding singleton dimensions works a similar way, and is often used when tensors
# being added need same number of dimensions
y = torch.randn(5, 5)
print(f"shape of y: {y.shape}")
# lets insert a singleton dimension
y = y.unsqueeze(1)
print(f"shape of y: {y.shape}")
###Output
_____no_output_____
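###Markdown
One more detail worth knowing (a small sketch of ours): calling ```.squeeze()``` with no argument removes *all* singleton dimensions, while passing a dimension index removes only that one.
###Code
w = torch.randn(1, 5, 1, 2)
print(w.squeeze().shape)   # all singleton dims removed -> torch.Size([5, 2])
print(w.squeeze(2).shape)  # only dim 2 removed -> torch.Size([1, 5, 2])
###Output
_____no_output_____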
###Markdown
**Permutation**Sometimes our dimensions will be in the wrong order! For example, we may be dealing with RGB images with dim [3x48x64], but our pipeline expects the colour dimension to be the last dimension i.e. [48x64x3]. To get around this we can use ```.permute()```
###Code
# `x` has dimensions [color,image_height,image_width]
x = torch.rand(3, 48, 64)
# we want to permute our tensor to be [ image_height , image_width , color ]
x = x.permute(1, 2, 0)
# permute(1,2,0) means:
# the 0th dim of my new tensor = the 1st dim of my old tensor
# the 1st dim of my new tensor = the 2nd
# the 2nd dim of my new tensor = the 0th
print(x.shape)
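# a related method is .transpose(dim0, dim1), which swaps exactly two dimensions at a time
# (illustration only): x.transpose(0, 2) on the [48, 64, 3] tensor above would give shape [3, 64, 48]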
###Output
_____no_output_____
###Markdown
You may also see ```.transpose()``` used. This works in a similar way as permute, but can only swap two dimensions at once. **Concatenation** In this example, we concatenate two matrices along rows (axis 0, the first element of the shape) vs. columns (axis 1, the second element of the shape). We can see that the first output tensor’s axis-0 length ( 6 ) is the sum of the two input tensors’ axis-0 lengths ( 3+3 ); while the second output tensor’s axis-1 length ( 8 ) is the sum of the two input tensors’ axis-1 lengths ( 4+4 ).
###Code
# Create two tensors of the same shape
x = torch.arange(12, dtype=torch.float32).reshape((3, 4))
y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])
#concatenate them along rows
cat_rows = torch.cat((x, y), dim=0)
# concatenate along columns
cat_cols = torch.cat((x, y), dim=1)
# printing outputs
print('Concatenated by rows: shape{} \n {}'.format(list(cat_rows.shape), cat_rows))
print('\n Concatenated by colums: shape{} \n {}'.format(list(cat_cols.shape), cat_cols))
###Output
_____no_output_____
###Markdown
**Conversion to Other Python Objects**Converting a tensor to a NumPy array, or vice versa, is easy, but it is worth knowing what shares memory. For CPU tensors, ```Tensor.numpy()``` and ```torch.from_numpy()``` share the underlying memory with the source object, while ```torch.tensor()``` always makes a copy (so in the cell below, `z` is an independent copy of `y`). Note that a GPU tensor must first be moved to the CPU (e.g., with ```.cpu()```) before it can be converted to a numpy array. Also, when converting to a numpy array, the information being tracked by the tensor will be lost, i.e., the computational graph. This will be covered in detail when you are introduced to autograd tomorrow!
###Code
x = torch.randn(5)
print(f"x: {x} | x type: {x.type()}")
y = x.numpy()
print(f"y: {y} | y type: {type(y)}")
z = torch.tensor(y)
print(f"z: {z} | z type: {z.type()}")
###Output
_____no_output_____
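###Markdown
A small sketch (ours) making the memory behaviour above concrete for CPU tensors: ```Tensor.numpy()``` and ```torch.from_numpy()``` share memory with the source, while ```torch.tensor()``` makes a copy.
###Code
t = torch.zeros(3)
a = t.numpy()            # shares memory with the CPU tensor
t[0] = 7.0
print(a)                 # the change is visible in the numpy array: [7. 0. 0.]

b = np.zeros(3)
u = torch.from_numpy(b)  # shares memory with the numpy array
v = torch.tensor(b)      # makes a copy
b[0] = 5.0
print(u)                 # tensor([5., 0., 0.], dtype=torch.float64)
print(v)                 # tensor([0., 0., 0.], dtype=torch.float64)
###Output
_____no_output_____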
###Markdown
To convert a size-1 tensor to a Python scalar, we can invoke the ```.item()``` method or Python's built-in functions such as ```float()``` and ```int()```.
###Code
a = torch.tensor([3.5])
a, a.item(), float(a), int(a)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.3: Manipulating TensorsUsing a combination of the methods discussed above, complete the functions below. **Function A** This function takes in two 2D tensors $A$ and $B$ and returns the column sum of A multiplied by the sum of all the elements of $B$, i.e., a scalar, e.g.: $ A = \begin{bmatrix}1 & 1 \\1 & 1 \end{bmatrix}$ $ B = \begin{bmatrix}1 & 2 & 3\\1 & 2 & 3 \end{bmatrix}$$ Out = \begin{bmatrix} 2 & 2 \\\end{bmatrix} \cdot 12 = \begin{bmatrix}24 & 24\\\end{bmatrix}$**Function B** This function takes in a square matrix $C$ and returns a 2D tensor consisting of a flattened $C$ with the index of each element appended to this tensor in the row dimension, e.g.: $ C = \begin{bmatrix}2 & 3 \\-1 & 10 \end{bmatrix}$ $ Out = \begin{bmatrix}0 & 2 \\1 & 3 \\2 & -1 \\3 & 10\end{bmatrix}$**Hint:** pay close attention to singleton dimensions. **Function C** This function takes in two 2D tensors $D$ and $E$. If the dimensions allow it, this function returns the elementwise sum of $D$-shaped $E$, and $D$; else this function returns a 1D tensor that is the concatenation of the two tensors, e.g.: $ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix}$ $ E = \begin{bmatrix}2 & 3 & 0 & 2 \\\end{bmatrix}$ $ Out = \begin{bmatrix}3 & 2 \\-1 & 5 \end{bmatrix}$ $ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix}$ $ E = \begin{bmatrix}2 & 3 & 0 \\\end{bmatrix}$ $ Out = \begin{bmatrix}1 & -1 & -1 & 3 & 2 & 3 & 0 \end{bmatrix}$**Hint:** ```torch.numel()``` is an easy way of finding the number of elements in a tensor
###Code
def functionA(A, B):
"""
This function takes in two 2D tensors A and B and returns the column sum of
  A multiplied by the sum of all the elements of B, i.e., a scalar.
Args:
A: torch.Tensor
B: torch.Tensor
  Returns:
output: torch.Tensor
The multiplication of the column sum of `A` by the sum of `B`.
"""
################################################
## TODO for students: complete functionA
raise NotImplementedError("Student exercise: complete function A")
################################################
# TODO multiplication the sum of the tensors
output = ...
return output
def functionB(C):
"""
This function takes in a square matrix C and returns a 2D tensor consisting of
a flattened C with the index of each element appended to this tensor in the
row dimension.
Args:
C: torch.Tensor
  Returns:
output: torch.Tensor
Concatenated tensor.
"""
################################################
## TODO for students: complete functionB
raise NotImplementedError("Student exercise: complete function B")
################################################
# TODO flatten the tensor C
C = ...
# TODO create the idx tensor to be concatenated to C
idx_tensor = ...
# TODO concatenate the two tensors
output = ...
return output
def functionC(D, E):
"""
This function takes in two 2D tensors D and E . If the dimensions allow it,
this function returns the elementwise sum of D-shaped E, and D;
else this function returns a 1D tensor that is the concatenation of the
two tensors.
Args:
D: torch.Tensor
E: torch.Tensor
  Returns:
output: torch.Tensor
Concatenated tensor.
"""
################################################
  ## TODO for students: complete functionC
raise NotImplementedError("Student exercise: complete function C")
################################################
# TODO check we can reshape E into the shape of D
if ...:
# TODO reshape E into the shape of D
E = ...
# TODO sum the two tensors
output = ...
else:
# TODO flatten both tensors
D = ...
E = ...
# TODO concatenate the two tensors in the correct dimension
output = ...
return output
## Implement the functions above and then uncomment the following lines to test your code
# print(functionA(torch.tensor([[1, 1], [1, 1]]), torch.tensor([[1, 2, 3], [1, 2, 3]])))
# print(functionB(torch.tensor([[2, 3], [-1, 10]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0, 2]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0]])))
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_524e1dab.py) ```tensor([24, 24])tensor([[ 0, 2], [ 1, 3], [ 2, -1], [ 3, 10]])tensor([[ 3, 2], [-1, 5]])tensor([[ 1, -1, -1, 3, 2, 3, 0]])``` Section 2.4: GPUs
###Code
# @title Video 6: GPU vs CPU
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1nM4y1K7qx", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"9Mc9GFUtILY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
By default, when we create a tensor it will *not* live on the GPU!
###Code
x = torch.randn(10)
print(x.device)
###Output
_____no_output_____
###Markdown
When using Colab notebooks, you will not have access to a GPU by default. In order to start using GPUs we need to request one. We can do this by going to the Runtime tab at the top of the page. By following Runtime -> Change runtime type and selecting "GPU" from the Hardware Accelerator dropdown list, we can start playing with sending tensors to GPUs. Once you have done this your runtime will restart and you will need to rerun the first setup cell to reimport PyTorch. Then proceed to the next cell. (For more information on the GPU usage policy, see the appendix.) **Now we have a GPU** The cell below should return True.
###Code
print(torch.cuda.is_available())
###Output
_____no_output_____
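###Markdown
If a GPU is available, a quick way (our own sketch, not part of the original tutorial) to check what you were allocated:
###Code
if torch.cuda.is_available():
  print("Number of GPUs:", torch.cuda.device_count())
  print("GPU name:", torch.cuda.get_device_name(0))
###Output
_____no_output_____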
###Markdown
CUDA is an API developed by Nvidia for interfacing with GPUs. PyTorch provides us with a layer of abstraction and allows us to launch CUDA kernels using pure Python. In short, we get the power of parallelising our tensor computations on GPUs, whilst only writing (relatively) simple Python! Let's make some CUDA tensors!
###Code
def set_device():
device = "cuda" if torch.cuda.is_available() else "cpu"
if device != "cuda":
print("WARNING: For this notebook to perform best, "
"if possible, in the menu under `Runtime` -> "
"`Change runtime type.` select `GPU` ")
else:
print("GPU is enabled in this notebook.")
return device
# common device agnostic way of writing code that can run on cpu OR gpu
# that we provide for you in each of the tutorials
DEVICE = set_device()
# we can specify a device when we first create our tensor
x = torch.randn(2, 2, device=DEVICE)
print(x.dtype)
print(x.device)
# we can also use the .to() method to change the device a tensor lives on
y = torch.randn(2, 2)
print(f"y before calling to() | device: {y.device} | dtype: {y.type()}")
y = y.to(DEVICE)
print(f"y after calling to() | device: {y.device} | dtype: {y.type()}")
###Output
_____no_output_____
###Markdown
**Operations between cpu tensors and cuda tensors**Note that the type of the tensor changed after calling ```.to()```. What happens if we try to perform operations on tensors that live on different devices?
###Code
x = torch.tensor([0, 1, 2], device=DEVICE)
y = torch.tensor([3, 4, 5], device="cpu")
# Uncomment the following line and run this cell
# z = x + y
###Output
_____no_output_____
###Markdown
We cannot combine cuda tensors and cpu tensors in this fashion. If we want to compute an operation that combines tensors on different devices, we need to move them first! We can use the ```.to()``` method as before, or the ```.cpu()``` and ```.cuda()``` methods. Generally in this course all deep learning is done on the GPU, while most other computation (plotting, analysis, etc.) is done on the CPU, so sometimes we have to pass things back and forth; you'll see us call these methods quite often.
###Code
x = torch.tensor([0, 1, 2], device=DEVICE)
y = torch.tensor([3, 4, 5], device="cpu")
z = torch.tensor([6, 7, 8], device=DEVICE)
# moving to cpu
x = x.to("cpu") # alternatively, you can use x = x.cpu()
print(x + y)
# moving to gpu
y = y.to(DEVICE) # alternatively, you can use y = y.cuda()
print(y + z)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.4: Just how much faster are GPUs?Below is a simple function. Complete the second function, such that it performs the same operations as the first, but entirely on the GPU.
###Code
dim = 10000
iterations = 1
def simpleFun(dim, device='cpu'):
"""
Args:
    dim: integer, size of the square tensors
    device: "cpu" or "cuda" (accepted for API compatibility; this version stays on the CPU)
Returns:
Nothing.
"""
x = torch.rand(dim, dim)
y = torch.rand_like(x)
z = 2*torch.ones(dim, dim)
x = x * y
x = x @ z
# garbage collection
del x
del y
del z
def simpleFunGPU(dim, device):
"""
Args:
dim: integer
device: "cpu" or "cuda"
Returns:
Nothing.
"""
###############################################
## TODO for students: recreate the above function, but
## ensure all computation happens on the GPU
raise NotImplementedError("Student exercise: fill in the missing code to create the tensors")
  # create the tensors directly on the requested device
  x = ...
  y = ...
  z = ...
  # repeat the computations from `simpleFun` (elementwise product, then matrix product)
  x = ...
  x = ...
del x
del y
del z
## TODO: Implement the function above and uncomment the following lines to test your code
# timeFun(simpleFun, dim=dim, iterations=iterations, device=DEVICE)
# timeFun(simpleFunGPU, dim=dim, iterations=iterations, device=DEVICE)
# to remove solution
def simpleFunGPU(dim, device):
"""
Args:
dim: integer
device: "cpu" or "cuda"
Returns:
Nothing.
"""
x = torch.rand(dim,dim).to(device)
y = torch.rand_like(x).to(device)
z = 2*torch.ones(dim,dim).to(device)
x = x * y
x = x @ z
del x
del y
del z
## TODO: Implement the function above and uncomment the following lines to test your code
timeFun(f=simpleFun, dim=dim, iterations=iterations, device=DEVICE)
timeFun(f=simpleFunGPU, dim=dim, iterations=iterations, device=DEVICE)
###Output
_____no_output_____
###Markdown
**Discuss!**Try reducing the dimensions of the tensors and increasing the number of iterations. You can get to a point where the CPU-only function is faster than the GPU function. Why might this be? Section 2.5: Datasets and Dataloaders
###Code
# @title Video 7: Getting Data
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1744y127SQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"LSkjPM1gFu0", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
When training neural network models you will be working with large amounts of data. Fortunately, PyTorch offers some great tools that help you organize and manipulate your data samples.
###Code
# Import dataset and dataloaders related packages
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
from torchvision.transforms import Compose, Grayscale
###Output
_____no_output_____
###Markdown
**Datasets**The `torchvision` package gives you easy access to many of the publicly available datasets. Let's load the [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset, which contains color images of 10 different classes, like vehicles and animals.Creating an object of type `datasets.CIFAR10` will automatically download and load all images from the dataset. The resulting data structure can be treated as a list containing data samples and their corresponding labels.
###Code
# Download and load the images from the CIFAR10 dataset
cifar10_data = datasets.CIFAR10(
root="data", # path where the images will be stored
download=True, # all images should be downloaded
transform=ToTensor() # transform the images to tensors
)
# Print the number of samples in the loaded dataset
print(f"Number of samples:{len(cifar10_data)}")
print(f"Class names:{cifar10_data.classes}")
###Output
_____no_output_____
###Markdown
We have 50000 samples loaded. Now let's take a look at one of them in detail. Each sample consists of an image and its corresponding label.
###Code
# Choose a random sample
random.seed(2021)
image, label = cifar10_data[random.randint(0, len(cifar10_data) - 1)]
print('Label:', cifar10_data.classes[label])
print('Image size:', image.shape)
###Output
_____no_output_____
###Markdown
Color images are modeled as 3 dimensional tensors. The first dimension corresponds to the channels (C) of the image (in this case we have RGB images). The second dimension is the height (H) of the image and the third is the width (W). We can denote this image format as C × H × W. Coding Exercise 2.5: Display an image from the datasetLet's try to display the image using `matplotlib`. The code below will not work, because `imshow` expects to have the image in a different format - $H \times W \times C$. You need to reorder the dimensions of the tensor using the `permute` method of the tensor. PyTorch `torch.permute(*dims)` rearranges the original tensor according to the desired ordering and returns a new multidimensional rotated tensor. The size of the returned tensor remains the same as that of the original.
**Code hint:**
```python
# create a tensor of size 2 x 4
input_var = torch.randn(2, 4)
# print its size and the tensor
print(input_var.size())
print(input_var)
# dimensions permuted
input_var = input_var.permute(1, 0)
# print its size and the permuted tensor
print(input_var.size())
print(input_var)
```
###Code
# TODO: Uncomment the following line to see the error that arises from the current image format
# plt.imshow(image)
# TODO: Comment the above line and fix this code by reordering the tensor dimensions
# plt.imshow(image.permute(...))
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_69b74721.py)*Example output:*
###Code
#@title Video 8: Train and Test
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1rV411H7s5", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JokSIuPs-ys", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
**Training and Test Datasets**When loading a dataset, you can specify if you want to load the training or the test samples using the `train` argument. We can load the training and test datasets separately. For simplicity, today we will not use both datasets separately, but this topic will be addressed in the next days.
###Code
# Load the training samples
training_data = datasets.CIFAR10(
root="data",
train=True,
download=True,
transform=ToTensor()
)
# Load the test samples
test_data = datasets.CIFAR10(
root="data",
train=False,
download=True,
transform=ToTensor()
)
# @title Video 9: Data Augmentation - Transformations
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV19B4y1N77t", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"sjegA9OBUPw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
**Dataloader**Another important concept is the `Dataloader`. It is a wrapper around the `Dataset` that splits it into minibatches (important for training the neural network) and makes the data iterable. The `shuffle` argument is used to shuffle the order of the samples across the minibatches.
###Code
# Create dataloaders with
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
*Reproducibility:* DataLoader will reseed workers following the "Randomness in multi-process data loading" algorithm. Use `worker_init_fn()` and a `generator` to preserve reproducibility:
```python
def seed_worker(worker_id):
  worker_seed = torch.initial_seed() % 2**32
  numpy.random.seed(worker_seed)
  random.seed(worker_seed)

g_seed = torch.Generator()
g_seed.manual_seed(my_seed)

DataLoader(
    train_dataset,
    batch_size=batch_size,
    num_workers=num_workers,
    worker_init_fn=seed_worker,
    generator=g_seed
    )
```
We can now query the next batch from the data loader and inspect it. For this we need to convert the dataloader object to a Python iterator using the function `iter` and then we can query the next batch using the function `next`. We can now see that we have a 4D tensor. This is because we have 64 images in the batch ($B$) and each image has 3 dimensions: channels ($C$), height ($H$) and width ($W$). So, the size of the 4D tensor is $B \times C \times H \times W$.
###Code
# Load the next batch
batch_images, batch_labels = next(iter(train_dataloader))
print('Batch size:', batch_images.shape)
# Display the first image from the batch
plt.imshow(batch_images[0].permute(1, 2, 0))
###Output
_____no_output_____
###Markdown
**Transformations**Another useful feature when loading a dataset is applying transformations on the data - color conversions, normalization, cropping, rotation etc. There are many predefined transformations in the `torchvision.transforms` package and you can also combine them using the `Compose` transform. Check out the [pytorch documentation](https://pytorch.org/vision/stable/transforms.html) for details. Coding Exercise 2.6: Load the CIFAR10 dataset as grayscale imagesThe goal of this exercise is to load the images from the CIFAR10 dataset as grayscale images. Note that we rerun the `set_seed` function to ensure reproducibility.
###Code
def my_data_load():
###############################################
  ## TODO for students: load the CIFAR10 data as grayscale images,
  ## using a transform that converts the images to grayscale tensors
raise NotImplementedError("Student exercise: fill in the missing code to load the data")
###############################################
## TODO Load the CIFAR10 data using a transform that converts the images to grayscale tensors
data = datasets.CIFAR10(...,
transform=...)
# Display a random grayscale image
  image, label = data[random.randint(0, len(data) - 1)]
plt.imshow(image.squeeze(), cmap="gray")
set_seed(seed=2021)
## After implementing the above code, uncomment the following lines to test your code
# my_data_load()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_8a7b1b66.py)*Example output:* --- Section 3: Neural NetworksNow it's time for you to create your first neural network using PyTorch. This section will walk you through the process of:- Creating a simple neural network model- Training the network- Visualizing the results of the network- Tweaking the network
###Code
# @title Video 10: CSV Files
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1xy4y1T7kv", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JrC_UAJWYKU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Section 3.1: Data LoadingFirst we need some sample data to train our network on. You can use the function below to generate an example dataset consisting of 2D points along two interleaving half circles. The data will be stored in a file called `sample_data.csv`. You can inspect the file directly in Colab by going to Files on the left side and opening the CSV file.
###Code
# @title Generate sample data
# @markdown we used `scikit-learn` module
from sklearn.datasets import make_moons
# Create a dataset of 256 points with a little noise
X, y = make_moons(256, noise=0.1)
# Store the data as a Pandas data frame and save it to a CSV file
df = pd.DataFrame(dict(x0=X[:,0], x1=X[:,1], y=y))
df.to_csv('sample_data.csv')
###Output
_____no_output_____
###Markdown
Now we can load the data from the CSV file using the Pandas library. Pandas provides many functions for reading files in various formats. When loading data from a CSV file, we can reference the columns directly by their names.
###Code
# Load the data from the CSV file in a Pandas DataFrame
data = pd.read_csv("sample_data.csv")
# Create a 2D numpy array from the x0 and x1 columns
X_orig = data[["x0", "x1"]].to_numpy()
# Create a 1D numpy array from the y column
y_orig = data["y"].to_numpy()
# Print the sizes of the generated 2D points X and the corresponding labels Y
print(f"Size X:{X_orig.shape}")
print(f"Size y:{y_orig.shape}")
# Visualize the dataset. The color of the points is determined by the labels `y_orig`.
plt.scatter(X_orig[:, 0], X_orig[:, 1], s=40, c=y_orig)
###Output
_____no_output_____
###Markdown
**Prepare Data for PyTorch**Now let's prepare the data in a format suitable for PyTorch - convert everything into tensors.
###Code
# Initialize the device variable
DEVICE = set_device()
# Convert the 2D points to a float32 tensor
X = torch.tensor(X_orig, dtype=torch.float32)
# Upload the tensor to the device
X = X.to(DEVICE)
print(f"Size X:{X.shape}")
# Convert the labels to a long integer tensor
y = torch.from_numpy(y_orig).type(torch.LongTensor)
# Upload the tensor to the device
y = y.to(DEVICE)
print(f"Size y:{y.shape}")
###Output
_____no_output_____
###Markdown
Section 3.2: Create a Simple Neural Network
###Code
# @title Video 11: Generating the Neural Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1fK4y1M74a", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"PwSzRohUvck", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
For this example we want to have a simple neural network consisting of 3 layers:- 1 input layer of size 2 (our points have 2 coordinates)- 1 hidden layer of size 16 (you can play with different numbers here)- 1 output layer of size 2 (we want to have the scores for the two classes)During the course you will deal with different kinds of neural networks. On Day 2 we will focus on linear networks, but you will work with some more complicated architectures in the next days. The example here is meant to demonstrate the process of creating and training a neural network end-to-end.**Programming the Network**PyTorch provides a base class for all neural network modules called [`nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html). You need to inherit from `nn.Module` and implement some important methods:`__init__`In the `__init__` method you need to define the structure of your network. Here you will specify what layers the network will consist of, what activation functions will be used, etc.`forward`All neural network modules need to implement the `forward` method. It specifies the computations the network needs to do when data is passed through it.`predict`This is not an obligatory method of a neural network module, but it is good practice if you want to quickly get the most likely label from the network. It calls the `forward` method and chooses the label with the highest score.`train`This is also not an obligatory method, but it is good practice to have one. The method will be used to train the network parameters and will be implemented later in the notebook.> Note that you can use the `__call__` method of a module directly and it will invoke the `forward` method: `net(x)` does the same as `net.forward(x)`.
###Code
# Inherit from nn.Module - the base class for neural network modules provided by Pytorch
class NaiveNet(nn.Module):
# Define the structure of your network
def __init__(self):
super(NaiveNet, self).__init__()
# The network is defined as a sequence of operations
self.layers = nn.Sequential(
nn.Linear(2, 16), # Transformation from the input to the hidden layer
nn.ReLU(), # Activation function (ReLU) is a non-linearity which is widely used because it reduces computation. The function returns 0 if it receives any
# negative input, but for any positive value x, it returns that value back.
nn.Linear(16, 2), # Transformation from the hidden to the output layer
)
# Specify the computations performed on the data
def forward(self, x):
# Pass the data through the layers
return self.layers(x)
# Choose the most likely label predicted by the network
def predict(self, x):
# Pass the data through the networks
output = self.forward(x)
# Choose the label with the highest score
return torch.argmax(output, 1)
# Train the neural network (will be implemented later)
def train(self, X, y):
pass
###Output
_____no_output_____
###Markdown
**Check that your network works**Create an instance of your model and visualize it
###Code
# Create new NaiveNet and transfer it to the device
model = NaiveNet().to(DEVICE)
# Print the structure of the network
print(model)
###Output
_____no_output_____
###Markdown
Coding Exercise 3.2: Classify some samplesNow let's pass some of the points of our dataset through the network and see if it works. You should not expect the network to actually classify the points correctly, because it has not been trained yet. The goal here is just to get some experience with the data structures that are passed to the forward and predict methods and their results.
###Code
## Get the samples
# X_samples = ...
# print("Sample input:", X_samples)
## Do a forward pass of the network
# output = ...
# print("Network output:", output)
## Predict the label of each point
# y_predicted = ...
# print("Predicted labels:", y_predicted)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_49a61fb7.py) ```Sample input: tensor([[ 0.9066, 0.5052], [-0.2024, 1.1226], [ 1.0685, 0.2809], [ 0.6720, 0.5097], [ 0.8548, 0.5122]], device='cuda:0')Network output: tensor([[-0.3032, -0.5563], [-0.1419, -0.3195], [-0.2879, -0.6030], [-0.2665, -0.4831], [-0.2973, -0.5369]], device='cuda:0', grad_fn=)Predicted labels: tensor([0, 0, 0, 0, 0], device='cuda:0')``` Section 3.3: Train Your Neural Network
###Code
# @title Video 12: Train the Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1v54y1n7CS", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"4MIqnE4XPaA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Now it is time to train your network on your dataset. Don't worry if you don't fully understand everything yet - we will cover training in much more detail in the next days. For now, the goal is just to see your network in action! You will usually implement the `train` method directly when implementing your class `NaiveNet`. Here, we will implement it as a function outside of the class in order to have it in a separate cell.
###Code
# @title Helper function to plot the decision boundary
# Code adapted from this notebook: https://jonchar.net/notebooks/Artificial-Neural-Network-with-Keras/
from pathlib import Path
def plot_decision_boundary(model, X, y, device):
# Transfer the data to the CPU
X = X.cpu().numpy()
y = y.cpu().numpy()
# Check if the frames folder exists and create it if needed
frames_path = Path("frames")
if not frames_path.exists():
frames_path.mkdir()
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
  # Predict the function value for the whole grid
grid_points = np.c_[xx.ravel(), yy.ravel()]
grid_points = torch.from_numpy(grid_points).type(torch.FloatTensor)
Z = model.predict(grid_points.to(device)).cpu().numpy()
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.binary)
# Implement the train function given a training dataset X and corresponding labels y
def train(model, X, y):
# The Cross Entropy Loss is suitable for classification problems
loss_function = nn.CrossEntropyLoss()
# Create an optimizer (Stochastic Gradient Descent) that will be used to train the network
learning_rate = 1e-2
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
# Number of epochs
epochs = 15000
# List of losses for visualization
losses = []
for i in range(epochs):
# Pass the data through the network and compute the loss
# We'll use the whole dataset during the training instead of using batches
    # in order to keep the code simple for now.
y_logits = model.forward(X)
loss = loss_function(y_logits, y)
# Clear the previous gradients and compute the new ones
optimizer.zero_grad()
loss.backward()
# Adapt the weights of the network
optimizer.step()
# Store the loss
losses.append(loss.item())
# Print the results at every 1000th epoch
if i % 1000 == 0:
print(f"Epoch {i} loss is {loss.item()}")
plot_decision_boundary(model, X, y, DEVICE)
plt.savefig('frames/{:05d}.png'.format(i))
return losses
# Create a new network instance a train it
model = NaiveNet().to(DEVICE)
losses = train(model, X, y)
###Output
_____no_output_____
###Markdown
**Plot the loss during training**Plot the loss during the training to see how it reduces and converges.
###Code
plt.plot(np.linspace(1, len(losses), len(losses)), losses)
plt.xlabel("Epoch")
plt.ylabel("Loss")
# @title Visualize the training process
# @markdown ### Execute this cell!
!pip install imageio --quiet
!pip install pathlib --quiet
import imageio
from IPython.core.interactiveshell import InteractiveShell
from IPython.display import Image, display
from pathlib import Path
InteractiveShell.ast_node_interactivity = "all"
# Make a list with all images
images = []
for i in range(10):
filename = "frames/0"+str(i)+"000.png"
images.append(imageio.imread(filename))
# Save the gif
imageio.mimsave('frames/movie.gif', images)
gifPath = Path("frames/movie.gif")
with open(gifPath,'rb') as f:
display(Image(data=f.read(), format='png'))
# @title Video 13: Play with it
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Cq4y1W7BH", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"_GGkapdOdSY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Exercise 3.3: Tweak your NetworkYou can now play around with the network a little bit to get a feeling for what the different parameters are doing. Here are some ideas for what you could try:- Increase or decrease the number of epochs for training- Increase or decrease the size of the hidden layer- Add one additional hidden layerCan you get the network to better fit the data?
###Code
# @title Video 14: XOR Widget
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1mB4y1N7QS", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"oTr1nE2rCWg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
The exclusive OR (XOR) logical operation gives a true (`1`) output when the number of true inputs is odd. That is, a true output results if one, and only one, of the inputs to the gate is true. If both inputs are false (`0`) or both are true (`1`), a false output results. Mathematically speaking, XOR represents the inequality function, i.e., the output is true if the inputs are not alike; otherwise, the output is false. In case of two inputs ($X$ and $Y$) the following truth table applies:\begin{array}{ccc}X & Y & \text{XOR} \\\hline0 & 0 & 0 \\0 & 1 & 1 \\1 & 0 & 1 \\1 & 1 & 0 \\\end{array}Here, with `0`, we denote `False`, and with `1` we denote `True` in boolean terms. Interactive Demo 3.3: Solving XORHere we use a famous open-source visualization widget developed by the TensorFlow team, available [here](https://github.com/tensorflow/playground).* Play with the widget and observe that, without any hidden layers, you cannot solve the continuous XOR dataset.* Now add one hidden layer with three units, play with the widget, and set the weights by hand to solve this dataset perfectly.For the second part, you should set the weights by clicking on the connections and either typing the value or using the up and down keys to change it by one increment. You can do the same for the biases by clicking on the tiny square at each neuron's bottom left.Even though there are infinitely many solutions, a neat solution when $f(x)$ is ReLU is: \begin{equation} y = f(x_1)+f(x_2)-f(x_1+x_2)\end{equation}Try to set the weights and biases to implement this function after you have played enough :)
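As a quick sanity check (a small sketch of ours, not part of the widget), you can evaluate this formula on the four sign combinations of the inputs; it is positive exactly when the two inputs have opposite signs, which is the continuous analogue of XOR used by the playground dataset.
###Code
f = torch.relu
for x1 in (-1., 1.):
  for x2 in (-1., 1.):
    # y = f(x1) + f(x2) - f(x1 + x2) with f = ReLU
    y = f(torch.tensor(x1)) + f(torch.tensor(x2)) - f(torch.tensor(x1 + x2))
    print(f"x1={x1:+.0f}, x2={x2:+.0f} -> y={y.item():.0f}")
###Output
_____no_output_____
###Markdown
Now play with the widget below.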
###Code
# @markdown ###Play with the parameters to solve XOR
from IPython.display import HTML
HTML('<iframe width="1020" height="660" src="https://playground.arashash.com/#activation=relu&batchSize=10&dataset=xor®Dataset=reg-plane&learningRate=0.03®ularizationRate=0&noise=0&networkShape=&seed=0.91390&showTestData=false&discretize=false&percTrainData=90&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false" allowfullscreen></iframe>')
# @markdown Do you think we can solve the discrete XOR (only 4 possibilities) with only 2 hidden units?
w1_min_xor = 'Select' #@param ['Select', 'Yes', 'No']
if w1_min_xor == 'No':
print("Correct!")
else:
print("How about giving it another try?")
###Output
_____no_output_____
###Markdown
--- Section 4: EthicsLet us watch the Coded Bias movie together and discuss it.
###Code
# @title Video 15: Ethics
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Hw41197oB", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Kt6JLi3rUFU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
--- Bonus
###Code
# @title Video 16: Be a group
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1j44y1272h", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Sfp6--d_H1A", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Video 17: It's a wrap!
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1e44y127ti", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JwTn7ej2dq8", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Video 18: Syllabus
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1iB4y1N7uQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"cDvAqG_hAvQ", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Tutorial 1: PyTorch**Week 1, Day 1: Basics and PyTorch****By Neuromatch Academy**__Content creators:__ Shubh Pachchigar, Vladimir Haltakov, Matthew Sargent, Konrad Kording__Content reviewers:__ Kelson Shilling-Scrivo, Deepak Raya__Content editors:__ Anoop Kulkarni__Production editors:__ Arush Tagade, Spiros Chavlis **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial ObjectivesWe have a few specific objectives for this tutorial:* Learn about PyTorch and tensors* Tensor Manipulations* Data Loading* GPUs and Cuda Tensors* Train NaiveNet* Get to know your pod* Start thinking about the course as a whole
###Code
#@markdown Tutorial slides
# you should link the slides for all tutorial videos here (we will store pdfs on osf)
from IPython.display import HTML
HTML('<iframe src="https://docs.google.com/presentation/d/1x_619dh5wCJbPiG3Ix2TFLavWstWYC1JFtAeMGmmUBI/embed?start=false&loop=false&delayms=3000" frameborder="0" width="960" height="569" allowfullscreen="true" mozallowfullscreen="true" webkitallowfullscreen="true"></iframe>')
###Output
_____no_output_____
###Markdown
--- Setup Throughout your Neuromatch tutorials, most (probably all!) notebooks contain setup cells. These cells will import the required Python packages (e.g., PyTorch, NumPy), set global or environment variables, and load in helper functions for things like plotting.Be sure to run all of the cells in the setup section. Feel free to expand them and have a look at what you are loading in, but you should be able to fulfill the learning objectives of every tutorial without having to look at these cells.If you start building your own projects on this code base, we highly recommend looking at these cells in more detail.
###Code
#@title Install dependencies
!pip install pandas --quiet
!pip install -U scikit-learn --quiet
# Imports
import time
import torch
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from torch import nn
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
#@title Figure Settings
import ipywidgets as widgets
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/content-creation/main/nma.mplstyle")
#@title Helper Functions
def checkExercise1(A: torch.Tensor, B: torch.Tensor,
C:torch.Tensor, D:torch.Tensor):
errors = []
# TODO better errors
if not torch.equal(A, torch.ones(20, 21)):
errors.append("A is not a 20 by 21 tensor of ones ")
if not np.array_equal(B.numpy(), np.vander([1, 2, 3], 4)):
errors.append("B is not a tensor containing the elements of Z ")
if C.shape != (20, 21):
errors.append("C is not the correct shape ")
if not torch.equal(D, torch.arange(4, 41, step=2)):
errors.append("D does not contain the correct elements")
if errors == []:
print("All correct!")
else:
print(errors)
def timeFun(f, iterations):
iterations = iterations
t_total = 0
for _ in range(iterations):
start = time.time()
f()
end = time.time()
t_total += end - start
print(f"time taken for {iterations} iterations of {f.__name__}: {t_total}")
###Output
_____no_output_____
###Markdown
**Scratch Code Cells**If you want to quickly try out something or take a look at the data you can use scratch code cells. They allow you to run Python code, but will not mess up the structure of your notebook.To open a new scratch cell go to *Insert* → *Scratch code cell*. Section 1: Welcome to Neuromatch Deep learning course
###Code
#@title Video 1: Welcome and History
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"ca21SNqt78I", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
This will be an intensive 3-week adventure. We will all learn Deep Learning. In a group. Groups need standards. Read our [Code of Conduct](https://docs.google.com/document/d/1eHKIkaNbAlbx_92tLQelXnicKXEcvFzlyzzeWjEtifM/edit?usp=sharing). **Describe what you hope to get out of this course in about 100 words.**
###Code
#@title Video 2: Syllabus
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"cDvAqG_hAvQ", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Meet our lecturers:Week 1: the building blocks* [Konrad Kording](https://kordinglab.com)* [Andrew Saxe](https://www.saxelab.org/)* [Surya Ganguli](https://ganguli-gang.stanford.edu/)* [Ioannis Mitliagkas](http://mitliagkas.github.io/)* [Lyle Ungar](https://www.cis.upenn.edu/~ungar/)Week 2: making things work* [Alona Fyshe](https://webdocs.cs.ualberta.ca/~alona/)* [Alexander Ecker](https://eckerlab.org/)* [James Evans](https://sociology.uchicago.edu/directory/james-evans)* [He He](https://hhexiy.github.io/)* [Vikash Gilja](https://tnel.ucsd.edu/bio) and [Akash Srivastava](https://akashgit.github.io/)Week 3: more magic* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)* [Jane Wang](http://www.janexwang.com/) and [Feryal Behbahani](https://feryal.github.io/)* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)* [Josh Vogelstein](https://jovo.me/) and [Vincenzo Lamonaco](https://www.vincenzolomonaco.com/)Now, go to the visualization of ICLR papers. Read a few abstracts. Look at the various clusters. Where do you see yourself in this map? Section 2: The Basics of PyTorch PyTorch is a Python-based scientific computing package targeted at two sets ofaudiences:- A replacement for NumPy to use the power of GPUs- A deep learning platform that provides significant flexibility and speedAt its core, PyTorch provides a few key features:- A multidimensional [Tensor](https://pytorch.org/docs/stable/tensors.html) object, similar to [NumPy Array](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html) but with GPU acceleration.- An optimized **autograd** engine for automatically computing derivatives.- A clean, modular API for building and deploying **deep learning models**.You can find more information about PyTorch in the appendix. Section 2.1: Creating Tensors
###Code
#@title Video 3: Making Tensors
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"jGKd_4tPGrw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
There are various ways of creating tensors. **Construct tensors directly:**---
###Code
# we can construct a tensor directly from some common python iterables,
# such as lists and tuples; nested iterables can also be handled as long as the
# dimensions make sense
# tensor from a list
a = torch.tensor([0,1,2])
#tensor from a tuple of tuples
b = ((1.0, 1.1), (1.2, 1.3))
b = torch.tensor(b)
# tensor from a numpy array
c = np.ones([2, 3])
c = torch.tensor(c)
print("Tensor a:", a)
print("Tensor b:", b)
print("Tensor c:", c)
###Output
_____no_output_____
###Markdown
**Some common tensor constructors:**---
###Code
# the numerical arguments we pass to these constructors
# determine the shape of the output tensor
x = torch.ones(5, 3)
y = torch.zeros(2)
z = torch.empty(1, 1,5)
print("Tensor x:", x)
print("Tensor y:", y)
print("Tensor z:", z)
###Output
_____no_output_____
###Markdown
Notice that ```.empty()``` does not return zeros, but seemingly random small numbers. Unlike ```.zeros()```, which initialises the elements of the tensor with zeros, ```.empty()``` just allocates the memory. It is hence faster if you are looking to just create a tensor. **Creating random tensors and tensors like other tensors:**---
###Code
# there are also constructors for random numbers
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
# there are also constructors that allow us to construct
# a tensor according to the above constructors, but with
# dimensions equal to another tensor
c = torch.zeros_like(a)
d = torch.rand_like(c)
print("Tensor a: ", a)
print("Tensor b: ", b)
print("Tensor c: ", c)
print("Tensor d: ", d)
###Output
_____no_output_____
###Markdown
*Reproducibility*:
- PyTorch random number generator: You can use `torch.manual_seed()` to seed the RNG for all devices (both CPU and CUDA)
```python
import torch
torch.manual_seed(0)
```
- For custom operators, you might need to set the Python seed as well:
```python
import random
random.seed(0)
```
- Random number generators in other libraries:
```python
import numpy as np
np.random.seed(0)
```
Here, we define for you a function called `set_seed` that does the job for you!
###Code
def set_seed(seed=None, seed_torch=True):
"""
Function that controls randomness. NumPy and random modules must be imported.
Args:
seed : Integer
A non-negative integer that defines the random state. Default is `None`.
seed_torch : Boolean
If `True` sets the random seed for pytorch tensors, so pytorch module
      must be imported. Default is `True`.
Returns:
Nothing.
"""
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
###Output
_____no_output_____
###Markdown
Now, let's use the `set_seed` function in the previous example. Execute the cell multiple times to verify that the numbers printed are always the same.
###Code
def simplefun(seed=True, my_seed=None):
if seed:
set_seed(seed=my_seed)
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
print("Tensor a: ", a)
print("Tensor b: ", b)
simplefun(seed=True, my_seed=0) # Turn `seed` to `False` or change `my_seed`
###Output
_____no_output_____
###Markdown
**Numpy-like number ranges:**---The ```.arange()``` and ```.linspace()``` behave how you would expect them to if you are familiar with numpy.
###Code
a = torch.arange(0, 10, step=1)
b = np.arange(0, 10, step=1)
c = torch.linspace(0, 5, steps=11)
d = np.linspace(0, 5, num=11)
print(f"Tensor a: {a}\n")
print(f"Numpy array b: {b}\n")
print(f"Tensor c: {c}\n")
print(f"Numpy array d: {d}\n")
###Output
_____no_output_____
###Markdown
Coding Exercise 2.1: Creating TensorsBelow you will find some incomplete code. Fill in the missing code to construct the specified tensors.We want the tensors: $A:$ 20 by 21 tensor consisting of ones$B:$ a tensor with elements equal to the elements of numpy array $Z$$C:$ a tensor with the same number of elements as $A$ but with values $\sim U(0,1)$$D:$ a 1D tensor containing the even numbers between 4 and 40 inclusive.
###Code
def tensor_creation(Z):
"""A function that creates various tensors.
Args:
Z (numpy.ndarray): An array of shape
Returns:
A : 20 by 21 tensor consisting of ones
B : a tensor with elements equal to the elements of numpy array Z
C : a tensor with the same number of elements as A but with values ∼U(0,1)
D : a 1D tensor containing the even numbers between 4 and 40 inclusive.
"""
#################################################
## TODO for students: fill in the missing code
## from the first expression
raise NotImplementedError("Student exercise: say what they should have done")
#################################################
A = ...
B = ...
C = ...
D = ...
return A, B, C, D
# numpy array to copy later
Z = np.vander([1, 2, 3], 4)
# Uncomment below to check your function!
# A, B, C, D = tensor_creation(Z)
# checkExercise1(A, B, C, D)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_14ccf2de.py) ```All correct!``` Section 2.2: Operations in PyTorch**Tensor-Tensor operations**We can perform operations on tensors using methods under ```torch.```
###Code
#@title Video 4: Tensor Operators
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"R1R8VoYXBVA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
**Tensor-Tensor operations**We can perform operations on tensors using methods under ```torch.```
###Code
a = torch.ones(5, 3)
b = torch.rand(5, 3)
c = torch.empty(5, 3)
d = torch.empty(5, 3)
torch.add(a, b, out=c)
torch.multiply(a, b, out=d)
print(c)
print(d)
###Output
_____no_output_____
###Markdown
However, in PyTorch most common Python operators are overridden.The common standard arithmetic operators (+, -, *, /, and **) have all been lifted to elementwise operations
###Code
x = torch.tensor([1.0, 2, 4, 8])
y = torch.tensor([2, 2, 2, 2])
x + y, x - y, x * y, x / y, x**y # The ** operator is exponentiation
###Output
_____no_output_____
###Markdown
**Tensor Methods** Tensors also have a number of common arithmetic operations built in. A full list of **all** methods can be found in the appendix (there are a lot!) All of these operations should have similar syntax to their numpy equivalents.(Feel free to skip if you already know this!)
###Code
x = torch.rand(3, 3)
print(x)
print("\n")
# sum() - note the axis is the axis you move across when summing
print(f"Sum of every element of x: {x.sum()}")
print(f"Sum of the columns of x: {x.sum(axis=0)}")
print(f"Sum of the rows of x: {x.sum(axis=1)}")
print("\n")
print(f"Mean value of all elements of x {x.mean()}")
print(f"Mean values of the columns of x {x.mean(axis=0)}")
print(f"Mean values of the rows of x {x.mean(axis=1)}")
###Output
_____no_output_____
###Markdown
**Matrix Operations**The ```@``` symbol is overridden to represent matrix multiplication. You can also use ```torch.matmul()``` to multiply tensors. For dot products, you can use ```torch.dot()```, or manipulate the axes of your tensors and do matrix multiplication (we will cover that in the next section). Transposes of 2D tensors are obtained using ```torch.t()``` or ```Tensor.T```. Note the lack of brackets for ```Tensor.T``` - it is an attribute, not a method. Coding Exercise 2.2: Simple tensor operationsBelow are two expressions involving operations on matrices. $$ \textbf{A} = \begin{bmatrix}2 &4 \\5 & 7 \end{bmatrix} \begin{bmatrix} 1 &1 \\2 & 3\end{bmatrix} + \begin{bmatrix}10 & 10 \\ 12 & 1 \end{bmatrix} $$and$$ b = \begin{bmatrix} 3 \\ 5 \\ 7\end{bmatrix} \cdot \begin{bmatrix} 2 \\ 4 \\ 8\end{bmatrix}$$The code block below that computes these expressions using PyTorch is incomplete - fill in the missing lines.
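Before filling in the exercise cell below, here is a small sketch (our own illustration, with toy values that are not part of the exercise) of the operators just described:
```python
import torch

M = torch.tensor([[1., 2.], [3., 4.]])
N = torch.tensor([[0., 1.], [1., 0.]])
v = torch.tensor([1., 2., 3.])
w = torch.tensor([4., 5., 6.])

print(M @ N)               # matrix multiplication via the @ operator
print(torch.matmul(M, N))  # the same result with torch.matmul()
print(torch.dot(v, w))     # dot product of two 1D tensors -> tensor(32.)
print(M.T)                 # transpose accessed as an attribute
print(torch.t(M))          # transpose via the torch.t() function
```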
###Code
# Computing expression 1:
def simple_operations(a1):
################################################
## TODO for students: create the a2 and a3 matrices
## from the first expression
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
################################################
a2 = ...
a3 = ...
answer = ...
return answer
# init our tensors
a1 = torch.tensor([[2, 4], [5, 7]])
# A = simple_operations(a1)
# print(A)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_cfc503e1.py) ```tensor([[20, 24], [31, 27]])```
###Code
# Computing expression 2:
def dot_product():
###############################################
## TODO for students: create the b1 and b2 matrices
## from the second expression
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
###############################################
b1 = ...
b2 = ...
product = ...
return product
## Uncomment below to test your function
# b = dot_product()
# print(b)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_2fe14726.py) ```tensor(82)``` Section 2.3 Manipulating Tensors in Pytorch
###Code
#@title Video 5: Tensor Indexing
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"0d0KSJ3lJbg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
**Indexing**Just as in numpy, elements in a tensor can be accessed by index. As in any numpy array, the first element has index 0 and ranges are specified to include the first element but exclude the last. We can access elements according to their relative position to the end of the list by using negative indices.For example, [-1] selects the last element; [1:3] selects the second and the third elements, and [:-2] will select all elements excluding the last and second-to-last elements.
###Code
x = torch.arange(0, 10)
print(x)
print(x[-1])
print(x[1:3])
print(x[:-2])
###Output
_____no_output_____
###Markdown
When we have multidimensional tensors, indexing rules work the same way as numpy.
###Code
# make a 5D tensor
x = torch.rand(1, 2, 3, 4, 5)
print(f" shape of x[0]:{x[0].shape}")
print(f" shape of x[0][0]:{x[0][0].shape}")
print(f" shape of x[0][0][0]:{x[0][0][0].shape}")
###Output
_____no_output_____
###Markdown
**Flatten and reshape**There are various methods for reshaping tensors. It is common to have to express 2D data in 1D format. Similarly, it is also common to have to reshape a 1D tensor into a 2D tensor. We can achieve this with the ```.flatten()``` and ```.reshape()``` methods.
###Code
z = torch.arange(12).reshape(6, 2)
print(f"Original z: \n {z}")
# 2D -> 1D
z = z.flatten()
print(f"Flattened z: \n {z}")
# and back to 2D
z = z.reshape(3, 4)
print(f"Reshaped (3x4) z: \n {z}")
###Output
_____no_output_____
###Markdown
You will also see the ```.view()``` method used a lot to reshape tensors. There is a subtle difference between ```.view()``` and ```.reshape()```, though for now we will just use ```.reshape()```. The documentation can be found in the appendix. **Squeezing tensors**When processing batches of data, you will quite often be left with singleton dimensions. e.g. [1,10] or [256, 1, 3]. This dimension can quite easily mess up your matrix operations if you don't plan on it being there...In order to compress tensors along their singleton dimensions we can use the ```.squeeze()``` method. We can use the ```.unsqueeze()``` method to do the opposite.
###Code
x = torch.randn(1, 10)
# printing the zeroth element of the tensor will not give us the first number!
print(x.shape)
print(f"x[0]: {x[0]}")
###Output
_____no_output_____
###Markdown
Because of that pesky singleton dimension, x[0] gave us the first row instead!
###Code
# lets get rid of that singleton dimension and see what happens now
x = x.squeeze(0)
print(x.shape)
print(f"x[0]: {x[0]}")
# adding singleton dimensions works a similar way, and is often used when tensors
# being added need same number of dimensions
y = torch.randn(5, 5)
print(f"shape of y: {y.shape}")
# lets insert a singleton dimension
y = y.unsqueeze(1)
print(f"shape of y: {y.shape}")
###Output
_____no_output_____
###Markdown
**Permutation**Sometimes our dimensions will be in the wrong order! For example, we may be dealing with RGB images with dim [3x48x64], but our pipeline expects the colour dimension to be the last dimension i.e. [48x64x3]. To get around this we can use ```.permute()```
###Code
# `x` has dimensions [color,image_height,image_width]
x = torch.rand(3, 48, 64)
# we want to permute our tensor to be [ image_height , image_width , color ]
x = x.permute(1, 2, 0)
# permute(1,2,0) means:
# the 0th dim of my new tensor = the 1st dim of my old tensor
# the 1st dim of my new tensor = the 2nd
# the 2nd dim of my new tensor = the 0th
print(x.shape)
###Output
_____no_output_____
###Markdown
**Concatenation** In this example, we concatenate two matrices along rows (axis 0, the first element of the shape) vs. columns (axis 1, the second element of the shape). We can see that the first output tensor’s axis-0 length ( 6 ) is the sum of the two input tensors’ axis-0 lengths ( 3+3 ); while the second output tensor’s axis-1 length ( 8 ) is the sum of the two input tensors’ axis-1 lengths ( 4+4 ).
###Code
# Create two tensors of the same shape
x = torch.arange(12, dtype=torch.float32).reshape((3, 4))
y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])
#concatenate them along rows
cat_rows = torch.cat((x, y), dim=0)
# concatenate along columns
cat_cols = torch.cat((x, y), dim=1)
# printing outputs
print('Concatenated by rows: shape{} \n {}'.format(list(cat_rows.shape), cat_rows))
print('\n Concatenated by columns: shape{} \n {}'.format(list(cat_cols.shape), cat_cols))
###Output
_____no_output_____
###Markdown
**Conversion to Other Python Objects**Converting to a NumPy array, or vice versa, is easy. The converted result does not share memory. This minor inconvenience is actually quite important: when you perform operations on the CPU or on GPUs, you do not want to halt computation, waiting to see whether the NumPy package of Python might want to be doing something else with the same chunk of memory.When converting to a numpy array, the information being tracked by the tensor will be lost, i.e. the computational graph. This will be covered in detail when you are introduced to autograd tomorrow!
###Code
x = torch.randn(5)
print(f"x: {x} | x type: {x.type()}")
y = x.numpy()
print(f"y: {y} | y type: {type(y)}")
z = torch.tensor(y)
print(f"z: {z} | z type: {z.type()}")
###Output
_____no_output_____
###Markdown
To convert a size-1 tensor to a Python scalar, we can invoke the item function or Python’s built-in functions.
###Code
a = torch.tensor([3.5])
a, a.item(), float(a), int(a)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.3: Manipulating TensorsUsing a combination of the methods discussed above, complete the functions below. **Function A** This function takes in two 2D tensors $A$ and $B$ and returns the column sum of $A$ multiplied by the sum of all the elements of $B$, i.e. a scalar. e.g: $ A = \begin{bmatrix}1 & 1 \\1 & 1 \end{bmatrix}$ $ B = \begin{bmatrix}1 & 2 & 3\\1 & 2 & 3 \end{bmatrix}$$ Out = 12 * \begin{bmatrix}2 & 2\\\end{bmatrix} = \begin{bmatrix}24 & 24\\\end{bmatrix}$**Function B** This function takes in a square matrix $C$ and returns a 2D tensor consisting of a flattened $C$ with the index of each element appended to this tensor in the row dimension. e.g: $ C = \begin{bmatrix}2 & 3 \\-1 & 10 \end{bmatrix}$ $ Out = \begin{bmatrix}0 & 2 \\1 & 3 \\2 & -1 \\3 & 10\end{bmatrix}$**Hint:** pay close attention to singleton dimensions**Function C** This function takes in two 2D tensors $D$ and $E$. If the dimensions allow it, this function returns the elementwise sum of $E$ reshaped into the dimensions of $D$, and $D$; else this function returns a 1D tensor that is the concatenation of the two tensors. e.g. $ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix}$ $ E = \begin{bmatrix}2 & 3 & 0 & 2 \\\end{bmatrix}$ $ Out = \begin{bmatrix}3 & 2 \\-1 & 5 \end{bmatrix}$ $ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix}$ $ E = \begin{bmatrix}2 & 3 & 0 \\\end{bmatrix}$ $ Out = \begin{bmatrix}1 & -1 & -1 & 3 & 2 & 3 & 0 \end{bmatrix}$**Hint:** ```torch.numel()``` is an easy way of finding the number of elements in a tensor
###Code
def functionA(A: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
################################################
## TODO for students: complete functionA
raise NotImplementedError("Student exercise: complete function A")
################################################
# TODO multiplication the sum of the tensors
output = ...
return output
def functionB(C: torch.Tensor) -> torch.Tensor:
################################################
## TODO for students: complete functionB
raise NotImplementedError("Student exercise: complete function B")
################################################
# TODO flatten the tensor C
C = ...
# TODO create the idx tensor to be concatenated to C
idx_tensor = ...
# TODO concatenate the two tensors
output = ...
return output
def functionC(D: torch.Tensor, E: torch.Tensor) -> torch.Tensor:
################################################
## TODO for students: complete functionB
raise NotImplementedError("Student exercise: complete function C")
################################################
# TODO check we can reshape E into the shape of D
if ...:
# TODO reshape E into the shape of D
E = ...
# TODO sum the two tensors
output = ...
else:
# TODO flatten both tensors
D = ...
E = ...
# TODO concatenate the two tensors in the correct dimension
output = ...
return output
## Implement the functions above and then uncomment the following lines to test your code
# print(functionA(torch.tensor([[1, 1], [1, 1]]), torch.tensor([[1, 2, 3], [1, 2, 3]])))
# print(functionB(torch.tensor([[2, 3], [-1, 10]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0, 2]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0]])))
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_3aed9202.py) ```tensor([24, 24])tensor([0.])tensor([[ 3, 2], [-1, 5]])tensor([[ 1, -1, -1, 3, 2, 3, 0]])``` Section 2.4: GPUs
###Code
#@title Video 6: GPU vs CPU
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"9Mc9GFUtILY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
By default, when we create a tensor it will *not* live on the GPU!
###Code
x = torch.randn(10)
print(x.device)
###Output
_____no_output_____
###Markdown
When using Colab notebooks, by default you will not have access to a GPU. In order to start using GPUs we need to request one. We can do this by going to the Runtime tab at the top of the page. By following Runtime -> Change runtime type and selecting "GPU" from the Hardware Accelerator dropdown list, we can start playing with sending tensors to GPUs.Once you have done this your runtime will restart and you will need to rerun the first setup cell to reimport PyTorch. Then proceed to the next cell.(For more information on the GPU usage policy, see the appendix.) **Now we have a GPU** The cell below should return True.
###Code
print(torch.cuda.is_available())
###Output
_____no_output_____
###Markdown
CUDA is an API developed by Nvidia for interfacing with GPUs. PyTorch provides us with a layer of abstraction, and allows us to launch CUDA kernels using pure Python. In short, we get the power of parallelising our tensor computations on GPUs, whilst only writing (relatively) simple Python!Let's make some CUDA tensors!
###Code
def set_device():
device = "cuda" if torch.cuda.is_available() else "cpu"
if device != "cuda":
print("WARNING: For this notebook to perform best, "
"if possible, in the menu under `Runtime` -> "
"`Change runtime type.` select `GPU` ")
else:
print("GPU is enabled in this notebook.")
return device
# common device agnostic way of writing code that can run on cpu OR gpu
# that we provide for you in each of the tutorials
device = set_device()
# we can specify a device when we first create our tensor
x = torch.randn(2, 2, device=device)
print(x.dtype)
print(x.device)
# we can also use the .to() method to change the device a tensor lives on
y = torch.randn(2,2)
print(f"y before calling to() | device: {y.device} | dtype: {y.type()}")
y = y.to(device)
print(f"y after calling to() | device: {y.device} | dtype: {y.type()}")
###Output
_____no_output_____
###Markdown
**Operations between cpu tensors and cuda tensors**Note that the type of the tensor changed after calling ```.to()```. What happens if we try to perform operations on tensors that live on different devices?
###Code
x = torch.tensor([0, 1, 2], device="cuda")
y = torch.tensor([3, 4, 5], device="cpu")
# Uncomment the following line and run this cell
# z = x + y
###Output
_____no_output_____
###Markdown
We cannot combine cuda tensors and cpu tensors in this fashion. If we want to compute an operation that combines tensors on different devices, we need to move them first! We can use the ```.to()``` method as before, or the ```.cpu()``` and ```.cuda()``` methods. Generally in this course all deep learning is done on the GPU while other computation is done on the CPU, so sometimes we have to pass things back and forth; you'll see us call these methods regularly.
###Code
x = torch.tensor([0, 1, 2], device="cuda")
y = torch.tensor([3, 4, 5], device="cpu")
z = torch.tensor([6, 7, 8], device="cuda")
# moving to cpu
x = x.cpu()
print(x + y)
# moving to gpu
y = y.cuda()
print(y + z)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.4: Just how much faster are GPUs?Below is a simple function. Complete the second function, such that it performs the same operations as the first function, but entirely on the GPU.
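One caveat we will add about timing GPU code (not part of the original exercise): CUDA operations are launched asynchronously, so wall-clock timings can be misleading unless you synchronize the device before reading the clock. A minimal sketch, assuming a CUDA-enabled runtime:
```python
import time
import torch

def time_gpu_matmul(n=4000):
  # n=4000 is an arbitrary size chosen for illustration
  a = torch.rand(n, n, device="cuda")
  b = torch.rand(n, n, device="cuda")
  torch.cuda.synchronize()      # make sure the setup kernels have finished
  start = time.time()
  c = a @ b
  torch.cuda.synchronize()      # wait for the matmul to actually complete
  print(f"GPU matmul took {time.time() - start:.4f} s")

# time_gpu_matmul()  # uncomment on a GPU runtime
```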
###Code
def simpleFun():
x = torch.rand(10000, 10000)
y = torch.rand_like(x)
z = 2*torch.ones(10000, 10000)
x = x * y
x = x @ z
simpleFun()  # note: simpleFun does not return a value, so we simply call it once here
def simpleFunGPU():
###############################################
## TODO for students: recreate the above function, but
## ensure all computation happens on the GPU
raise NotImplementedError("Student exercise: fill in the missing code to create the tensors")
###############################################
x = ...
y = ...
z = ...
x = ...
y = ...
## Implement the function above and uncomment the following lines to test your code
# timeFun(simpleFun, iterations=1)
# timeFun(simpleFunGPU, iterations=1)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_3092cbb6.py) Section 2.5: Datasets and Dataloaders
###Code
#@title Video 7: Getting Data
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"LSkjPM1gFu0", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
When training neural network models you will be working with large amounts of data. Fortunately, PyTorch offers some great tools that help you organize and manipulate your data samples.
###Code
# Import dataset and dataloaders related packages
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
from torchvision.transforms import Compose, Grayscale
###Output
_____no_output_____
###Markdown
**Datasets**The `torchvision` package gives you easy access to many of the publicly available datasets. Let's load the [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset, which contains color images of 10 different classes, like vehicles and animals.Creating an object of type `datasets.CIFAR10` will automatically download and load all images from the dataset. The resulting data structure can be treated as a list containing data samples and their corresponding labels.
###Code
# Download and load the images from the CIFAR10 dataset
cifar10_data = datasets.CIFAR10(
root="data", # path where the images will be stored
download=True, # all images should be downloaded
transform=ToTensor() # transform the images to tensors
)
# Print the number of samples in the loaded dataset
print(f"Number of samples:{len(cifar10_data)}")
print(f"Class names:{cifar10_data.classes}")
###Output
_____no_output_____
###Markdown
We have 50000 samples loaded. Now let's take a look at one of them in detail. Each sample consists of an image and its corresponding label.
###Code
# Choose a random sample
random.seed(2021)
image, label = cifar10_data[random.randint(0, len(cifar10_data))]
print('Label:', cifar10_data.classes[label])
print('Image size:', image.shape)
###Output
_____no_output_____
###Markdown
Color images are modeled as 3 dimensional tensors. The first dimension corresponds to the channels (C) of the image (in this case we have RGB images). The second dimension is the height (H) of the image and the third is the width (W). We can denote this image format as C × H × W. Coding Exercise 2.5: Display an image from the datasetLet's try to display the image using `matplotlib`. The code below will not work, because `imshow` expects to have the image in a different format - H × W × C.You need to reorder the dimensions of the tensor using the `permute` method of the tensor.
###Code
# TODO: Uncomment the following line to see the error that arises from the current image format
# plt.imshow(image)
# TODO: Comment the above line and fix this code by reordering the tensor dimensions
# plt.imshow(image.permute(...))
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_a5fb9be3.py)*Example output:*
###Code
#@title Video 8: Train and Test
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JokSIuPs-ys", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
**Training and Test Datasets**When loading a dataset, you can specify if you want to load the training or the test samples using the `train` argument. We can load the training and test datasets separately. For simplicity, today we will not use both datasets separately, but this topic will be addressed in the coming days.
###Code
# Load the training samples
training_data = datasets.CIFAR10(
root="data",
train=True,
download=True,
transform=ToTensor()
)
# Load the test samples
test_data = datasets.CIFAR10(
root="data",
train=False,
download=True,
transform=ToTensor()
)
#@title Video 9: Data Augmentation - Transformations
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"sjegA9OBUPw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
**Dataloader**Another important concept is the `Dataloader`. It is a wrapper around the `Dataset` that splits it into minibatches (important for training the neural network) and makes the data iterable. The `shuffle` argument is used to shuffle the order of the samples across the minibatches.
###Code
# Create dataloaders with
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
*Reproducibility:* DataLoader will reseed workers following the "Randomness in multi-process data loading" algorithm. Use `worker_init_fn()` and a `generator` to preserve reproducibility:
```python
def seed_worker(worker_id):
    worker_seed = torch.initial_seed() % 2**32
    numpy.random.seed(worker_seed)
    random.seed(worker_seed)

g = torch.Generator()
g.manual_seed(0)

DataLoader(
    train_dataset,
    batch_size=batch_size,
    num_workers=num_workers,
    worker_init_fn=seed_worker,
    generator=g
)
```
We can now query the next batch from the data loader and inspect it. For this we need to convert the dataloader object to a Python iterator using the function `iter` and then we can query the next batch using the function `next`.

We can now see that we have a 4D tensor. This is because we have 64 images in the batch (B) and each image has 3 dimensions: channels (C), height (H) and width (W). So, the size of the 4D tensor is B × C × H × W.
###Code
# Load the next batch
batch_images, batch_labels = next(iter(train_dataloader))
print('Batch size:', batch_images.shape)
# Display the first image from the batch
plt.imshow(batch_images[0].permute(1, 2, 0))
###Output
_____no_output_____
###Markdown
**Transformations**Another useful feature when loading a dataset is applying transformations on the data - color conversions, normalization, cropping, rotation, etc. There are many predefined transformations in the `torchvision.transforms` package and you can also combine them using the `Compose` transform. Check out the [pytorch documentation](https://pytorch.org/vision/stable/transforms.html) for details. Coding Exercise 2.6: Load the CIFAR10 dataset as grayscale imagesThe goal of this exercise is to load the images from the CIFAR10 dataset as grayscale images.
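For reference before you complete the exercise cell below, here is a small sketch (ours, not part of the exercise) showing how `Compose` chains several transforms into a single callable; the normalization statistics of 0.5 are placeholder values, not the true dataset statistics:
```python
from torchvision import datasets
from torchvision.transforms import Compose, ToTensor, Normalize

transform = Compose([
    ToTensor(),                                            # PIL image -> float tensor in [0, 1]
    Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),  # rescale each channel to roughly [-1, 1]
])

dataset = datasets.CIFAR10(root="data", download=True, transform=transform)
img, lbl = dataset[0]
print(img.min().item(), img.max().item())  # values now lie roughly in [-1, 1]
```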
###Code
def my_data_load():
###############################################
  ## TODO for students: load the CIFAR10 dataset as grayscale images
raise NotImplementedError("Student exercise: fill in the missing code to load the data")
###############################################
## TODO Load the CIFAR10 data using a transform that converts the images to grayscale tensors
data = datasets.CIFAR10(...,
transform=...)
# Display a random grayscale image
image, label = data[random.randint(0, len(data))]
plt.imshow(image.squeeze(), cmap="gray")
set_seed(seed=2021)
## After implementing the above code, uncomment the following lines to test your code
# my_data_load()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_a11c8f71.py)*Example output:* Section 3: Neural NetworksNow it's time for you to create your first neural network using PyTorch. This section will walk you through the process of:- Creating a simple neural network model- Training the network- Visualizing the results of the network- Tweaking the network
###Code
#@title Video 10: CSV Files
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JrC_UAJWYKU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Section 3.1: Data LoadingFirst we need some sample data to train our network on. You can use the function below to generate an example dataset consisting of 2D points along two interleaving half circles. The data will be stored in a file called `sample_data.csv`. You can inspect the file directly in Colab by going to Files on the left side and opening the CSV file.
###Code
#@title Generate sample data
from sklearn.datasets import make_moons
# Create a dataset of 256 points with a little noise
X, y = make_moons(256, noise=0.1)
# Store the data as a Pandas data frame and save it to a CSV file
df = pd.DataFrame(dict(x0=X[:,0], x1=X[:,1], y=y))
df.to_csv('sample_data.csv')
###Output
_____no_output_____
###Markdown
Now we can load the data from the CSV file using the Pandas library. Pandas provides many functions for reading files in various formats. When loading data from a CSV file, we can reference the columns directly by their names.
###Code
# Load the data from the CSV file in a Pandas DataFrame
data = pd.read_csv("sample_data.csv")
# Create a 2D numpy array from the x0 and x1 columns
X_orig = data[["x0", "x1"]].to_numpy()
# Create a 1D numpy array from the y column
y_orig = data["y"].to_numpy()
# Print the sizes of the generated 2D points X and the corresponding labels Y
print(f"Size X:{X_orig.shape}")
print(f"Size y:{y_orig.shape}")
# Visualize the dataset. The color of the points is determined by the labels `y_orig`.
plt.scatter(X_orig[:, 0], X_orig[:, 1], s=40, c=y_orig)
###Output
_____no_output_____
###Markdown
**Prepare Data for PyTorch**Now let's prepare the data in a format suitable for PyTorch - convert everything into tensors.
###Code
# Initialize the device variable
device = set_device()
# Convert the 2D points to a float32 tensor
X = torch.tensor(X_orig, dtype=torch.float32)
# Upload the tensor to the device
X = X.to(device)
print(f"Size X:{X.shape}")
# Convert the labels to a long integer tensor
y = torch.from_numpy(y_orig).type(torch.LongTensor)
# Upload the tensor to the device
y = y.to(device)
print(f"Size y:{y.shape}")
#@title Video 11: Generating the Neural Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"PwSzRohUvck", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Section 3.2: Create a Simple Neural NetworkFor this example we want to have a simple neural network consisting of 3 layers:- 1 input layer of size 2 (our points have 2 coordinates)- 1 hidden layer of size 16 (you can play with different numbers here)- 1 output layer of size 2 (we want to have the scores for the two classes)During the course you will deal with different kinds of neural networks. On Day 2 we will focus on linear networks, but you will work with some more complicated architectures in the coming days. The example here is meant to demonstrate the process of creating and training a neural network end-to-end.**Programming the Network**PyTorch provides a base class for all neural network modules called [`nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html). You need to inherit from `nn.Module` and implement some important methods: `__init__`In the `__init__` method you need to define the structure of your network. Here you will specify what layers the network will consist of, what activation functions will be used, etc. `forward`All neural network modules need to implement the `forward` method. It specifies the computations the network needs to do when data is passed through it. `predict`This is not an obligatory method of a neural network module, but it is good practice if you want to quickly get the most likely label from the network. It calls the `forward` method and chooses the label with the highest score. `train`This is also not an obligatory method, but it is good practice to have. The method will be used to train the network parameters and will be implemented later in the notebook.> Note that you can use the `__call__` method of a module directly and it will invoke the `forward` method: `net()` does the same as `net.forward()`.
###Code
# Inherit from nn.Module - the base class for neural network modules provided by Pytorch
class NaiveNet(nn.Module):
# Define the structure of your network
def __init__(self):
super(NaiveNet, self).__init__()
# The network is defined as a sequence of operations
self.layers = nn.Sequential(
nn.Linear(2, 16), # Transformation from the input to the hidden layer
nn.ReLU(), # Activation function (ReLU)
nn.Linear(16, 2), # Transformation from the hidden to the output layer
)
# Specify the computations performed on the data
def forward(self, x):
# Pass the data through the layers
return self.layers(x)
# Choose the most likely label predicted by the network
def predict(self, x):
# Pass the data through the networks
output = self.forward(x)
# Choose the label with the highest score
return torch.argmax(output, 1)
# Train the neural network (will be implemented later)
def train(self, X, y):
pass
###Output
_____no_output_____
###Markdown
**Check that your network works**Create an instance of your model and visualize it
###Code
# Create new NaiveNet and transfer it to the device
model = NaiveNet().to(device)
# Print the structure of the network
print(model)
###Output
_____no_output_____
###Markdown
Coding Exercise 3.1: Classify some samplesNow let's pass some of the points of our dataset through the network and see if it works. You should not expect the network to actually classify the points correctly, because it has not been trained yet. The goal here is just to get some experience with the data structures that are passed to the forward and predict methods and their results.
###Code
# X_samples = ...
# print("Sample input:", X_samples)
## Do a forward pass of the network
# output = ...
# print("Network output:", output)
## Predict the label of each point
# y_predicted = ...
# print("Predicted labels:", y_predicted)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_2eef6658.py) ```Sample input: tensor([[ 0.9066, 0.5052], [-0.2024, 1.1226], [ 1.0685, 0.2809], [ 0.6720, 0.5097], [ 0.8548, 0.5122]], device='cuda:0')Network output: tensor([[-0.3032, -0.5563], [-0.1419, -0.3195], [-0.2879, -0.6030], [-0.2665, -0.4831], [-0.2973, -0.5369]], device='cuda:0', grad_fn=)Predicted labels: tensor([0, 0, 0, 0, 0], device='cuda:0')``` Section 3.3: Train Your Neural Network
###Code
#@title Video 12: Train the Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"4MIqnE4XPaA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Now it is time to train your network on your dataset. Don't worry if you don't fully understand everything yet - we will cover training in much more detail in the coming days. For now, the goal is just to see your network in action!You will usually implement the `train` method directly when implementing your class `NaiveNet`. Here, we will implement it as a function outside of the class in order to have it in a separate cell.
###Code
#@title Helper function to plot the decision boundary
# Code adapted from this notebook: https://jonchar.net/notebooks/Artificial-Neural-Network-with-Keras/
from pathlib import Path
def plot_decision_boundary(model, X, y, device):
# Transfer the data to the CPU
X = X.cpu().numpy()
y = y.cpu().numpy()
# Check if the frames folder exists and create it if needed
frames_path = Path("frames")
if not frames_path.exists():
frames_path.mkdir()
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
  # Predict the function value for the whole grid
grid_points = np.c_[xx.ravel(), yy.ravel()]
grid_points = torch.from_numpy(grid_points).type(torch.FloatTensor)
Z = model.predict(grid_points.to(device)).cpu().numpy()
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.binary)
# Implement the train function given a training dataset X and corresponding labels y
def train(model, X, y):
# The Cross Entropy Loss is suitable for classification problems
loss_function = nn.CrossEntropyLoss()
# Create an optimizer (Stochastic Gradient Descent) that will be used to train the network
learning_rate = 1e-2
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
# Number of epochs
epochs = 15000
# List of losses for visualization
losses = []
for i in range(epochs):
# Pass the data through the network and compute the loss
# We'll use the whole dataset during the training instead of using batches
    # in order to keep the code simple for now.
y_logits = model.forward(X)
loss = loss_function(y_logits, y)
# Clear the previous gradients and compute the new ones
optimizer.zero_grad()
loss.backward()
# Adapt the weights of the network
optimizer.step()
# Store the loss
losses.append(loss.item())
# Print the results at every 1000th epoch
if i % 1000 == 0:
print(f"Epoch {i} loss is {loss.item()}")
plot_decision_boundary(model, X, y, device)
plt.savefig('frames/{:05d}.png'.format(i))
return losses
# Create a new network instance a train it
model = NaiveNet().to(device)
losses = train(model, X, y)
###Output
_____no_output_____
###Markdown
**Plot the loss during training**Plot the loss during the training to see how it reduces and converges.
###Code
plt.plot(np.linspace(1, len(losses), len(losses)), losses)
plt.xlabel("Epoch")
plt.ylabel("Loss")
#@title Visualize the training process
#@markdown ### Execute this cell!
!pip install imageio --quiet
!pip install pathlib --quiet
import imageio
from IPython.core.interactiveshell import InteractiveShell
from IPython.display import Image, display
from pathlib import Path
InteractiveShell.ast_node_interactivity = "all"
# Make a list with all images
images = []
for i in range(10):
filename = "frames/0"+str(i)+"000.png"
images.append(imageio.imread(filename))
# Save the gif
imageio.mimsave('frames/movie.gif', images)
gifPath = Path("frames/movie.gif")
with open(gifPath,'rb') as f:
display(Image(data=f.read(), format='png'))
#@title Video 13: Play with it
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"_GGkapdOdSY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Exercise 3.2: Tweak your NetworkYou can now play around with the network a little bit to get a feeling of what different parameters are doing. Here are some ideas of what you could try:- Increase or decrease the number of epochs for training- Increase or decrease the size of the hidden layer- Add one additional hidden layerCan you get the network to better fit the data? A sketch of one possible variant follows below.
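For instance, a variant with a wider hidden layer and one extra hidden layer might look like the sketch below (our own illustration; the 32-unit sizes are arbitrary choices):
```python
class TweakedNet(nn.Module):
  def __init__(self):
    super().__init__()
    # two hidden layers of 32 units instead of a single hidden layer of 16
    self.layers = nn.Sequential(
        nn.Linear(2, 32),
        nn.ReLU(),
        nn.Linear(32, 32),
        nn.ReLU(),
        nn.Linear(32, 2),
    )

  def forward(self, x):
    return self.layers(x)

  def predict(self, x):
    return torch.argmax(self.forward(x), 1)

# model = TweakedNet().to(device)
# losses = train(model, X, y)  # reuse the train() function defined above
```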
###Code
#@title Video 14: XOR Widget
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"cnu7pyRx_u0", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Interactive Demo: Solving XORHere we use an open-source and famous visualization widget developed by the TensorFlow team, available [here](https://github.com/tensorflow/playground).* Play with the widget and observe that you cannot solve the continuous XOR dataset.* Now add one hidden layer with three units, play with the widget, and set the weights by hand to solve this dataset perfectly.For the second part, you should set the weights by clicking on the connections and either typing the value or using the up and down keys to change it by one increment. You can do the same for the biases by clicking on the tiny square at each neuron's bottom left.Even though there are infinitely many solutions, a neat solution when $f(x)$ is ReLU is: \begin{equation} y = f(x_1)+f(x_2)-f(x_1+x_2)\end{equation}Try to set the weights and biases to implement this function after you have played around enough :)
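As a quick, standalone sanity check of that formula (a minimal sketch, independent of the widget, assuming the ±1-style corner encoding of the XOR classes):
```python
import torch
import torch.nn.functional as F

# The four "corners" of the XOR problem, with each coordinate encoded as +/-1
pts = torch.tensor([[1., 1.], [1., -1.], [-1., 1.], [-1., -1.]])
x1, x2 = pts[:, 0], pts[:, 1]

# y = f(x1) + f(x2) - f(x1 + x2), with f = ReLU
y = F.relu(x1) + F.relu(x2) - F.relu(x1 + x2)
print(y)  # tensor([0., 1., 1., 0.]) -> positive exactly when the two signs differ
```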
###Code
#@markdown ###Play with the parameters to solve XOR
from IPython.display import HTML
HTML('<iframe width="1020" height="660" src="https://playground.arashash.com/#activation=relu&batchSize=10&dataset=xor®Dataset=reg-plane&learningRate=0.03®ularizationRate=0&noise=0&networkShape=&seed=0.91390&showTestData=false&discretize=false&percTrainData=90&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false" allowfullscreen></iframe>')
#@markdown Do you think we can solve the discrete XOR (only 4 possibilities) with only 2 hidden units?
w1_min_xor = 'No' #@param ['Select', 'Yes', 'No']
if w1_min_xor == 'No':
print("Correct!")
else:
print("How about giving it another try?")
###Output
_____no_output_____
###Markdown
---Section 4: EthicsLet us watch the Coded Bias movie together and discuss it.
###Code
#@title Video 15: Ethics
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Kt6JLi3rUFU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
--- Bonus
###Code
#@title Video 16: Be a group
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Sfp6--d_H1A", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
#@title Video 17: It's a wrap!
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JwTn7ej2dq8", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Tutorial 1: PyTorch**Week 1, Day 1: Basics and PyTorch****By Neuromatch Academy**__Content creators:__ Shubh Pachchigar, Vladimir Haltakov, Matthew Sargent, Konrad Kording__Content reviewers:__ Deepak Raya, Siwei Bai, Kelson Shilling-Scrivo__Content editors:__ Anoop Kulkarni, Spiros Chavlis__Production editors:__ Arush Tagade, Spiros Chavlis **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial ObjectivesWe have a few specific objectives for this tutorial:* Learn about PyTorch and tensors* Tensor Manipulations* Data Loading* GPUs and CUDA Tensors* Train NaiveNet* Get to know your pod* Start thinking about the course as a whole
###Code
# @title Tutorial slides
# @markdown These are the slides for the videos in this tutorial today
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/wcjrv/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
--- Setup Throughout your Neuromatch tutorials, most (probably all!) notebooks contain setup cells. These cells will import the required Python packages (e.g., PyTorch, NumPy), set global or environment variables, and load in helper functions for things like plotting. In some tutorials, you will notice that we install some dependencies even if they are preinstalled on Google Colab or Kaggle. This happens because we have added automation to our repository through [GitHub Actions](https://docs.github.com/en/actions/learn-github-actions/introduction-to-github-actions).Be sure to run all of the cells in the setup section. Feel free to expand them and have a look at what you are loading in, but you should be able to fulfill the learning objectives of every tutorial without having to look at these cells.If you start building your own projects on top of this code base, we highly recommend looking at them in more detail.
###Code
# @title Install dependencies
!pip install pandas --quiet
!pip install git+https://github.com/NeuromatchAcademy/evaltools --quiet
from evaltools.airtable import AirtableForm
# Imports
import time
import torch
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from torch import nn
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
# @title Figure Settings
import ipywidgets as widgets
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/content-creation/main/nma.mplstyle")
# @title Helper Functions
atform = AirtableForm('appn7VdPRseSoMXEG','W1D1_T1','https://portal.neuromatchacademy.org/api/redirect/to/97e94a29-0b3a-4e16-9a8d-f6838a5bd83d')
def checkExercise1(A, B, C, D):
"""
Helper function for checking exercise.
Args:
A: torch.Tensor
B: torch.Tensor
C: torch.Tensor
D: torch.Tensor
Returns:
Nothing.
"""
errors = []
# TODO better errors and error handling
if not torch.equal(A.to(int),torch.ones(20, 21).to(int)):
errors.append(f"Got: {A} \n Expected: {torch.ones(20, 21)} (shape: {torch.ones(20, 21).shape})")
if not np.array_equal( B.numpy(),np.vander([1, 2, 3], 4)):
errors.append("B is not a tensor containing the elements of Z ")
if C.shape != (20, 21):
errors.append("C is not the correct shape ")
if not torch.equal(D, torch.arange(4, 41, step=2)):
errors.append("D does not contain the correct elements")
if errors == []:
print("All correct!")
else:
[print(e) for e in errors]
def timeFun(f, dim, iterations, device='cpu'):
iterations = iterations
t_total = 0
for _ in range(iterations):
start = time.time()
f(dim, device)
end = time.time()
t_total += end - start
if device == 'cpu':
print(f"time taken for {iterations} iterations of {f.__name__}({dim}, {device}): {t_total:.5f}")
else:
print(f"time taken for {iterations} iterations of {f.__name__}({dim}, {device}): {t_total:.5f}")
###Output
_____no_output_____
###Markdown
**Important note: Google Colab users***Scratch Code Cells*If you want to quickly try out something or take a look at the data you can use scratch code cells. They allow you to run Python code, but will not mess up the structure of your notebook.To open a new scratch cell go to *Insert* → *Scratch code cell*. Section 1: Welcome to Neuromatch Deep learning course*Time estimate: ~25mins*
###Code
# @title Video 1: Welcome and History
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Av411n7oL", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"ca21SNqt78I", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing
atform.add_event('Video 1: Welcome and History')
display(out)
###Output
_____no_output_____
###Markdown
This will be an intensive 3-week adventure. We will all learn Deep Learning. In a group. Groups need standards. Read our [Code of Conduct](https://docs.google.com/document/d/1eHKIkaNbAlbx_92tLQelXnicKXEcvFzlyzzeWjEtifM/edit?usp=sharing).
###Code
# @title Video 2: Why DL is cool
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1gf4y1j7UZ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"l-K6495BN-4", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 2: Why DL is cool')
display(out)
###Output
_____no_output_____
###Markdown
**Describe what you hope to get out of this course in about 100 words.** --- Section 2: The Basics of PyTorch*Time estimate: ~2 hours 05 mins* PyTorch is a Python-based scientific computing package targeted at two sets of audiences:- A replacement for NumPy to use the power of GPUs- A deep learning platform that provides significant flexibility and speedAt its core, PyTorch provides a few key features:- A multidimensional [Tensor](https://pytorch.org/docs/stable/tensors.html) object, similar to [NumPy Array](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html) but with GPU acceleration.- An optimized **autograd** engine for automatically computing derivatives.- A clean, modular API for building and deploying **deep learning models**.You can find more information about PyTorch in the appendix. Section 2.1: Creating Tensors
###Code
# @title Video 3: Making Tensors
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Rw411d7Uy", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"jGKd_4tPGrw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 3: Making Tensors')
display(out)
###Output
_____no_output_____
###Markdown
There are various ways of creating tensors, and when doing any real deep learning project we will usually have to do so. **Construct tensors directly:**---
###Code
# we can construct a tensor directly from some common python iterables,
# such as list and tuple nested iterables can also be handled as long as the
# dimensions make sense
# tensor from a list
a = torch.tensor([0, 1, 2])
#tensor from a tuple of tuples
b = ((1.0, 1.1), (1.2, 1.3))
b = torch.tensor(b)
# tensor from a numpy array
c = np.ones([2, 3])
c = torch.tensor(c)
print(f"Tensor a: {a}")
print(f"Tensor b: {b}")
print(f"Tensor c: {c}")
###Output
_____no_output_____
###Markdown
**Some common tensor constructors:**---
###Code
# the numerical arguments we pass to these constructors
# determine the shape of the output tensor
x = torch.ones(5, 3)
y = torch.zeros(2)
z = torch.empty(1, 1, 5)
print(f"Tensor x: {x}")
print(f"Tensor y: {y}")
print(f"Tensor z: {z}")
###Output
_____no_output_____
###Markdown
Notice that ```.empty()``` does not return zeros, but seemingly random small numbers. Unlike ```.zeros()```, which initialises the elements of the tensor with zeros, ```.empty()``` just allocates the memory. It is hence a bit faster if you are looking to just create a tensor. **Creating random tensors and tensors like other tensors:**---
###Code
# there are also constructors for random numbers
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
# there are also constructors that allow us to construct
# a tensor according to the above constructors, but with
# dimensions equal to another tensor
c = torch.zeros_like(a)
d = torch.rand_like(c)
print(f"Tensor a: {a}")
print(f"Tensor b: {b}")
print(f"Tensor c: {c}")
print(f"Tensor d: {d}")
###Output
_____no_output_____
###Markdown
*Reproducibility*:
- PyTorch random number generator: You can use `torch.manual_seed()` to seed the RNG for all devices (both CPU and CUDA)
```python
import torch
torch.manual_seed(0)
```
- For custom operators, you might need to set the Python seed as well:
```python
import random
random.seed(0)
```
- Random number generators in other libraries:
```python
import numpy as np
np.random.seed(0)
```
Here, we define for you a function called `set_seed` that does the job for you!
###Code
def set_seed(seed=None, seed_torch=True):
"""
Function that controls randomness. NumPy and random modules must be imported.
Args:
seed : Integer
A non-negative integer that defines the random state. Default is `None`.
seed_torch : Boolean
If `True` sets the random seed for pytorch tensors, so pytorch module
must be imported. Default is `True`.
Returns:
Nothing.
"""
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
###Output
_____no_output_____
###Markdown
Now, let's use the `set_seed` function in the previous example. Execute the cell multiple times to verify that the numbers printed are always the same.
###Code
def simplefun(seed=True, my_seed=None):
if seed:
set_seed(seed=my_seed)
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
print("Tensor a: ", a)
print("Tensor b: ", b)
simplefun(seed=True, my_seed=0) # Turn `seed` to `False` or change `my_seed`
###Output
_____no_output_____
###Markdown
**Numpy-like number ranges:**---The ```.arange()``` and ```.linspace()``` constructors behave how you would expect them to if you are familiar with NumPy.
###Code
a = torch.arange(0, 10, step=1)
b = np.arange(0, 10, step=1)
c = torch.linspace(0, 5, steps=11)
d = np.linspace(0, 5, num=11)
print(f"Tensor a: {a}\n")
print(f"Numpy array b: {b}\n")
print(f"Tensor c: {c}\n")
print(f"Numpy array d: {d}\n")
###Output
_____no_output_____
###Markdown
Coding Exercise 2.1: Creating TensorsBelow you will find some incomplete code. Fill in the missing code to construct the specified tensors.We want the tensors: $A:$ 20 by 21 tensor consisting of ones$B:$ a tensor with elements equal to the elements of numpy array $Z$$C:$ a tensor with the same number of elements as $A$ but with values $\sim U(0,1)$$D:$ a 1D tensor containing the even numbers between 4 and 40 inclusive.
###Code
def tensor_creation(Z):
"""A function that creates various tensors.
Args:
Z (numpy.ndarray): An array of shape
Returns:
A : 20 by 21 tensor consisting of ones
B : a tensor with elements equal to the elements of numpy array Z
C : a tensor with the same number of elements as A but with values ∼U(0,1)
D : a 1D tensor containing the even numbers between 4 and 40 inclusive.
"""
#################################################
## TODO for students: fill in the missing code
## from the first expression
raise NotImplementedError("Student exercise: say what they should have done")
#################################################
A = ...
B = ...
C = ...
D = ...
return A, B, C, D
# add timing to airtable
atform.add_event('Coding Exercise 2.1: Creating Tensors')
# numpy array to copy later
Z = np.vander([1, 2, 3], 4)
# Uncomment below to check your function!
# A, B, C, D = tensor_creation(Z)
# checkExercise1(A, B, C, D)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_ad4f6c0f.py) ```All correct!``` Section 2.2: Operations in PyTorch**Tensor-Tensor operations**We can perform operations on tensors using methods under ```torch.```
###Code
# @title Video 4: Tensor Operators
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1G44y127As", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"R1R8VoYXBVA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 4: Tensor Operators')
display(out)
###Output
_____no_output_____
###Markdown
**Tensor-Tensor operations**We can perform operations on tensors using methods under ```torch.```
###Code
a = torch.ones(5, 3)
b = torch.rand(5, 3)
c = torch.empty(5, 3)
d = torch.empty(5, 3)
# this only works if c and d already exist
torch.add(a, b, out=c)
#Pointwise Multiplication of a and b
torch.multiply(a, b, out=d)
print(c)
print(d)
###Output
_____no_output_____
###Markdown
However, in PyTorch most common Python operators are overridden.The common standard arithmetic operators (+, -, *, /, and **) have all been lifted to elementwise operations
###Code
x = torch.tensor([1, 2, 4, 8])
y = torch.tensor([1, 2, 3, 4])
x + y, x - y, x * y, x / y, x**y # The ** operator is exponentiation
###Output
_____no_output_____
###Markdown
**Tensor Methods** Tensors also have a number of common arithmetic operations built in. A full list of **all** methods can be found in the appendix (there are a lot!) All of these operations should have similar syntax to their numpy equivalents.(Feel free to skip if you already know this!)
###Code
x = torch.rand(3, 3)
print(x)
print("\n")
# sum() - note the axis is the axis you move across when summing
print(f"Sum of every element of x: {x.sum()}")
print(f"Sum of the columns of x: {x.sum(axis=0)}")
print(f"Sum of the rows of x: {x.sum(axis=1)}")
print("\n")
print(f"Mean value of all elements of x {x.mean()}")
print(f"Mean values of the columns of x {x.mean(axis=0)}")
print(f"Mean values of the rows of x {x.mean(axis=1)}")
###Output
_____no_output_____
###Markdown
**Matrix Operations**The ```@``` symbol is overridden to represent matrix multiplication. You can also use ```torch.matmul()``` to multiply tensors. For dot products, you can use ```torch.dot()```, or manipulate the axes of your tensors and do matrix multiplication (we will cover that in the next section). Transposes of 2D tensors are obtained using ```torch.t()``` or ```Tensor.T```. Note the lack of brackets for ```Tensor.T``` - it is an attribute, not a method. Coding Exercise 2.2 : Simple tensor operationsBelow are two expressions involving operations on matrices. $$ \textbf{A} = \begin{bmatrix}2 &4 \\5 & 7 \end{bmatrix} \begin{bmatrix} 1 &1 \\2 & 3\end{bmatrix} + \begin{bmatrix}10 & 10 \\ 12 & 1 \end{bmatrix} $$and$$ b = \begin{bmatrix} 3 \\ 5 \\ 7\end{bmatrix} \cdot \begin{bmatrix} 2 \\ 4 \\ 8\end{bmatrix}$$The code block below that computes these expressions using PyTorch is incomplete - fill in the missing lines.
###Code
def simple_operations(a1: torch.Tensor, a2: torch.Tensor, a3: torch.Tensor):
################################################
## TODO for students: complete the first computation using the argument matrices
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
################################################
# multiplication of tensor a1 with tensor a2 and then add it with tensor a3
answer = ...
return answer
# add timing to airtable
atform.add_event('Coding Exercise 2.2 : Simple tensor operations-simple_operations')
# Computing expression 1:
# init our tensors
a1 = torch.tensor([[2, 4], [5, 7]])
a2 = torch.tensor([[1, 1], [2, 3]])
a3 = torch.tensor([[10, 10], [12, 1]])
## uncomment to test your function
# A = simple_operations(a1, a2, a3)
# print(A)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_5562ea1d.py) ```tensor([[20, 24], [31, 27]])```
###Code
def dot_product(b1: torch.Tensor, b2: torch.Tensor):
###############################################
## TODO for students: complete the first computation using the argument matrices
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
###############################################
# Use torch.dot() to compute the dot product of two tensors
product = ...
return product
# add timing to airtable
atform.add_event('Coding Exercise 2.2 : Simple tensor operations-dot_product')
# Computing expression 2:
b1 = torch.tensor([3, 5, 7])
b2 = torch.tensor([2, 4, 8])
## Uncomment to test your function
# b = dot_product(b1, b2)
# print(b)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_00491ea4.py) ```tensor(82)``` Section 2.3 Manipulating Tensors in Pytorch
###Code
# @title Video 5: Tensor Indexing
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1BM4y1K7pD", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"0d0KSJ3lJbg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 5: Tensor Indexing')
display(out)
###Output
_____no_output_____
###Markdown
**Indexing**Just as in numpy, elements in a tensor can be accessed by index. As in any numpy array, the first element has index 0 and ranges include the first element but exclude the last. We can access elements according to their relative position to the end of the list by using negative indices. Range-based indexing is also referred to as slicing.For example, [-1] selects the last element; [1:3] selects the second and the third elements, and [:-2] will select all elements excluding the last and second-to-last elements.
###Code
x = torch.arange(0, 10)
print(x)
print(x[-1])
print(x[1:3])
print(x[:-2])
###Output
_____no_output_____
###Markdown
When we have multidimensional tensors, indexing rules work the same way as numpy.
###Code
# make a 5D tensor
x = torch.rand(1, 2, 3, 4, 5)
print(f" shape of x[0]:{x[0].shape}")
print(f" shape of x[0][0]:{x[0][0].shape}")
print(f" shape of x[0][0][0]:{x[0][0][0].shape}")
###Output
_____no_output_____
###Markdown
**Flatten and reshape**There are various methods for reshaping tensors. It is common to have to express 2D data in 1D format. Similarly, it is also common to have to reshape a 1D tensor into a 2D tensor. We can achieve this with the ```.flatten()``` and ```.reshape()``` methods.
###Code
z = torch.arange(12).reshape(6, 2)
print(f"Original z: \n {z}")
# 2D -> 1D
z = z.flatten()
print(f"Flattened z: \n {z}")
# and back to 2D
z = z.reshape(3, 4)
print(f"Reshaped (3x4) z: \n {z}")
###Output
_____no_output_____
###Markdown
You will also see the ```.view()``` method used a lot to reshape tensors. There is a subtle difference between ```.view()``` and ```.reshape()```, though for now we will just use ```.reshape()```. The documentation can be found in the appendix. **Squeezing tensors**When processing batches of data, you will quite often be left with singleton dimensions, e.g., [1, 10] or [256, 1, 3]. This dimension can quite easily mess up your matrix operations if you don't plan on it being there...In order to compress tensors along their singleton dimensions we can use the ```.squeeze()``` method. We can use the ```.unsqueeze()``` method to do the opposite.
###Code
x = torch.randn(1, 10)
# printing the zeroth element of the tensor will not give us the first number!
print(x.shape)
print(f"x[0]: {x[0]}")
###Output
_____no_output_____
###Markdown
Because of that pesky singleton dimension, x[0] gave us the first row instead!
###Code
# lets get rid of that singleton dimension and see what happens now
x = x.squeeze(0)
print(x.shape)
print(f"x[0]: {x[0]}")
# adding singleton dimensions works a similar way, and is often used when tensors
# being added need same number of dimensions
y = torch.randn(5, 5)
print(f"shape of y: {y.shape}")
# lets insert a singleton dimension
y = y.unsqueeze(1)
print(f"shape of y: {y.shape}")
###Output
_____no_output_____
###Markdown
**Permutation**Sometimes our dimensions will be in the wrong order! For example, we may be dealing with RGB images with dim [3x48x64], but our pipeline expects the colour dimension to be the last dimension i.e. [48x64x3]. To get around this we can use ```.permute()```
###Code
# `x` has dimensions [color,image_height,image_width]
x = torch.rand(3, 48, 64)
# we want to permute our tensor to be [ image_height , image_width , color ]
x = x.permute(1, 2, 0)
# permute(1,2,0) means:
# the 0th dim of my new tensor = the 1st dim of my old tensor
# the 1st dim of my new tensor = the 2nd
# the 2nd dim of my new tensor = the 0th
print(x.shape)
###Output
_____no_output_____
###Markdown
You may also see ```.transpose()``` used. This works in a similar way to ```.permute()```, but can only swap two dimensions at once. **Concatenation** In this example, we concatenate two matrices along rows (axis 0, the first element of the shape) vs. columns (axis 1, the second element of the shape). We can see that the first output tensor's axis-0 length (6) is the sum of the two input tensors' axis-0 lengths (3+3); while the second output tensor's axis-1 length (8) is the sum of the two input tensors' axis-1 lengths (4+4).
###Code
# Create two tensors of the same shape
x = torch.arange(12, dtype=torch.float32).reshape((3, 4))
y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])
#concatenate them along rows
cat_rows = torch.cat((x, y), dim=0)
# concatenate along columns
cat_cols = torch.cat((x, y), dim=1)
# printing outputs
print('Concatenated by rows: shape{} \n {}'.format(list(cat_rows.shape), cat_rows))
print('\n Concatenated by colums: shape{} \n {}'.format(list(cat_cols.shape), cat_cols))
###Output
_____no_output_____
###Markdown
**Conversion to Other Python Objects**Converting to a NumPy array, or vice versa, is easy. Be aware of the memory semantics: for a CPU tensor, `.numpy()` returns an array that shares its underlying memory with the tensor, whereas constructing a new tensor with `torch.tensor()` copies the data.When converting to a NumPy array, the information being tracked by the tensor, i.e., the computational graph, will be lost. This will be covered in detail when you are introduced to autograd tomorrow!
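A small sketch of this sharing/copying behaviour (CPU tensors only; not part of the exercises):
```python
import torch

t = torch.zeros(3)
a = t.numpy()        # for a CPU tensor, this array shares memory with `t`
t[0] = 42
print(a)             # [42.  0.  0.] -- the NumPy view sees the change

b = torch.tensor(a)  # torch.tensor() copies the data
a[1] = -1
print(b)             # tensor([42., 0., 0.]) -- the copy is unaffected
```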
###Code
x = torch.randn(5)
print(f"x: {x} | x type: {x.type()}")
y = x.numpy()
print(f"y: {y} | y type: {type(y)}")
z = torch.tensor(y)
print(f"z: {z} | z type: {z.type()}")
###Output
_____no_output_____
###Markdown
To convert a size-1 tensor to a Python scalar, we can invoke the `item()` method or Python's built-in functions such as `float` and `int`.
###Code
a = torch.tensor([3.5])
a, a.item(), float(a), int(a)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.3: Manipulating TensorsUsing a combination of the methods discussed above, complete the functions below. **Function A** This function takes in two 2D tensors $A$ and $B$ and returns the column sum of A multiplied by the sum of all the elements of $B$, i.e., a scalar, e.g.,:$ A = \begin{bmatrix}1 & 1 \\1 & 1 \end{bmatrix} \,$and$ B = \begin{bmatrix}1 & 2 & 3\\1 & 2 & 3 \end{bmatrix} \,$so$ \, Out = \begin{bmatrix} 2 & 2 \\\end{bmatrix} \cdot 12 = \begin{bmatrix}24 & 24\\\end{bmatrix}$**Function B** This function takes in a square matrix $C$ and returns a 2D tensor consisting of a flattened $C$ with the index of each element appended to this tensor in the row dimension, e.g.,:$ C = \begin{bmatrix}2 & 3 \\-1 & 10 \end{bmatrix} \,$so$ \, Out = \begin{bmatrix}0 & 2 \\1 & 3 \\2 & -1 \\3 & 10\end{bmatrix}$**Hint:** pay close attention to singleton dimensions**Function C**This function takes in two 2D tensors $D$ and $E$. If the dimensions allow it, this function returns the elementwise sum of $D$-shaped $E$, and $D$; else this function returns a 1D tensor that is the concatenation of the two tensors, e.g.,:$ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix} \,$and $ E = \begin{bmatrix}2 & 3 & 0 & 2 \\\end{bmatrix} \, $so$ \, Out = \begin{bmatrix}3 & 2 \\-1 & 5 \end{bmatrix}$$ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix}$and$ \, E = \begin{bmatrix}2 & 3 & 0 \\\end{bmatrix} \,$so$ \, Out = \begin{bmatrix}1 & -1 & -1 & 3 & 2 & 3 & 0 \end{bmatrix}$**Hint:** `torch.numel()` is an easy way of finding the number of elements in a tensor
###Code
def functionA(my_tensor1, my_tensor2):
"""
This function takes in two 2D tensors `my_tensor1` and `my_tensor2`
and returns the column sum of
`my_tensor1` multiplied by the sum of all the elements of `my_tensor2`,
i.e., a scalar.
Args:
my_tensor1: torch.Tensor
my_tensor2: torch.Tensor
Returns:
output: torch.Tensor
The multiplication of the column sum of `my_tensor1` by the sum of
`my_tensor2`.
"""
################################################
## TODO for students: complete functionA
raise NotImplementedError("Student exercise: complete function A")
################################################
# TODO multiplication the sum of the tensors
output = ...
return output
def functionB(my_tensor):
"""
This function takes in a square matrix `my_tensor` and returns a 2D tensor
consisting of a flattened `my_tensor` with the index of each element
appended to this tensor in the row dimension.
Args:
my_tensor: torch.Tensor
Returns:
output: torch.Tensor
Concatenated tensor.
"""
################################################
## TODO for students: complete functionB
raise NotImplementedError("Student exercise: complete function B")
################################################
# TODO flatten the tensor `my_tensor`
my_tensor = ...
# TODO create the idx tensor to be concatenated to `my_tensor`
idx_tensor = ...
# TODO concatenate the two tensors
output = ...
return output
def functionC(my_tensor1, my_tensor2):
"""
This function takes in two 2D tensors `my_tensor1` and `my_tensor2`.
If the dimensions allow it, it returns the
elementwise sum of `my_tensor1`-shaped `my_tensor2`, and `my_tensor1`;
else this function returns a 1D tensor that is the concatenation of the
two tensors.
Args:
my_tensor1: torch.Tensor
my_tensor2: torch.Tensor
Returns:
output: torch.Tensor
Concatenated tensor.
"""
################################################
## TODO for students: complete functionC
raise NotImplementedError("Student exercise: complete function C")
################################################
# TODO check we can reshape `my_tensor2` into the shape of `my_tensor1`
if ...:
# TODO reshape `my_tensor2` into the shape of `my_tensor1`
my_tensor2 = ...
# TODO sum the two tensors
output = ...
else:
# TODO flatten both tensors
my_tensor1 = ...
my_tensor2 = ...
# TODO concatenate the two tensors in the correct dimension
output = ...
return output
# add timing to airtable
atform.add_event('Coding Exercise 2.3: Manipulating Tensors')
## Implement the functions above and then uncomment the following lines to test your code
# print(functionA(torch.tensor([[1, 1], [1, 1]]), torch.tensor([[1, 2, 3], [1, 2, 3]])))
# print(functionB(torch.tensor([[2, 3], [-1, 10]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0, 2]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0]])))
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_ea1718cb.py) 
```
tensor([24, 24])
tensor([[ 0, 2], [ 1, 3], [ 2, -1], [ 3, 10]])
tensor([[ 3, 2], [-1, 5]])
tensor([ 1, -1, -1, 3, 2, 3, 0])
```
Section 2.4: GPUs
###Code
# @title Video 6: GPU vs CPU
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1nM4y1K7qx", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"9Mc9GFUtILY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 6: GPU vs CPU')
display(out)
###Output
_____no_output_____
###Markdown
By default, when we create a tensor it will *not* live on the GPU!
###Code
x = torch.randn(10)
print(x.device)
###Output
_____no_output_____
###Markdown
When using Colab notebooks, you will by default not have access to a GPU. In order to start using GPUs we need to request one. We can do this by going to the runtime tab at the top of the page. By following Runtime -> Change runtime type and selecting "GPU" from the Hardware Accelerator dropdown list, we can start playing with sending tensors to GPUs.Once you have done this, your runtime will restart and you will need to rerun the first setup cell to reimport PyTorch. Then proceed to the next cell.(For more information on the GPU usage policy, see the appendix.) **Now we have a GPU** The cell below should return True.
###Code
print(torch.cuda.is_available())
###Output
_____no_output_____
###Markdown
CUDA is an API developed by Nvidia for interfacing with GPUs. PyTorch provides us with a layer of abstraction, and allows us to launch CUDA kernels using pure Python. In short, we get the power of parallelising our tensor computations on GPUs, whilst only writing (relatively) simple Python!Here, we define the function `set_device`, which returns the device used in the notebook, i.e., `cpu` or `cuda`. Unless otherwise specified, we use this function at the top of every tutorial, and we store the device in a variable such as
```python
DEVICE = set_device()
```
Let's define the function using the PyTorch package `torch.cuda`, which is lazily initialized, so we can always import it, and use `is_available()` to determine if our system supports CUDA.
###Code
def set_device():
device = "cuda" if torch.cuda.is_available() else "cpu"
if device != "cuda":
print("GPU is not enabled in this notebook. \n"
"If you want to enable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `GPU` from the dropdown menu")
else:
print("GPU is enabled in this notebook. \n"
"If you want to disable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `None` from the dropdown menu")
return device
###Output
_____no_output_____
###Markdown
Let's make some CUDA tensors!
###Code
# common device agnostic way of writing code that can run on cpu OR gpu
# that we provide for you in each of the tutorials
DEVICE = set_device()
# we can specify a device when we first create our tensor
x = torch.randn(2, 2, device=DEVICE)
print(x.dtype)
print(x.device)
# we can also use the .to() method to change the device a tensor lives on
y = torch.randn(2, 2)
print(f"y before calling to() | device: {y.device} | dtype: {y.type()}")
y = y.to(DEVICE)
print(f"y after calling to() | device: {y.device} | dtype: {y.type()}")
###Output
_____no_output_____
###Markdown
**Operations between cpu tensors and cuda tensors**Note that the type of the tensor changed after calling ```.to()```. What happens if we try and perform operations on tensors on devices?
###Code
x = torch.tensor([0, 1, 2], device=DEVICE)
y = torch.tensor([3, 4, 5], device="cpu")
# Uncomment the following line and run this cell
# z = x + y
###Output
_____no_output_____
###Markdown
We cannot combine cuda tensors and cpu tensors in this fashion. If we want to compute an operation that combines tensors on different devices, we need to move them first! We can use the `.to()` method as before, or the `.cpu()` and `.cuda()` methods. Note that using `.cuda()` will throw an error if CUDA is not enabled on your machine.Generally in this course all deep learning is done on the GPU, while other computation is done on the CPU, so sometimes we have to pass tensors back and forth; you'll see us call these methods throughout.
###Code
x = torch.tensor([0, 1, 2], device=DEVICE)
y = torch.tensor([3, 4, 5], device="cpu")
z = torch.tensor([6, 7, 8], device=DEVICE)
# moving to cpu
x = x.to("cpu") # alternatively, you can use x = x.cpu()
print(x + y)
# moving to gpu
y = y.to(DEVICE) # alternatively, you can use y = y.cuda()
print(y + z)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.4: Just how much faster are GPUs?Below is a simple function `simpleFun`. Complete this function, such that it performs the operations:- elementwise multiplication- matrix multiplicationThe operations should be able to be performed on either the CPU or the GPU, as specified by the parameter `device`. We will use the helper function `timeFun(f, dim, iterations, device)`.
###Code
dim = 10000
iterations = 1
def simpleFun(dim, device):
"""
Args:
dim: integer
device: "cpu" or "cuda:0"
Returns:
Nothing.
"""
###############################################
## TODO for students: recreate the above function, but
## ensure all computation happens on the GPU
raise NotImplementedError("Student exercise: fill in the missing code to create the tensors")
###############################################
# 2D tensor filled with uniform random numbers in [0,1), dim x dim
x = ...
# 2D tensor filled with uniform random numbers in [0,1), dim x dim
y = ...
# 2D tensor filled with the scalar value 2, dim x dim
z = ...
# elementwise multiplication of x and y
a = ...
# matrix multiplication of x and y
b = ...
del x
del y
del z
del a
del b
## TODO: Implement the function above and uncomment the following lines to test your code
# timeFun(f=simpleFun, dim=dim, iterations=iterations)
# timeFun(f=simpleFun, dim=dim, iterations=iterations, device=DEVICE)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_232a94a4.py) Sample output (depends on your hardware):
```
time taken for 1 iterations of simpleFun(10000): 28.50481
time taken for 1 iterations of simpleFun(10000): 0.91102
```
**Discuss!**Try reducing the dimensions of the tensors and increasing the iterations. You can get to a point where the CPU-only function is faster than the GPU function. Why might this be? Section 2.5: Datasets and Dataloaders
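If you want to explore this question without re-running the whole exercise, here is a small self-contained timing sketch (a hypothetical `tiny_matmul` helper; timings are machine-dependent and the GPU branch assumes CUDA is available):
```python
import time
import torch

def tiny_matmul(dim, device, iterations=1000):
    x = torch.rand(dim, dim, device=device)
    y = torch.rand(dim, dim, device=device)
    if device != "cpu":
        torch.cuda.synchronize()   # make sure setup has finished
    start = time.time()
    for _ in range(iterations):
        out = x @ y                # result discarded; we only time the call
    if device != "cpu":
        torch.cuda.synchronize()   # wait for the GPU to finish all kernels
    return time.time() - start

for dim in (10, 100, 1000):
    cpu_t = tiny_matmul(dim, "cpu")
    gpu_t = tiny_matmul(dim, "cuda") if torch.cuda.is_available() else float("nan")
    print(f"dim={dim}: cpu {cpu_t:.4f}s, gpu {gpu_t:.4f}s")
```
The explicit `torch.cuda.synchronize()` calls matter because GPU kernels launch asynchronously; without them you would only time the kernel launches, not the actual work.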
###Code
# @title Video 7: Getting Data
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1744y127SQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"LSkjPM1gFu0", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 7: Getting Data')
display(out)
###Output
_____no_output_____
###Markdown
When training neural network models you will be working with large amounts of data. Fortunately, PyTorch offers some great tools that help you organize and manipulate your data samples.
###Code
# Import dataset and dataloaders related packages
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
from torchvision.transforms import Compose, Grayscale
###Output
_____no_output_____
###Markdown
**Datasets**The `torchvision` package gives you easy access to many of the publicly available datasets. Let's load the [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset, which contains color images of 10 different classes, like vehicles and animals.Creating an object of type `datasets.CIFAR10` will automatically download and load all images from the dataset. The resulting data structure can be treated as a list containing data samples and their corresponding labels.
###Code
# Download and load the images from the CIFAR10 dataset
cifar10_data = datasets.CIFAR10(
root="data", # path where the images will be stored
download=True, # all images should be downloaded
transform=ToTensor() # transform the images to tensors
)
# Print the number of samples in the loaded dataset
print(f"Number of samples: {len(cifar10_data)}")
print(f"Class names: {cifar10_data.classes}")
###Output
_____no_output_____
###Markdown
We have 50000 samples loaded. Now let's take a look at one of them in detail. Each sample consists of an image and its corresponding label.
###Code
# Choose a random sample
random.seed(2021)
image, label = cifar10_data[random.randint(0, len(cifar10_data) - 1)]  # randint is inclusive at both ends
print(f"Label: {cifar10_data.classes[label]}")
print(f"Image size: {image.shape}")
###Output
_____no_output_____
###Markdown
Color images are modeled as 3-dimensional tensors. The first dimension corresponds to the channels (C) of the image (in this case we have RGB images). The second dimension is the height (H) of the image and the third is the width (W). We can denote this image format as C × H × W. Coding Exercise 2.5: Display an image from the datasetLet's try to display the image using `matplotlib`. The code below will not work, because `imshow` expects to have the image in a different format - $H \times W \times C$.You need to reorder the dimensions of the tensor using the `permute` method of the tensor. PyTorch `torch.permute(*dims)` rearranges the original tensor according to the desired ordering and returns a new multidimensional rotated tensor. The size of the returned tensor remains the same as that of the original.**Code hint:**
```python
# create a tensor of size 2 x 4
input_var = torch.randn(2, 4)
# print its size and the tensor
print(input_var.size())
print(input_var)
# dimensions permuted
input_var = input_var.permute(1, 0)
# print its size and the permuted tensor
print(input_var.size())
print(input_var)
```
###Code
# TODO: Uncomment the following line to see the error that arises from the current image format
# plt.imshow(image)
# TODO: Comment the above line and fix this code by reordering the tensor dimensions
# plt.imshow(image.permute(...))
# plt.show()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_b04bd357.py)*Example output:*
###Code
#@title Video 8: Train and Test
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1rV411H7s5", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JokSIuPs-ys", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 8: Train and Test')
display(out)
###Output
_____no_output_____
###Markdown
**Training and Test Datasets**When loading a dataset, you can specify if you want to load the training or the test samples using the `train` argument. We can load the training and test datasets separately. For simplicity, today we will not make use of the two datasets separately, but this topic will be addressed in the coming days.
###Code
# Load the training samples
training_data = datasets.CIFAR10(
root="data",
train=True,
download=True,
transform=ToTensor()
)
# Load the test samples
test_data = datasets.CIFAR10(
root="data",
train=False,
download=True,
transform=ToTensor()
)
# @title Video 9: Data Augmentation - Transformations
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV19B4y1N77t", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"sjegA9OBUPw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 9: Data Augmentation - Transformations')
display(out)
###Output
_____no_output_____
###Markdown
**Dataloader**Another important concept is the `Dataloader`. It is a wrapper around the `Dataset` that splits it into minibatches (important for training the neural network) and makes the data iterable. The `shuffle` argument is used to shuffle the order of the samples across the minibatches.
###Code
# Create dataloaders with a batch size of 64 and shuffling enabled
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
*Reproducibility:* DataLoader will reseed workers following the "Randomness in multi-process data loading" algorithm. Use `worker_init_fn()` and a `generator` to preserve reproducibility:
```python
def seed_worker(worker_id):
    worker_seed = torch.initial_seed() % 2**32
    numpy.random.seed(worker_seed)
    random.seed(worker_seed)

g_seed = torch.Generator()
g_seed.manual_seed(my_seed)

DataLoader(
    train_dataset,
    batch_size=batch_size,
    num_workers=num_workers,
    worker_init_fn=seed_worker,
    generator=g_seed
    )
```
**Note:** For the `seed_worker` to have an effect, `num_workers` should be 2 or more. We can now query the next batch from the data loader and inspect it. For this we need to convert the dataloader object to a Python iterator using the function `iter` and then we can query the next batch using the function `next`.We can now see that we have a 4D tensor. This is because we have 64 images in the batch ($B$) and each image has 3 dimensions: channels ($C$), height ($H$) and width ($W$). So, the size of the 4D tensor is $B \times C \times H \times W$.
###Code
# Load the next batch
batch_images, batch_labels = next(iter(train_dataloader))
print('Batch size:', batch_images.shape)
# Display the first image from the batch
plt.imshow(batch_images[0].permute(1, 2, 0))
plt.show()
###Output
_____no_output_____
###Markdown
**Transformations**Another useful feature when loading a dataset is applying transformations on the data - color conversions, normalization, cropping, rotation, etc. There are many predefined transformations in the `torchvision.transforms` package and you can also combine them using the `Compose` transform. Check out the [PyTorch documentation](https://pytorch.org/vision/stable/transforms.html) for details. Coding Exercise 2.6: Load the CIFAR10 dataset as grayscale imagesThe goal of this exercise is to load the images from the CIFAR10 dataset as grayscale images. Note that we rerun the `set_seed` function to ensure reproducibility.
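As an illustration of `Compose` (illustrative transform choices and normalization values; this is not the solution to the exercise below):
```python
from torchvision.transforms import Compose, ToTensor, Normalize

# Chain transforms: PIL image -> float tensor in [0, 1] -> normalized tensor
preprocess = Compose([
    ToTensor(),
    Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),  # illustrative values
])

# Such a pipeline is then passed to the dataset constructor, e.g.:
# datasets.CIFAR10(root="data", download=True, transform=preprocess)
```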
###Code
def my_data_load():
###############################################
## TODO for students: load the CIFAR10 dataset as grayscale images
## using an appropriate transform (see the TODO below)
raise NotImplementedError("Student exercise: fill in the missing code to load the data")
###############################################
## TODO Load the CIFAR10 data using a transform that converts the images to grayscale tensors
data = datasets.CIFAR10(...,
transform=...)
# Display a random grayscale image
image, label = data[random.randint(0, len(data) - 1)]  # randint is inclusive at both ends
plt.imshow(image.squeeze(), cmap="gray")
plt.show()
return data
set_seed(seed=2021)
## After implementing the above code, uncomment the following lines to test your code
# data = my_data_load()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_6052d728.py)*Example output:* --- Section 3: Neural Networks*Time estimate: ~1 hour 30 mins (excluding movie)* Now it's time for you to create your first neural network using PyTorch. This section will walk you through the process of:- Creating a simple neural network model- Training the network- Visualizing the results of the network- Tweaking the network
###Code
# @title Video 10: CSV Files
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1xy4y1T7kv", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JrC_UAJWYKU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 10: CSV Files')
display(out)
###Output
_____no_output_____
###Markdown
Section 3.1: Data LoadingFirst we need some sample data to train our network on. You can use the function below to generate an example dataset consisting of 2D points along two interleaving half circles. The data will be stored in a file called `sample_data.csv`. You can inspect the file directly in Colab by going to Files on the left side and opening the CSV file.
###Code
# @title Generate sample data
# @markdown we used `scikit-learn` module
from sklearn.datasets import make_moons
# Create a dataset of 256 points with a little noise
X, y = make_moons(256, noise=0.1)
# Store the data as a Pandas data frame and save it to a CSV file
df = pd.DataFrame(dict(x0=X[:,0], x1=X[:,1], y=y))
df.to_csv('sample_data.csv')
###Output
_____no_output_____
###Markdown
Now we can load the data from the CSV file using the Pandas library. Pandas provides many functions for reading files in various formats. When loading data from a CSV file, we can reference the columns directly by their names.
###Code
# Load the data from the CSV file in a Pandas DataFrame
data = pd.read_csv("sample_data.csv")
# Create a 2D numpy array from the x0 and x1 columns
X_orig = data[["x0", "x1"]].to_numpy()
# Create a 1D numpy array from the y column
y_orig = data["y"].to_numpy()
# Print the sizes of the generated 2D points X and the corresponding labels Y
print(f"Size X:{X_orig.shape}")
print(f"Size y:{y_orig.shape}")
# Visualize the dataset. The color of the points is determined by the labels `y_orig`.
plt.scatter(X_orig[:, 0], X_orig[:, 1], s=40, c=y_orig)
plt.show()
###Output
_____no_output_____
###Markdown
**Prepare Data for PyTorch**Now let's prepare the data in a format suitable for PyTorch - convert everything into tensors.
###Code
# Initialize the device variable
DEVICE = set_device()
# Convert the 2D points to a float32 tensor
X = torch.tensor(X_orig, dtype=torch.float32)
# Upload the tensor to the device
X = X.to(DEVICE)
print(f"Size X:{X.shape}")
# Convert the labels to a long integer tensor
y = torch.from_numpy(y_orig).type(torch.LongTensor)
# Upload the tensor to the device
y = y.to(DEVICE)
print(f"Size y:{y.shape}")
###Output
_____no_output_____
###Markdown
Section 3.2: Create a Simple Neural Network
###Code
# @title Video 11: Generating the Neural Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1fK4y1M74a", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"PwSzRohUvck", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 11: Generating the Neural Network')
display(out)
###Output
_____no_output_____
###Markdown
For this example we want to have a simple neural network consisting of 3 layers:- 1 input layer of size 2 (our points have 2 coordinates)- 1 hidden layer of size 16 (you can play with different numbers here)- 1 output layer of size 2 (we want to have the scores for the two classes)During the course you will deal with different kinds of neural networks. On Day 2 we will focus on linear networks, but you will work with some more complicated architectures in the next days. The example here is meant to demonstrate the process of creating and training a neural network end-to-end.**Programming the Network**PyTorch provides a base class for all neural network modules called [`nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html). You need to inherit from `nn.Module` and implement some important methods:`__init__`In the `__init__` method you need to define the structure of your network. Here you will specify what layers the network will consist of, what activation functions will be used, etc.`forward`All neural network modules need to implement the `forward` method. It specifies the computations the network needs to do when data is passed through it.`predict`This is not an obligatory method of a neural network module, but it is a good practice if you want to quickly get the most likely label from the network. It calls the `forward` method and chooses the label with the highest score.`train`This is also not an obligatory method, but it is a good practice to have. The method will be used to train the network parameters and will be implemented later in the notebook.> Note that you can use the `__call__` method of a module directly and it will invoke the `forward` method: `net()` does the same as `net.forward()`.
###Code
# Inherit from nn.Module - the base class for neural network modules provided by Pytorch
class NaiveNet(nn.Module):
# Define the structure of your network
def __init__(self):
super(NaiveNet, self).__init__()
# The network is defined as a sequence of operations
self.layers = nn.Sequential(
nn.Linear(2, 16), # Transformation from the input to the hidden layer
nn.ReLU(), # Activation function (ReLU) is a non-linearity which is widely used because it reduces computation. The function returns 0 if it receives any
# negative input, but for any positive value x, it returns that value back.
nn.Linear(16, 2), # Transformation from the hidden to the output layer
)
# Specify the computations performed on the data
def forward(self, x):
# Pass the data through the layers
return self.layers(x)
# Choose the most likely label predicted by the network
def predict(self, x):
# Pass the data through the networks
output = self.forward(x)
# Choose the label with the highest score
return torch.argmax(output, 1)
# Train the neural network (will be implemented later)
def train(self, X, y):
pass
###Output
_____no_output_____
###Markdown
**Check that your network works**Create an instance of your model and visualize it
###Code
# Create new NaiveNet and transfer it to the device
model = NaiveNet().to(DEVICE)
# Print the structure of the network
print(model)
###Output
_____no_output_____
###Markdown
Coding Exercise 3.2: Classify some samplesNow let's pass some of the points of our dataset through the network and see if it works. You should not expect the network to actually classify the points correctly, because it has not been trained yet. The goal here is just to get some experience with the data structures that are passed to the forward and predict methods and their results.
###Code
## Get the samples
# X_samples = ...
# print("Sample input:\n", X_samples)
## Do a forward pass of the network
# output = ...
# print("\nNetwork output:\n", output)
## Predict the label of each point
# y_predicted = ...
# print("\nPredicted labels:\n", y_predicted)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_af8ae0ff.py) ```Sample input: tensor([[ 0.9066, 0.5052], [-0.2024, 1.1226], [ 1.0685, 0.2809], [ 0.6720, 0.5097], [ 0.8548, 0.5122]], device='cuda:0')Network output: tensor([[ 0.1543, -0.8018], [ 2.2077, -2.9859], [-0.5745, -0.0195], [ 0.1924, -0.8367], [ 0.1818, -0.8301]], device='cuda:0', grad_fn=)Predicted labels: tensor([0, 0, 1, 0, 0], device='cuda:0')``` Section 3.3: Train Your Neural Network
###Code
# @title Video 12: Train the Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1v54y1n7CS", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"4MIqnE4XPaA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 12: Train the Network')
display(out)
###Output
_____no_output_____
###Markdown
Now it is time to train your network on your dataset. Don't worry if you don't fully understand everything yet - we will cover training in much more detail in the next days. For now, the goal is just to see your network in action!You will usually implement the `train` method directly when implementing your class `NaiveNet`. Here, we will implement it as a function outside of the class in order to have it in a separate cell.
###Code
# @title Helper function to plot the decision boundary
# Code adapted from this notebook: https://jonchar.net/notebooks/Artificial-Neural-Network-with-Keras/
from pathlib import Path
def plot_decision_boundary(model, X, y, device):
# Transfer the data to the CPU
X = X.cpu().numpy()
y = y.cpu().numpy()
# Check if the frames folder exists and create it if needed
frames_path = Path("frames")
if not frames_path.exists():
frames_path.mkdir()
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
  # Predict the function value for the whole grid
grid_points = np.c_[xx.ravel(), yy.ravel()]
grid_points = torch.from_numpy(grid_points).type(torch.FloatTensor)
Z = model.predict(grid_points.to(device)).cpu().numpy()
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.binary)
# Implement the train function given a training dataset X and corresponding labels y
def train(model, X, y):
# The Cross Entropy Loss is suitable for classification problems
loss_function = nn.CrossEntropyLoss()
# Create an optimizer (Stochastic Gradient Descent) that will be used to train the network
learning_rate = 1e-2
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
# Number of epochs
epochs = 15000
# List of losses for visualization
losses = []
for i in range(epochs):
# Pass the data through the network and compute the loss
# We'll use the whole dataset during the training instead of using batches
    # in order to keep the code simple for now.
y_logits = model.forward(X)
loss = loss_function(y_logits, y)
# Clear the previous gradients and compute the new ones
optimizer.zero_grad()
loss.backward()
# Adapt the weights of the network
optimizer.step()
# Store the loss
losses.append(loss.item())
# Print the results at every 1000th epoch
if i % 1000 == 0:
print(f"Epoch {i} loss is {loss.item()}")
plot_decision_boundary(model, X, y, DEVICE)
plt.savefig('frames/{:05d}.png'.format(i))
return losses
# Create a new network instance and train it
model = NaiveNet().to(DEVICE)
losses = train(model, X, y)
###Output
_____no_output_____
###Markdown
**Plot the loss during training**Plot the loss during the training to see how it reduces and converges.
###Code
plt.plot(np.linspace(1, len(losses), len(losses)), losses)
plt.xlabel("Epoch")
plt.ylabel("Loss")
# @title Visualize the training process
# @markdown ### Execute this cell!
!pip install imageio --quiet
!pip install pathlib --quiet
import imageio
from IPython.core.interactiveshell import InteractiveShell
from IPython.display import Image, display
from pathlib import Path
InteractiveShell.ast_node_interactivity = "all"
# Make a list with all images
images = []
for i in range(10):
filename = "frames/0"+str(i)+"000.png"
images.append(imageio.imread(filename))
# Save the gif
imageio.mimsave('frames/movie.gif', images)
gifPath = Path("frames/movie.gif")
with open(gifPath,'rb') as f:
display(Image(data=f.read(), format='png'))
# @title Video 13: Play with it
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Cq4y1W7BH", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"_GGkapdOdSY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 13: Play with it')
display(out)
###Output
_____no_output_____
###Markdown
Exercise 3.3: Tweak your NetworkYou can now play around with the network a little bit to get a feeling for what the different parameters do. Here are some ideas for what you could try:- Increase or decrease the number of epochs for training- Increase or decrease the size of the hidden layer- Add one additional hidden layerCan you get the network to better fit the data?
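One possible tweak is sketched below (an added illustration, not a prescribed solution): a deeper variant of `NaiveNet` with two hidden layers of 32 units. The commented-out lines assume `DEVICE`, `X`, `y`, and the `train` function defined in the cells above.

```python
import torch
from torch import nn

class TweakedNet(nn.Module):
    """A deeper variant of NaiveNet: two hidden layers of size 32."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(2, 32),   # input layer -> first hidden layer
            nn.ReLU(),
            nn.Linear(32, 32),  # additional hidden layer
            nn.ReLU(),
            nn.Linear(32, 2),   # hidden layer -> output scores
        )

    def forward(self, x):
        return self.layers(x)

    def predict(self, x):
        return torch.argmax(self.forward(x), 1)

# Reusing the training loop defined earlier in the notebook:
# model = TweakedNet().to(DEVICE)
# losses = train(model, X, y)
```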
###Code
# @title Video 14: XOR Widget
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1mB4y1N7QS", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"oTr1nE2rCWg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 14: XOR Widget')
display(out)
###Output
_____no_output_____
###Markdown
Exclusive OR (XOR) logical operation gives a true (`1`) output when the number of true inputs is odd. That is, a true output results if one, and only one, of the inputs to the gate is true. If both inputs are false (`0`) or both are true, a false output results. Mathematically speaking, XOR represents the inequality function, i.e., the output is true if the inputs are not alike; otherwise, the output is false.In case of two inputs ($X$ and $Y$) the following truth table is applied:\begin{array}{ccc}X & Y & \text{XOR} \\\hline0 & 0 & 0 \\0 & 1 & 1 \\1 & 0 & 1 \\1 & 1 & 0 \\\end{array}Here, with `0`, we denote `False`, and with `1` we denote `True` in boolean terms. Interactive Demo 3.3: Solving XORHere we use an open source and famous visualization widget developed by the Tensorflow team, available [here](https://github.com/tensorflow/playground).* Play with the widget and observe that you cannot solve the continuous XOR dataset.* Now add one hidden layer with three units, play with the widget, and set weights by hand to solve this dataset perfectly.For the second part, you should set the weights by clicking on the connections and either type the value or use the up and down keys to change it by one increment. You could also do the same for the biases by clicking on the tiny square to each neuron's bottom left.Even though there are infinitely many solutions, a neat solution when $f(x)$ is ReLU is: \begin{equation} y = f(x_1)+f(x_2)-f(x_1+x_2)\end{equation}Try to set the weights and biases to implement this function after you have played enough :)
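As a quick sanity check (an added sketch, not part of the original widget), the ReLU identity above can be verified on signed inputs like those in the playground's continuous XOR dataset: the output is positive exactly when the two inputs have opposite signs.

```python
import torch

def xor_relu(x1, x2):
    # y = f(x1) + f(x2) - f(x1 + x2), with f = ReLU
    f = torch.relu
    return f(x1) + f(x2) - f(x1 + x2)

# Opposite-sign inputs give a positive output; same-sign inputs give 0.
for a in (-1.0, 1.0):
    for b in (-1.0, 1.0):
        y = xor_relu(torch.tensor(a), torch.tensor(b))
        print(f"x1={a:+.0f}, x2={b:+.0f} -> y={y.item():.0f}")
```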
###Code
# @markdown ###Play with the parameters to solve XOR
from IPython.display import HTML
HTML('<iframe width="1020" height="660" src="https://playground.arashash.com/#activation=relu&batchSize=10&dataset=xor&regDataset=reg-plane&learningRate=0.03&regularizationRate=0&noise=0&networkShape=&seed=0.91390&showTestData=false&discretize=false&percTrainData=90&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false" allowfullscreen></iframe>')
# @markdown Do you think we can solve the discrete XOR (only 4 possibilities) with only 2 hidden units?
w1_min_xor = 'Select' #@param ['Select', 'Yes', 'No']
if w1_min_xor == 'No':
print("Correct!")
else:
print("How about giving it another try?")
###Output
_____no_output_____
###Markdown
--- Section 4: Ethics And Course Info
###Code
# @title Video 15: Ethics
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Hw41197oB", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Kt6JLi3rUFU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Video 16: Be a group
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1j44y1272h", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Sfp6--d_H1A", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Video 17: Syllabus
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1iB4y1N7uQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"cDvAqG_hAvQ", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Meet our lecturers:Week 1: the building blocks* [Konrad Kording](https://kordinglab.com)* [Andrew Saxe](https://www.saxelab.org/)* [Surya Ganguli](https://ganguli-gang.stanford.edu/)* [Ioannis Mitliagkas](http://mitliagkas.github.io/)* [Lyle Ungar](https://www.cis.upenn.edu/~ungar/)Week 2: making things work* [Alona Fyshe](https://webdocs.cs.ualberta.ca/~alona/)* [Alexander Ecker](https://eckerlab.org/)* [James Evans](https://sociology.uchicago.edu/directory/james-evans)* [He He](https://hhexiy.github.io/)* [Vikash Gilja](https://tnel.ucsd.edu/bio) and [Akash Srivastava](https://akashgit.github.io/)Week 3: more magic* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)* [Jane Wang](http://www.janexwang.com/) and [Feryal Behbahani](https://feryal.github.io/)* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)* [Josh Vogelstein](https://jovo.me/) and [Vincenzo Lamonaco](https://www.vincenzolomonaco.com/)Now, go to the [visualization of ICLR papers](https://iclr.cc/virtual/2021/paper_vis.html). Read a few abstracts. Look at the various clusters. Where do you see yourself in this map? --- Submit to Airtable
###Code
# @title Video 18: Submission info
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1e44y127ti", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JwTn7ej2dq8", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
This is Darryl, the Deep Learning Dapper Lion, and he's here to teach you about content submission to Airtable. At the end of each tutorial there will be an Airtable Submission Cell. Run the cell to generate the Airtable submission button and click on it to submit your information to Airtable.If it is the last tutorial of the day, your button will look like this and take you to the end of day survey: otherwise it will look like this: It is critical that you push the submit button for every tutorial you run. Even if you don't finish the tutorial, still submit!Submitting is the only way we can verify that you attempted each tutorial, which is critical for us to be able to track your progress. TL;DR: Basic tutorial workflow1. Work through the tutorial, answering Think! questions and code exercises2. At the end of each tutorial (even if it is incomplete), run the Airtable submission code cell3. Push the submission button4. If it is the last tutorial of the day, the submission button will also take you to the end of the day survey on a new page. Complete that and submit it. Submission FAQs: 1. What if I want to change my answers to previous discussion questions? > You are free to change and resubmit any of the answers and Think! questions as many times as you like. However, please only run the Airtable submission code and click on the link once you are ready to submit.2. Okay, but what if I submitted my Airtable anyway and really want to resubmit?> After making changes, you can re-run the Airtable submission code cell. This will result in a second submission from you for the data. This will make Darryl sad as it will be more work for him to clean up the data later. 3. HELP! I accidentally ran the code to generate the Airtable submission button before I was ready to submit! What do I do?> If you run the code to generate the link, anything that happens afterwards will not be captured. Complete the tutorial and make sure to re-run the Airtable submission again when you are finished, before pressing the submission button. 4. What if I want to work on this on my own later, should I wait to submit until I'm finished?> Please submit wherever you are at the end of the day. It's great that you want to keep working on this, but it's important to see the places where we tried things that didn't quite work out, so we can fix them for next year. Finally, we try to keep the Airtable code as hidden as possible, but if you ever see any calls to `atform` such as `atform.add_event()` in the coding exercises, just know that is for saving Airtable information only. It will not affect the code that is being run around it in any way, so please do not modify, comment out, or worry about any of those lines of code.Now, let's try submitting today's course to Airtable by running the next cell and clicking the button when it appears.
###Code
# @title Airtable Submission Link
from IPython import display
display.HTML(
f"""
<div>
<a href= "{atform.url()}" target="_blank">
<img src="https://github.com/NeuromatchAcademy/course-content-dl/blob/main/tutorials/static/SurveyButton.png?raw=1"
alt="button link to survey" style="width:410px"></a>
</div>""" )
###Output
_____no_output_____
###Markdown
--- Bonus - 60 years of Machine Learning Research in one Plotby [Hendrik Strobelt](http://hendrik.strobelt.com) (MIT-IBM Watson AI Lab) with support from Benjamin Hoover.In this notebook we visualize a subset* of 3,300 articles retrieved from the AllenAI [S2ORC dataset](https://github.com/allenai/s2orc). We represent each paper by a position that is the output of a dimensionality reduction method applied to a vector representation of each paper. The vector representation is the output of a neural network.*The selection is strongly biased by the keywords and methodology we used to filter. Please see the details section to learn about what we did.
###Code
# @title Import `altair` and load the data
!pip install altair vega_datasets --quiet
import requests
import altair as alt # altair is defining data visualizations
# Source data files
# Position data file maps ID to x,y positions
# original link: http://gltr.io/temp/ml_regexv1_cs_ma_citation+_99perc.pos_umap_cosine_100_d0.1.json
POS_FILE = 'https://osf.io/qyrfn/download'
# original link: http://gltr.io/temp/ml_regexv1_cs_ma_citation+_99perc_clean.csv
# Metadata file maps ID to title, abstract, author,....
META_FILE = 'https://osf.io/vfdu6/download'
# data loading and wrangling
def load_data():
positions = pd.read_json(POS_FILE)
positions[['x', 'y']] = positions['pos'].to_list()
meta = pd.read_csv(META_FILE)
return positions.merge(meta, left_on='id', right_on='paper_id')
# load data
data = load_data()
# @title Define Visualization using Altair
YEAR_PERIOD = "quinquennial" # @param
selection = alt.selection_multi(fields=[YEAR_PERIOD], bind='legend')
data[YEAR_PERIOD] = (data["year"] / 5.0).apply(np.floor) * 5
chart = alt.Chart(data[["x", "y", "authors", "title", YEAR_PERIOD, "citation_count"]], width=800,
height=800).mark_circle(radius=2, opacity=0.2).encode(
alt.Color(YEAR_PERIOD+':O',
scale=alt.Scale(scheme='viridis', reverse=False, clamp=True, domain=list(range(1955,2020,5))),
# legend=alt.Legend(title='Total Records')
),
alt.Size('citation_count',
scale=alt.Scale(type="pow", exponent=1, range=[15, 300])
),
alt.X('x:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
alt.Y('y:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
tooltip=['title', 'authors'],
# size='citation_count',
# color="decade:O",
opacity=alt.condition(selection, alt.value(.8), alt.value(0.2)),
).add_selection(
selection
).interactive()
###Output
_____no_output_____
###Markdown
Let's look at the visualization. Each dot represents one paper. Close dots mean that the respective papers are more closely related than distant ones. The color indicates the 5-year period when the paper was published. The dot size indicates the citation count (within the S2ORC corpus) as of July 2020. The view is **interactive** and allows for three main interactions. Try them and play around.1. hover over a dot to see a tooltip (title, author)2. select a year in the legend (right) to filter dots3. zoom in/out with scroll -- double click resets view
###Code
chart
###Output
_____no_output_____
###Markdown
QuestionsBy playing around, can you find some answers to the following questions?1. Can you find topical clusters? What cluster might occur because of a filtering error?2. Can you see a temporal trend in the data and clusters?3. Can you determine when deep learning methods started booming?4. Can you find the key papers that were written before the DL "winter" that define milestones for a cluster? (tip: look for large dots of different color) MethodsHere is what we did:1. Filtering of all papers that fulfilled the criteria: - are categorized as `Computer Science` or `Mathematics` - one of the following keywords appearing in title or abstract: `"machine learning|artificial intelligence|neural network|(machine|computer) vision|perceptron|network architecture| RNN | CNN | LSTM | BLEU | MNIST | CIFAR |reinforcement learning|gradient descent| Imagenet "`2. per year, remove all papers that are below the 99th percentile of citation count in that year (a small pandas sketch of this step is shown below)3. embed each paper by using abstract+title in the SPECTER model4. project based on the embedding using UMAP5. visualize using Altair Find Authors
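Before the author search below, here is an illustration of step 2 of the Methods above (a minimal pandas sketch, not the authors' actual pipeline; it assumes a dataframe with `year` and `citation_count` columns):

```python
import pandas as pd

def keep_top_percentile(df: pd.DataFrame, perc: float = 0.99) -> pd.DataFrame:
    """Within each publication year, keep only papers at or above the
    `perc` quantile of citation_count for that year."""
    thresholds = df.groupby("year")["citation_count"].transform(
        lambda s: s.quantile(perc))
    return df[df["citation_count"] >= thresholds]
```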
###Code
# @title Edit the `AUTHOR_FILTER` variable to full text search for authors.
AUTHOR_FILTER = "Rush " # @param space at the end means "word border"
### Don't ignore case when searching...
FLAGS = 0
### uncomment to ignore case
# FLAGS = re.IGNORECASE
## --- FILTER CODE.. make it your own ---
import re
data['issel'] = data['authors'].str.contains(AUTHOR_FILTER, na=False, flags=FLAGS, )
if data['issel'].mean()<0.0000000001:
print('No match found')
## --- FROM HERE ON VIS CODE ---
alt.Chart(data[["x", "y", "authors", "title", YEAR_PERIOD, "citation_count", "issel"]], width=800,
height=800) \
.mark_circle(stroke="black", strokeOpacity=1).encode(
alt.Color(YEAR_PERIOD+':O',
scale=alt.Scale(scheme='viridis', reverse=False),
# legend=alt.Legend(title='Total Records')
),
alt.Size('citation_count',
scale=alt.Scale(type="pow", exponent=1, range=[15, 300])
),
alt.StrokeWidth('issel:Q', scale=alt.Scale(type="linear", domain=[0,1], range=[0, 2]), legend=None),
alt.Opacity('issel:Q', scale=alt.Scale(type="linear", domain=[0,1], range=[.2, 1]), legend=None),
alt.X('x:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
alt.Y('y:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
tooltip=['title', 'authors'],
).interactive()
###Output
_____no_output_____
###Markdown
Tutorial 1: PyTorch**Week 1, Day 1: Basics and PyTorch****By Neuromatch Academy**__Content creators:__ Shubh Pachchigar, Vladimir Haltakov, Matthew Sargent, Konrad Kording__Content reviewers:__ Deepak Raya, Siwei Bai, Kelson Shilling-Scrivo__Content editors:__ Anoop Kulkarni, Spiros Chavlis__Production editors:__ Arush Tagade, Spiros Chavlis **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial ObjectivesWe have a few specific objectives for this tutorial:* Learn about PyTorch and tensors* Tensor Manipulations* Data Loading* GPUs and CUDA Tensors* Train NaiveNet* Get to know your pod* Start thinking about the course as a whole
###Code
# @title Tutorial slides
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/wcjrv/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
These are the slides for all videos in this tutorial. If you want to download the slides locally, click [here](https://osf.io/wcjrv/download). --- Setup Throughout your Neuromatch tutorials, most (probably all!) notebooks contain setup cells. These cells will import the required Python packages (e.g., PyTorch, NumPy), set global or environment variables, and load in helper functions for things like plotting. In some tutorials, you will notice that we install some dependencies even if they are preinstalled on Google Colab or Kaggle. This happens because we have added automation to our repository through [GitHub Actions](https://docs.github.com/en/actions/learn-github-actions/introduction-to-github-actions).Be sure to run all of the cells in the setup section. Feel free to expand them and have a look at what you are loading in, but you should be able to fulfill the learning objectives of every tutorial without having to look at these cells.If you start building your own projects on this code base, we highly recommend looking at them in more detail.
###Code
# @title Install dependencies
!pip install pandas --quiet
!pip install git+https://github.com/NeuromatchAcademy/evaltools --quiet
from evaltools.airtable import AirtableForm
# Imports
import time
import torch
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from torch import nn
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
# @title Figure Settings
import ipywidgets as widgets
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/content-creation/main/nma.mplstyle")
# @title Helper Functions
atform = AirtableForm('appn7VdPRseSoMXEG','W1D1_T1','https://portal.neuromatchacademy.org/api/redirect/to/97e94a29-0b3a-4e16-9a8d-f6838a5bd83d')
def checkExercise1(A, B, C, D):
"""
Helper function for checking exercise.
Args:
A: torch.Tensor
B: torch.Tensor
C: torch.Tensor
D: torch.Tensor
Returns:
Nothing.
"""
errors = []
# TODO better errors and error handling
if not torch.equal(A.to(int),torch.ones(20, 21).to(int)):
errors.append(f"Got: {A} \n Expected: {torch.ones(20, 21)} (shape: {torch.ones(20, 21).shape})")
if not np.array_equal( B.numpy(),np.vander([1, 2, 3], 4)):
errors.append("B is not a tensor containing the elements of Z ")
if C.shape != (20, 21):
errors.append("C is not the correct shape ")
if not torch.equal(D, torch.arange(4, 41, step=2)):
errors.append("D does not contain the correct elements")
if errors == []:
print("All correct!")
else:
[print(e) for e in errors]
def timeFun(f, dim, iterations, device='cpu'):
iterations = iterations
t_total = 0
for _ in range(iterations):
start = time.time()
f(dim, device)
end = time.time()
t_total += end - start
if device == 'cpu':
print(f"time taken for {iterations} iterations of {f.__name__}({dim}, {device}): {t_total:.5f}")
else:
print(f"time taken for {iterations} iterations of {f.__name__}({dim}, {device}): {t_total:.5f}")
###Output
_____no_output_____
###Markdown
**Important note: Colab users***Scratch Code Cells*If you want to quickly try out something or take a look at the data you can use scratch code cells. They allow you to run Python code, but will not mess up the structure of your notebook.To open a new scratch cell go to *Insert* → *Scratch code cell*. Section 1: Welcome to Neuromatch Deep learning course*Time estimate: ~25mins*
###Code
# @title Video 1: Welcome and History
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Av411n7oL", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"ca21SNqt78I", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing
atform.add_event('Video 1: Welcome and History')
display(out)
###Output
_____no_output_____
###Markdown
This will be an intensive 3 week adventure. We will all learn Deep Learning (DL) in a group. Groups need standards. Read our [Code of Conduct](https://docs.google.com/document/d/1eHKIkaNbAlbx_92tLQelXnicKXEcvFzlyzzeWjEtifM/edit?usp=sharing).
###Code
# @title Video 2: Why DL is cool
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1gf4y1j7UZ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"l-K6495BN-4", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 2: Why DL is cool')
display(out)
###Output
_____no_output_____
###Markdown
**Describe what you hope to get out of this course in about 100 words.** --- Section 2: The Basics of PyTorch*Time estimate: ~2 hours 05 mins* PyTorch is a Python-based scientific computing package targeted at two sets of audiences:- A replacement for NumPy to use the power of GPUs- A deep learning platform that provides significant flexibility and speedAt its core, PyTorch provides a few key features:- A multidimensional [Tensor](https://pytorch.org/docs/stable/tensors.html) object, similar to a [NumPy Array](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html) but with GPU acceleration.- An optimized **autograd** engine for automatically computing derivatives.- A clean, modular API for building and deploying **deep learning models**.You can find more information about PyTorch in the Appendix. Section 2.1: Creating Tensors
###Code
# @title Video 3: Making Tensors
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Rw411d7Uy", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"jGKd_4tPGrw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 3: Making Tensors')
display(out)
###Output
_____no_output_____
###Markdown
There are various ways of creating tensors, and when doing any real deep learning project we will usually have to do so. **Construct tensors directly:**---
###Code
# We can construct a tensor directly from some common Python iterables,
# such as lists and tuples. Nested iterables can also be handled, as long
# as the dimensions make sense.
# tensor from a list
a = torch.tensor([0, 1, 2])
#tensor from a tuple of tuples
b = ((1.0, 1.1), (1.2, 1.3))
b = torch.tensor(b)
# tensor from a numpy array
c = np.ones([2, 3])
c = torch.tensor(c)
print(f"Tensor a: {a}")
print(f"Tensor b: {b}")
print(f"Tensor c: {c}")
###Output
_____no_output_____
###Markdown
**Some common tensor constructors:**---
###Code
# the numerical arguments we pass to these constructors
# determine the shape of the output tensor
x = torch.ones(5, 3)
y = torch.zeros(2)
z = torch.empty(1, 1, 5)
print(f"Tensor x: {x}")
print(f"Tensor y: {y}")
print(f"Tensor z: {z}")
###Output
_____no_output_____
###Markdown
Notice that `.empty()` does not return zeros, but seemingly random small numbers. Unlike `.zeros()`, which initialises the elements of the tensor with zeros, `.empty()` just allocates the memory. It is hence a bit faster if you are looking to just create a tensor. **Creating random tensors and tensors like other tensors:**---
###Code
# there are also constructors for random numbers
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
# there are also constructors that allow us to construct
# a tensor according to the above constructors, but with
# dimensions equal to another tensor
c = torch.zeros_like(a)
d = torch.rand_like(c)
print(f"Tensor a: {a}")
print(f"Tensor b: {b}")
print(f"Tensor c: {c}")
print(f"Tensor d: {d}")
###Output
_____no_output_____
###Markdown
*Reproducibility*:
- PyTorch random number generator: You can use `torch.manual_seed()` to seed the RNG for all devices (both CPU and GPU):
```python
import torch
torch.manual_seed(0)
```
- For custom operators, you might need to set python seed as well:
```python
import random
random.seed(0)
```
- Random number generators in other libraries (e.g., NumPy):
```python
import numpy as np
np.random.seed(0)
```
Here, we define for you a function called `set_seed` that does the job for you!
###Code
def set_seed(seed=None, seed_torch=True):
"""
Function that controls randomness. NumPy and random modules must be imported.
Args:
seed : Integer
A non-negative integer that defines the random state. Default is `None`.
seed_torch : Boolean
If `True` sets the random seed for pytorch tensors, so pytorch module
must be imported. Default is `True`.
Returns:
Nothing.
"""
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
###Output
_____no_output_____
###Markdown
Now, let's use the `set_seed` function in the previous example. Execute the cell multiple times to verify that the numbers printed are always the same.
###Code
def simplefun(seed=True, my_seed=None):
if seed:
set_seed(seed=my_seed)
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
print("Tensor a: ", a)
print("Tensor b: ", b)
simplefun(seed=True, my_seed=0) # Turn `seed` to `False` or change `my_seed`
###Output
_____no_output_____
###Markdown
**NumPy-like number ranges:**---The ```.arange()``` and ```.linspace()``` behave how you would expect them to if you are familiar with NumPy.
###Code
a = torch.arange(0, 10, step=1)
b = np.arange(0, 10, step=1)
c = torch.linspace(0, 5, steps=11)
d = np.linspace(0, 5, num=11)
print(f"Tensor a: {a}\n")
print(f"Numpy array b: {b}\n")
print(f"Tensor c: {c}\n")
print(f"Numpy array d: {d}\n")
###Output
_____no_output_____
###Markdown
Coding Exercise 2.1: Creating TensorsBelow you will find some incomplete code. Fill in the missing code to construct the specified tensors.We want the tensors: $A:$ 20 by 21 tensor consisting of ones$B:$ a tensor with elements equal to the elements of numpy array $Z$$C:$ a tensor with the same number of elements as $A$ but with values $\sim \mathcal{U}(0,1)^\dagger$$D:$ a 1D tensor containing the even numbers between 4 and 40 inclusive.$^\dagger$: $\mathcal{U(\alpha, \beta)}$ denotes the [uniform distribution](https://en.wikipedia.org/wiki/Continuous_uniform_distribution) from $\alpha$ to $\beta$, with $\alpha, \beta \in \mathbb{R}$.
###Code
def tensor_creation(Z):
"""A function that creates various tensors.
Args:
Z (numpy.ndarray): An array of shape
Returns:
A : 20 by 21 tensor consisting of ones
B : a tensor with elements equal to the elements of numpy array Z
C : a tensor with the same number of elements as A but with values ∼U(0,1)
D : a 1D tensor containing the even numbers between 4 and 40 inclusive.
"""
#################################################
## TODO for students: fill in the missing code
  ## to create the tensors A, B, C, and D described above
  raise NotImplementedError("Student exercise: fill in the missing code to create the tensors")
#################################################
A = ...
B = ...
C = ...
D = ...
return A, B, C, D
# add timing to airtable
atform.add_event('Coding Exercise 2.1: Creating Tensors')
# numpy array to copy later
Z = np.vander([1, 2, 3], 4)
# Uncomment below to check your function!
# A, B, C, D = tensor_creation(Z)
# checkExercise1(A, B, C, D)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_ad4f6c0f.py) ```All correct!``` Section 2.2: Operations in PyTorch
###Code
# @title Video 4: Tensor Operators
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1G44y127As", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"R1R8VoYXBVA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 4: Tensor Operators')
display(out)
###Output
_____no_output_____
###Markdown
**Tensor-Tensor operations**We can perform operations on tensors using methods under `torch.`.
###Code
a = torch.ones(5, 3)
b = torch.rand(5, 3)
c = torch.empty(5, 3)
d = torch.empty(5, 3)
# this only works if c and d already exist
torch.add(a, b, out=c)
#Pointwise Multiplication of a and b
torch.multiply(a, b, out=d)
print(c)
print(d)
###Output
_____no_output_____
###Markdown
However, in PyTorch most common Python operators are overridden. The common standard arithmetic operators ($+$, $-$, $*$, $/$, and $**$) have all been lifted to elementwise operations
###Code
x = torch.tensor([1, 2, 4, 8])
y = torch.tensor([1, 2, 3, 4])
x + y, x - y, x * y, x / y, x**y # The `**` is the exponentiation operator
###Output
_____no_output_____
###Markdown
**Tensor Methods** Tensors also have a number of common arithmetic operations built in. A full list of **all** methods can be found in the appendix (there are a lot!). All of these operations should have similar syntax to their NumPy equivalents (feel free to skip if you already know this!).
###Code
x = torch.rand(3, 3)
print(x)
print("\n")
# sum() - note the axis is the axis you move across when summing
print(f"Sum of every element of x: {x.sum()}")
print(f"Sum of the columns of x: {x.sum(axis=0)}")
print(f"Sum of the rows of x: {x.sum(axis=1)}")
print("\n")
print(f"Mean value of all elements of x {x.mean()}")
print(f"Mean values of the columns of x {x.mean(axis=0)}")
print(f"Mean values of the rows of x {x.mean(axis=1)}")
###Output
_____no_output_____
###Markdown
**Matrix Operations**The `@` symbol is overridden to represent matrix multiplication. You can also use `torch.matmul()` to multiply tensors. For dot multiplication, you can use `torch.dot()`, or manipulate the axes of your tensors and do matrix multiplication (we will cover that in the next section). Transposes of 2D tensors are obtained using `torch.t()` or `Tensor.T`. Note the lack of brackets for `Tensor.T` - it is an attribute, not a method. Coding Exercise 2.2 : Simple tensor operationsBelow are two expressions involving operations on matrices. $$ \textbf{A} = \begin{bmatrix}2 &4 \\5 & 7 \end{bmatrix} \begin{bmatrix} 1 &1 \\2 & 3\end{bmatrix} + \begin{bmatrix}10 & 10 \\ 12 & 1 \end{bmatrix} $$and$$ b = \begin{bmatrix} 3 \\ 5 \\ 7\end{bmatrix} \cdot \begin{bmatrix} 2 \\ 4 \\ 8\end{bmatrix}$$The code block below that computes these expressions using PyTorch is incomplete - fill in the missing lines.
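Before attempting the exercise below, here is a quick illustration of these operators (an added sketch, not the exercise solution):

```python
import torch

A = torch.tensor([[1., 2.], [3., 4.]])
B = torch.tensor([[0., 1.], [1., 0.]])
v = torch.tensor([1., 2., 3.])
w = torch.tensor([4., 5., 6.])

print(A @ B)                # matrix multiplication via the @ operator
print(torch.matmul(A, B))   # same result with torch.matmul()
print(torch.dot(v, w))      # dot product of two 1D tensors -> tensor(32.)
print(A.T)                  # transpose as an attribute (no brackets)
print(torch.t(A))           # same result with torch.t()
```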
###Code
def simple_operations(a1: torch.Tensor, a2: torch.Tensor, a3: torch.Tensor):
################################################
  ## TODO for students: complete the first computation using the argument matrices
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
################################################
# multiplication of tensor a1 with tensor a2 and then add it with tensor a3
answer = ...
return answer
# add timing to airtable
atform.add_event('Coding Exercise 2.2 : Simple tensor operations-simple_operations')
# Computing expression 1:
# init our tensors
a1 = torch.tensor([[2, 4], [5, 7]])
a2 = torch.tensor([[1, 1], [2, 3]])
a3 = torch.tensor([[10, 10], [12, 1]])
## uncomment to test your function
# A = simple_operations(a1, a2, a3)
# print(A)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_5562ea1d.py) ```tensor([[20, 24], [31, 27]])```
###Code
def dot_product(b1: torch.Tensor, b2: torch.Tensor):
###############################################
  ## TODO for students: complete the second computation using the argument tensors
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
###############################################
# Use torch.dot() to compute the dot product of two tensors
product = ...
return product
# add timing to airtable
atform.add_event('Coding Exercise 2.2 : Simple tensor operations-dot_product')
# Computing expression 2:
b1 = torch.tensor([3, 5, 7])
b2 = torch.tensor([2, 4, 8])
## Uncomment to test your function
# b = dot_product(b1, b2)
# print(b)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_00491ea4.py) ```tensor(82)``` Section 2.3 Manipulating Tensors in Pytorch
###Code
# @title Video 5: Tensor Indexing
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1BM4y1K7pD", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"0d0KSJ3lJbg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 5: Tensor Indexing')
display(out)
###Output
_____no_output_____
###Markdown
**Indexing**Just as in numpy, elements in a tensor can be accessed by index. As in any numpy array, the first element has index 0 and ranges are specified to include the first but before the last element. We can access elements according to their relative position to the end of the list by using negative indices. Indexing is also referred to as slicing.For example, `[-1]` selects the last element; `[1:3]` selects the second and the third elements, and `[:-2]` will select all elements excluding the last and second-to-last elements.
###Code
x = torch.arange(0, 10)
print(x)
print(x[-1])
print(x[1:3])
print(x[:-2])
###Output
_____no_output_____
###Markdown
When we have multidimensional tensors, indexing rules work the same way as NumPy.
###Code
# make a 5D tensor
x = torch.rand(1, 2, 3, 4, 5)
print(f" shape of x[0]:{x[0].shape}")
print(f" shape of x[0][0]:{x[0][0].shape}")
print(f" shape of x[0][0][0]:{x[0][0][0].shape}")
###Output
_____no_output_____
###Markdown
**Flatten and reshape**There are various methods for reshaping tensors. It is common to have to express 2D data in 1D format. Similarly, it is also common to have to reshape a 1D tensor into a 2D tensor. We can achieve this with the `.flatten()` and `.reshape()` methods.
###Code
z = torch.arange(12).reshape(6, 2)
print(f"Original z: \n {z}")
# 2D -> 1D
z = z.flatten()
print(f"Flattened z: \n {z}")
# and back to 2D
z = z.reshape(3, 4)
print(f"Reshaped (3x4) z: \n {z}")
###Output
_____no_output_____
###Markdown
You will also see the `.view()` method used a lot to reshape tensors. There is a subtle difference between `.view()` and `.reshape()`, though for now we will just use `.reshape()`. The documentation can be found in the Appendix. **Squeezing tensors**When processing batches of data, you will quite often be left with singleton dimensions. E.g., `[1,10]` or `[256, 1, 3]`. This dimension can quite easily mess up your matrix operations if you don't plan on it being there...In order to compress tensors along their singleton dimensions we can use the `.squeeze()` method. We can use the `.unsqueeze()` method to do the opposite.
###Code
x = torch.randn(1, 10)
# printing the zeroth element of the tensor will not give us the first number!
print(x.shape)
print(f"x[0]: {x[0]}")
###Output
_____no_output_____
###Markdown
Because of that pesky singleton dimension, `x[0]` gave us the first row instead!
###Code
# lets get rid of that singleton dimension and see what happens now
x = x.squeeze(0)
print(x.shape)
print(f"x[0]: {x[0]}")
# adding singleton dimensions works a similar way, and is often used when tensors
# being added need same number of dimensions
y = torch.randn(5, 5)
print(f"shape of y: {y.shape}")
# lets insert a singleton dimension
y = y.unsqueeze(1)
print(f"shape of y: {y.shape}")
###Output
_____no_output_____
###Markdown
**Permutation**Sometimes our dimensions will be in the wrong order! For example, we may be dealing with RGB images with dim $[3\times48\times64]$, but our pipeline expects the colour dimension to be the last dimension, i.e., $[48\times64\times3]$. To get around this we can use the `.permute()` method.
###Code
# `x` has dimensions [color,image_height,image_width]
x = torch.rand(3, 48, 64)
# we want to permute our tensor to be [ image_height , image_width , color ]
x = x.permute(1, 2, 0)
# permute(1,2,0) means:
# the 0th dim of my new tensor = the 1st dim of my old tensor
# the 1st dim of my new tensor = the 2nd
# the 2nd dim of my new tensor = the 0th
print(x.shape)
###Output
_____no_output_____
###Markdown
You may also see `.transpose()` used. This works in a similar way as permute, but can only swap two dimensions at once. **Concatenation** In this example, we concatenate two matrices along rows (axis 0, the first element of the shape) vs. columns (axis 1, the second element of the shape). We can see that the first output tensor’s axis-0 length (`6`) is the sum of the two input tensors’ axis-0 lengths (`3+3`); while the second output tensor’s axis-1 length (`8`) is the sum of the two input tensors’ axis-1 lengths (`4+4`).
###Code
# Create two tensors of the same shape
x = torch.arange(12, dtype=torch.float32).reshape((3, 4))
y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])
#concatenate them along rows
cat_rows = torch.cat((x, y), dim=0)
# concatenate along columns
cat_cols = torch.cat((x, y), dim=1)
# printing outputs
print('Concatenated by rows: shape{} \n {}'.format(list(cat_rows.shape), cat_rows))
print('\n Concatenated by colums: shape{} \n {}'.format(list(cat_cols.shape), cat_cols))
###Output
_____no_output_____
###Markdown
**Conversion to Other Python Objects**Converting to a NumPy tensor, or vice versa, is easy, and the converted result does not share memory. This minor inconvenience is quite important: when you perform operations on the CPU or GPUs, you do not want to halt computation, waiting to see whether the NumPy package of Python might want to be doing something else with the same chunk of memory.When converting to a NumPy array, the information being tracked by the tensor will be lost, i.e., the computational graph. This will be covered in detail when you are introduced to autograd tomorrow!
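For instance (an added illustration, not part of the original cell below), a tensor that is still attached to a computational graph has to be detached before it can be converted:

```python
import torch

t = torch.tensor([1.0, 2.0], requires_grad=True)
# Calling t.numpy() directly would fail because the tensor tracks gradients;
# detach() returns a view without the graph information, which converts cleanly.
arr = t.detach().numpy()
print(arr)  # [1. 2.]
```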
###Code
x = torch.randn(5)
print(f"x: {x} | x type: {x.type()}")
y = x.numpy()
print(f"y: {y} | y type: {type(y)}")
z = torch.tensor(y)
print(f"z: {z} | z type: {z.type()}")
###Output
_____no_output_____
###Markdown
To convert a size-1 tensor to a Python scalar, we can invoke the item function or Python’s built-in functions.
###Code
a = torch.tensor([3.5])
a, a.item(), float(a), int(a)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.3: Manipulating TensorsUsing a combination of the methods discussed above, complete the functions below. **Function A** This function takes in two 2D tensors $A$ and $B$ and returns the column sum of A multiplied by the sum of all the elements of $B$, i.e., a scalar, e.g.,:$ A = \begin{bmatrix}1 & 1 \\1 & 1 \end{bmatrix} \,$and$ B = \begin{bmatrix}1 & 2 & 3\\1 & 2 & 3 \end{bmatrix} \,$so$ \, Out = \begin{bmatrix} 2 & 2 \\\end{bmatrix} \cdot 12 = \begin{bmatrix}24 & 24\\\end{bmatrix}$**Function B** This function takes in a square matrix $C$ and returns a 2D tensor consisting of a flattened $C$ with the index of each element appended to this tensor in the row dimension, e.g.,:$ C = \begin{bmatrix}2 & 3 \\-1 & 10 \end{bmatrix} \,$so$ \, Out = \begin{bmatrix}0 & 2 \\1 & 3 \\2 & -1 \\3 & 10\end{bmatrix}$**Hint:** Pay close attention to singleton dimensions.**Function C**This function takes in two 2D tensors $D$ and $E$. If the dimensions allow it, this function returns the elementwise sum of $D$-shaped $E$, and $D$; else this function returns a 1D tensor that is the concatenation of the two tensors, e.g.,:$ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix} \,$and $ E = \begin{bmatrix}2 & 3 & 0 & 2 \\\end{bmatrix} \, $so$ \, Out = \begin{bmatrix}3 & 2 \\-1 & 5 \end{bmatrix}$$ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix}$and$ \, E = \begin{bmatrix}2 & 3 & 0 \\\end{bmatrix} \,$so$ \, Out = \begin{bmatrix}1 & -1 & -1 & 3 & 2 & 3 & 0 \end{bmatrix}$**Hint:** `torch.numel()` is an easy way of finding the number of elements in a tensor.
###Code
def functionA(my_tensor1, my_tensor2):
"""
This function takes in two 2D tensors `my_tensor1` and `my_tensor2`
and returns the column sum of
`my_tensor1` multiplied by the sum of all the elements of `my_tensor2`,
i.e., a scalar.
Args:
my_tensor1: torch.Tensor
my_tensor2: torch.Tensor
Returns:
output: torch.Tensor
The multiplication of the column sum of `my_tensor1` by the sum of
`my_tensor2`.
"""
################################################
## TODO for students: complete functionA
raise NotImplementedError("Student exercise: complete function A")
################################################
# TODO multiply the column sum of `my_tensor1` by the sum of all elements of `my_tensor2`
output = ...
return output
def functionB(my_tensor):
"""
This function takes in a square matrix `my_tensor` and returns a 2D tensor
consisting of a flattened `my_tensor` with the index of each element
appended to this tensor in the row dimension.
Args:
my_tensor: torch.Tensor
Returns:
output: torch.Tensor
Concatenated tensor.
"""
################################################
## TODO for students: complete functionB
raise NotImplementedError("Student exercise: complete function B")
################################################
# TODO flatten the tensor `my_tensor`
my_tensor = ...
# TODO create the idx tensor to be concatenated to `my_tensor`
idx_tensor = ...
# TODO concatenate the two tensors
output = ...
return output
def functionC(my_tensor1, my_tensor2):
"""
This function takes in two 2D tensors `my_tensor1` and `my_tensor2`.
If the dimensions allow it, it returns the
elementwise sum of `my_tensor1`-shaped `my_tensor2`, and `my_tensor1`;
else this function returns a 1D tensor that is the concatenation of the
two tensors.
Args:
my_tensor1: torch.Tensor
my_tensor2: torch.Tensor
Returns:
output: torch.Tensor
Concatenated tensor.
"""
################################################
## TODO for students: complete functionC
raise NotImplementedError("Student exercise: complete function C")
################################################
# TODO check we can reshape `my_tensor2` into the shape of `my_tensor1`
if ...:
# TODO reshape `my_tensor2` into the shape of `my_tensor1`
my_tensor2 = ...
# TODO sum the two tensors
output = ...
else:
# TODO flatten both tensors
my_tensor1 = ...
my_tensor2 = ...
# TODO concatenate the two tensors in the correct dimension
output = ...
return output
# add timing to airtable
atform.add_event('Coding Exercise 2.3: Manipulating Tensors')
## Implement the functions above and then uncomment the following lines to test your code
# print(functionA(torch.tensor([[1, 1], [1, 1]]), torch.tensor([[1, 2, 3], [1, 2, 3]])))
# print(functionB(torch.tensor([[2, 3], [-1, 10]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0, 2]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0]])))
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_ea1718cb.py)
```
tensor([24, 24])
tensor([[ 0,  2],
        [ 1,  3],
        [ 2, -1],
        [ 3, 10]])
tensor([[ 3,  2],
        [-1,  5]])
tensor([ 1, -1, -1,  3,  2,  3,  0])
```
Section 2.4: GPUs
###Code
# @title Video 6: GPU vs CPU
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1nM4y1K7qx", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"9Mc9GFUtILY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 6: GPU vs CPU')
display(out)
###Output
_____no_output_____
###Markdown
By default, when we create a tensor it will *not* live on the GPU!
###Code
x = torch.randn(10)
print(x.device)
###Output
_____no_output_____
###Markdown
Colab notebooks do not have access to a GPU by default. In order to start using GPUs we need to request one. We can do this by going to the Runtime tab at the top of the page. By following *Runtime* → *Change runtime type* and selecting **GPU** from the *Hardware Accelerator* dropdown list, we can start playing with sending tensors to GPUs. Once you have done this, your runtime will restart and you will need to rerun the first setup cell to reimport PyTorch. Then proceed to the next cell. For more information on the GPU usage policy, see the Appendix.

**Now we have a GPU.** The cell below should return `True`.
###Code
print(torch.cuda.is_available())
###Output
_____no_output_____
###Markdown
[CUDA](https://developer.nvidia.com/cuda-toolkit) is an API developed by Nvidia for interfacing with GPUs. PyTorch provides us with a layer of abstraction, and allows us to launch CUDA kernels using pure Python. In short, we get the power of parallelizing our tensor computations on GPUs, whilst only writing (relatively) simple Python!

Here, we define the function `set_device`, which returns the device used in the notebook, i.e., `cpu` or `cuda`. Unless otherwise specified, we use this function at the top of every tutorial, and we store the device variable like this:
```python
DEVICE = set_device()
```
Let's define the function using the PyTorch package `torch.cuda`, which is lazily initialized, so we can always import it, and use `is_available()` to determine if our system supports CUDA.
###Code
def set_device():
device = "cuda" if torch.cuda.is_available() else "cpu"
if device != "cuda":
print("GPU is not enabled in this notebook. \n"
"If you want to enable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `GPU` from the dropdown menu")
else:
print("GPU is enabled in this notebook. \n"
"If you want to disable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `None` from the dropdown menu")
return device
###Output
_____no_output_____
###Markdown
Let's make some CUDA tensors!
###Code
# common device agnostic way of writing code that can run on cpu OR gpu
# that we provide for you in each of the tutorials
DEVICE = set_device()
# we can specify a device when we first create our tensor
x = torch.randn(2, 2, device=DEVICE)
print(x.dtype)
print(x.device)
# we can also use the .to() method to change the device a tensor lives on
y = torch.randn(2, 2)
print(f"y before calling to() | device: {y.device} | dtype: {y.type()}")
y = y.to(DEVICE)
print(f"y after calling to() | device: {y.device} | dtype: {y.type()}")
###Output
_____no_output_____
###Markdown
**Operations between CPU tensors and CUDA tensors**

Note that the type of the tensor changed after calling `.to()`. What happens if we try to perform operations that combine tensors living on different devices?
###Code
x = torch.tensor([0, 1, 2], device=DEVICE)
y = torch.tensor([3, 4, 5], device="cpu")
# Uncomment the following line and run this cell
# z = x + y
###Output
_____no_output_____
###Markdown
We cannot combine CUDA tensors and CPU tensors in this fashion. If we want to compute an operation that combines tensors on different devices, we need to move them first! We can use the `.to()` method as before, or the `.cpu()` and `.cuda()` methods. Note that using `.cuda()` will throw an error if CUDA is not enabled on your machine. Generally, in this course, all deep learning is done on the GPU, while other computation (e.g., plotting) is done on the CPU, so sometimes we have to pass tensors back and forth; you will see us do this often, as in the next cell.
###Code
x = torch.tensor([0, 1, 2], device=DEVICE)
y = torch.tensor([3, 4, 5], device="cpu")
z = torch.tensor([6, 7, 8], device=DEVICE)
# moving to cpu
x = x.to("cpu") # alternatively, you can use x = x.cpu()
print(x + y)
# moving to gpu
y = y.to(DEVICE) # alternatively, you can use y = y.cuda()
print(y + z)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.4: Just how much faster are GPUs?

Below is a simple function `simpleFun`. Complete this function, such that it performs the operations:
- elementwise multiplication
- matrix multiplication

It should be possible to perform the operations on either the CPU or the GPU, as specified by the parameter `device`. We will use the helper function `timeFun(f, dim, iterations, device)`.
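The timing helper `timeFun` is defined in a hidden setup cell earlier in the notebook, so you do not need to write it yourself. As a rough, hypothetical sketch of what such a helper has to do (the actual implementation may differ), note the `torch.cuda.synchronize()` call: CUDA kernels are launched asynchronously, so without synchronizing, the timer would stop before the GPU work has actually finished.

```python
import time
import torch

def timeFun_sketch(f, dim, iterations, device="cpu"):
    """Illustrative stand-in for the notebook's hidden `timeFun` helper."""
    t_total = 0.0
    for _ in range(iterations):
        start = time.time()
        f(dim, device)
        if device != "cpu":
            # wait for all queued GPU kernels to finish before stopping the clock
            torch.cuda.synchronize()
        t_total += time.time() - start
    print(f"time taken for {iterations} iterations of {f.__name__}({dim}, {device}): "
          f"{t_total:.5f}")
```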
###Code
dim = 10000
iterations = 1
def simpleFun(dim, device):
"""
Args:
dim: integer
device: "cpu" or "cuda"
Returns:
Nothing.
"""
###############################################
## TODO for students: recreate the function, but
## ensure all computations happens on the `device`
raise NotImplementedError("Student exercise: fill in the missing code to create the tensors")
###############################################
# 2D tensor filled with uniform random numbers in [0,1), dim x dim
x = ...
# 2D tensor filled with uniform random numbers in [0,1), dim x dim
y = ...
# 2D tensor filled with the scalar value 2, dim x dim
z = ...
# elementwise multiplication of x and y
a = ...
# matrix multiplication of x and y
b = ...
del x
del y
del z
del a
del b
## TODO: Implement the function above and uncomment the following lines to test your code
# timeFun(f=simpleFun, dim=dim, iterations=iterations)
# timeFun(f=simpleFun, dim=dim, iterations=iterations, device=DEVICE)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_232a94a4.py) Sample output (depends on your hardware)
```
time taken for 1 iterations of simpleFun(10000, cpu): 23.74070
time taken for 1 iterations of simpleFun(10000, cuda): 0.87535
```
**Discuss!** Try to reduce the dimensions of the tensors and increase the iterations. You can get to a point where the CPU-only function is faster than the GPU function. Why might this be?

Section 2.5: Datasets and Dataloaders
###Code
# @title Video 7: Getting Data
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1744y127SQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"LSkjPM1gFu0", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 7: Getting Data')
display(out)
###Output
_____no_output_____
###Markdown
When training neural network models you will be working with large amounts of data. Fortunately, PyTorch offers some great tools that help you organize and manipulate your data samples.
###Code
# Import dataset and dataloaders related packages
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
from torchvision.transforms import Compose, Grayscale
###Output
_____no_output_____
###Markdown
**Datasets**

The `torchvision` package gives you easy access to many of the publicly available datasets. Let's load the [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset, which contains color images of 10 different classes, like vehicles and animals. Creating an object of type `datasets.CIFAR10` will automatically download and load all images from the dataset. The resulting data structure can be treated as a list containing data samples and their corresponding labels.
###Code
# Download and load the images from the CIFAR10 dataset
cifar10_data = datasets.CIFAR10(
root="data", # path where the images will be stored
download=True, # all images should be downloaded
transform=ToTensor() # transform the images to tensors
)
# Print the number of samples in the loaded dataset
print(f"Number of samples: {len(cifar10_data)}")
print(f"Class names: {cifar10_data.classes}")
###Output
_____no_output_____
###Markdown
We have 50000 samples loaded. Now let's take a look at one of them in detail. Each sample consists of an image and its corresponding label.
###Code
# Choose a random sample
random.seed(2021)
image, label = cifar10_data[random.randint(0, len(cifar10_data) - 1)]
print(f"Label: {cifar10_data.classes[label]}")
print(f"Image size: {image.shape}")
###Output
_____no_output_____
###Markdown
Color images are modeled as 3-dimensional tensors. The first dimension corresponds to the channels (C) of the image (in this case we have RGB images). The second dimension is the height (H) of the image and the third is the width (W). We can denote this image format as C × H × W.

Coding Exercise 2.5: Display an image from the dataset

Let's try to display the image using `matplotlib`. The code below will not work, because `imshow` expects the image in a different format: $H \times W \times C$. You need to reorder the dimensions of the tensor using the `permute` method of the tensor. PyTorch's `Tensor.permute(*dims)` rearranges the original tensor according to the desired ordering and returns a view with the dimensions permuted. The number of elements of the returned tensor remains the same as that of the original.

**Code hint:**
```python
# create a tensor of size 2 x 4
input_var = torch.randn(2, 4)
# print its size and the tensor
print(input_var.size())
print(input_var)
# dimensions permuted
input_var = input_var.permute(1, 0)
# print its size and the permuted tensor
print(input_var.size())
print(input_var)
```
###Code
# TODO: Uncomment the following line to see the error that arises from the current image format
# plt.imshow(image)
# TODO: Comment the above line and fix this code by reordering the tensor dimensions
# plt.imshow(image.permute(...))
# plt.show()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_b04bd357.py)*Example output:*
###Code
#@title Video 8: Train and Test
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1rV411H7s5", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JokSIuPs-ys", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 8: Train and Test')
display(out)
###Output
_____no_output_____
###Markdown
**Training and Test Datasets**

When loading a dataset, you can specify whether you want to load the training or the test samples using the `train` argument. We can load the training and test datasets separately. For simplicity, today we will not use both datasets separately; this topic will be addressed in the coming days.
###Code
# Load the training samples
training_data = datasets.CIFAR10(
root="data",
train=True,
download=True,
transform=ToTensor()
)
# Load the test samples
test_data = datasets.CIFAR10(
root="data",
train=False,
download=True,
transform=ToTensor()
)
# @title Video 9: Data Augmentation - Transformations
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV19B4y1N77t", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"sjegA9OBUPw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 9: Data Augmentation - Transformations')
display(out)
###Output
_____no_output_____
###Markdown
**Dataloader**

Another important concept is the `DataLoader`. It is a wrapper around the `Dataset` that splits it into minibatches (important for training the neural network) and makes the data iterable. The `shuffle` argument is used to shuffle the order of the samples across the minibatches.
###Code
# Create dataloaders with
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
*Reproducibility:* The `DataLoader` will reseed workers following the "Randomness in multi-process data loading" algorithm. Use `worker_init_fn()` and a `generator` to preserve reproducibility:
```python
def seed_worker(worker_id):
  worker_seed = torch.initial_seed() % 2**32
  numpy.random.seed(worker_seed)
  random.seed(worker_seed)

g_seed = torch.Generator()
g_seed.manual_seed(my_seed)

DataLoader(
    train_dataset,
    batch_size=batch_size,
    num_workers=num_workers,
    worker_init_fn=seed_worker,
    generator=g_seed
    )
```
**Important:** For the `seed_worker` to have an effect, `num_workers` should be 2 or more.

We can now query the next batch from the data loader and inspect it. For this we need to convert the dataloader object to a Python iterator using the function `iter`, and then we can query the next batch using the function `next`. We can now see that we have a 4D tensor. This is because we have 64 images in the batch ($B$) and each image has 3 dimensions: channels ($C$), height ($H$) and width ($W$). So, the size of the 4D tensor is $B \times C \times H \times W$.
###Code
# Load the next batch
batch_images, batch_labels = next(iter(train_dataloader))
print('Batch size:', batch_images.shape)
# Display the first image from the batch
plt.imshow(batch_images[0].permute(1, 2, 0))
plt.show()
###Output
_____no_output_____
###Markdown
**Transformations**

Another useful feature when loading a dataset is applying transformations on the data - color conversions, normalization, cropping, rotation, etc. There are many predefined transformations in the `torchvision.transforms` package, and you can also combine them using the `Compose` transform. Check out the [PyTorch documentation](https://pytorch.org/vision/stable/transforms.html) for details.

Coding Exercise 2.6: Load the CIFAR10 dataset as grayscale images

The goal of this exercise is to load the images from the CIFAR10 dataset as grayscale images. Note that we rerun the `set_seed` function to ensure reproducibility.
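Before tackling the exercise, here is a minimal sketch of how `Compose` chains transforms. It deliberately uses `Normalize` rather than the transform you will need below, so it does not give the answer away; the mean/std values are only illustrative placeholders.

```python
from torchvision import datasets
from torchvision.transforms import Compose, Normalize, ToTensor

# Chain two transforms: convert the PIL image to a tensor, then normalize each RGB channel
preprocess = Compose([
    ToTensor(),
    Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),  # placeholder statistics
])

normalized_data = datasets.CIFAR10(root="data", download=True, transform=preprocess)
image, label = normalized_data[0]
print(image.min(), image.max())  # values now lie roughly in [-1, 1]
```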
###Code
def my_data_load():
###############################################
## TODO for students: load the CIFAR10 data,
## but as grayscale images and not as RGB colored.
raise NotImplementedError("Student exercise: fill in the missing code to load the data")
###############################################
## TODO Load the CIFAR10 data using a transform that converts the images to grayscale tensors
data = datasets.CIFAR10(...,
transform=...)
# Display a random grayscale image
image, label = data[random.randint(0, len(data) - 1)]
plt.imshow(image.squeeze(), cmap="gray")
plt.show()
return data
set_seed(seed=2021)
## After implementing the above code, uncomment the following lines to test your code
# data = my_data_load()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_6052d728.py)*Example output:*

--- Section 3: Neural Networks

*Time estimate: ~1 hour 30 mins (excluding video)*

Now it's time for you to create your first neural network using PyTorch. This section will walk you through the process of:
- Creating a simple neural network model
- Training the network
- Visualizing the results of the network
- Tweaking the network
###Code
# @title Video 10: CSV Files
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1xy4y1T7kv", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JrC_UAJWYKU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 10: CSV Files')
display(out)
###Output
_____no_output_____
###Markdown
Section 3.1: Data Loading

First we need some sample data to train our network on. You can use the function below to generate an example dataset consisting of 2D points along two interleaving half circles. The data will be stored in a file called `sample_data.csv`. You can inspect the file directly in Colab by going to Files on the left side and opening the CSV file.
###Code
# @title Generate sample data
# @markdown we used `scikit-learn` module
from sklearn.datasets import make_moons
# Create a dataset of 256 points with a little noise
X, y = make_moons(256, noise=0.1)
# Store the data as a Pandas data frame and save it to a CSV file
df = pd.DataFrame(dict(x0=X[:,0], x1=X[:,1], y=y))
df.to_csv('sample_data.csv')
###Output
_____no_output_____
###Markdown
Now we can load the data from the CSV file using the Pandas library. Pandas provides many functions for reading files in various formats. When loading data from a CSV file, we can reference the columns directly by their names.
###Code
# Load the data from the CSV file in a Pandas DataFrame
data = pd.read_csv("sample_data.csv")
# Create a 2D numpy array from the x0 and x1 columns
X_orig = data[["x0", "x1"]].to_numpy()
# Create a 1D numpy array from the y column
y_orig = data["y"].to_numpy()
# Print the sizes of the generated 2D points X and the corresponding labels Y
print(f"Size X:{X_orig.shape}")
print(f"Size y:{y_orig.shape}")
# Visualize the dataset. The color of the points is determined by the labels `y_orig`.
plt.scatter(X_orig[:, 0], X_orig[:, 1], s=40, c=y_orig)
plt.show()
###Output
_____no_output_____
###Markdown
**Prepare Data for PyTorch**

Now let's prepare the data in a format suitable for PyTorch - convert everything into tensors.
###Code
# Initialize the device variable
DEVICE = set_device()
# Convert the 2D points to a float32 tensor
X = torch.tensor(X_orig, dtype=torch.float32)
# Upload the tensor to the device
X = X.to(DEVICE)
print(f"Size X:{X.shape}")
# Convert the labels to a long integer tensor
y = torch.from_numpy(y_orig).type(torch.LongTensor)
# Upload the tensor to the device
y = y.to(DEVICE)
print(f"Size y:{y.shape}")
###Output
_____no_output_____
###Markdown
Section 3.2: Create a Simple Neural Network
###Code
# @title Video 11: Generating the Neural Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1fK4y1M74a", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"PwSzRohUvck", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 11: Generating the Neural Network')
display(out)
###Output
_____no_output_____
###Markdown
For this example we want to have a simple neural network consisting of 3 layers:
- 1 input layer of size 2 (our points have 2 coordinates)
- 1 hidden layer of size 16 (you can play with different numbers here)
- 1 output layer of size 2 (we want to have the scores for the two classes)

During the course you will deal with different kinds of neural networks. On Day 2 we will focus on linear networks, but you will work with some more complicated architectures in the next days. The example here is meant to demonstrate the process of creating and training a neural network end-to-end.

**Programming the Network**

PyTorch provides a base class for all neural network modules called [`nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html). You need to inherit from `nn.Module` and implement some important methods:

* `__init__` In the `__init__` method you need to define the structure of your network. Here you will specify what layers the network will consist of, what activation functions will be used, etc.
* `forward` All neural network modules need to implement the `forward` method. It specifies the computations the network needs to do when data is passed through it.
* `predict` This is not an obligatory method of a neural network module, but it is good practice if you want to quickly get the most likely label from the network. It calls the `forward` method and chooses the label with the highest score.
* `train` This is also not an obligatory method, but it is good practice to have one. This method will be used to train the network parameters and will be implemented later in the notebook.

**Note:** You can use the `__call__` method of a module directly and it will invoke the `forward` method: `net()` does the same as `net.forward()`.
###Code
# Inherit from nn.Module - the base class for neural network modules provided by Pytorch
class NaiveNet(nn.Module):
# Define the structure of your network
def __init__(self):
super(NaiveNet, self).__init__()
# The network is defined as a sequence of operations
self.layers = nn.Sequential(
nn.Linear(2, 16), # Transformation from the input to the hidden layer
nn.ReLU(), # Activation function (ReLU) is a widely used non-linearity; it is cheap to compute. The function returns 0 if it receives any
# negative input, but for any positive value x, it returns that value back.
nn.Linear(16, 2), # Transformation from the hidden to the output layer
)
# Specify the computations performed on the data
def forward(self, x):
# Pass the data through the layers
return self.layers(x)
# Choose the most likely label predicted by the network
def predict(self, x):
# Pass the data through the networks
output = self.forward(x)
# Choose the label with the highest score
return torch.argmax(output, 1)
# Train the neural network (will be implemented later)
def train(self, X, y):
pass
###Output
_____no_output_____
###Markdown
**Check that your network works**

Create an instance of your model and visualize it.
###Code
# Create new NaiveNet and transfer it to the device
model = NaiveNet().to(DEVICE)
# Print the structure of the network
print(model)
###Output
_____no_output_____
###Markdown
Coding Exercise 3.2: Classify some samples

Now let's pass some of the points of our dataset through the network and see if it works. You should not expect the network to actually classify the points correctly, because it has not been trained yet. The goal here is just to get some experience with the data structures that are passed to the `forward` and `predict` methods and their results.
###Code
## Get the samples
# X_samples = ...
# print("Sample input:\n", X_samples)
## Do a forward pass of the network
# output = ...
# print("\nNetwork output:\n", output)
## Predict the label of each point
# y_predicted = ...
# print("\nPredicted labels:\n", y_predicted)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_af8ae0ff.py)
```
Sample input:
 tensor([[ 0.9066,  0.5052],
        [-0.2024,  1.1226],
        [ 1.0685,  0.2809],
        [ 0.6720,  0.5097],
        [ 0.8548,  0.5122]], device='cuda:0')

Network output:
 tensor([[ 0.1543, -0.8018],
        [ 2.2077, -2.9859],
        [-0.5745, -0.0195],
        [ 0.1924, -0.8367],
        [ 0.1818, -0.8301]], device='cuda:0', grad_fn=)

Predicted labels:
 tensor([0, 0, 1, 0, 0], device='cuda:0')
```
Section 3.3: Train Your Neural Network
###Code
# @title Video 12: Train the Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1v54y1n7CS", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"4MIqnE4XPaA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 12: Train the Network')
display(out)
###Output
_____no_output_____
###Markdown
Now it is time to train your network on your dataset. Don't worry if you don't fully understand everything yet - we will cover training in much more detail in the coming days. For now, the goal is just to see your network in action! You will usually implement the `train` method directly when implementing your class `NaiveNet`. Here, we will implement it as a function outside of the class in order to have it in a separate cell.
###Code
# @title Helper function to plot the decision boundary
# Code adapted from this notebook: https://jonchar.net/notebooks/Artificial-Neural-Network-with-Keras/
from pathlib import Path
def plot_decision_boundary(model, X, y, device):
# Transfer the data to the CPU
X = X.cpu().numpy()
y = y.cpu().numpy()
# Check if the frames folder exists and create it if needed
frames_path = Path("frames")
if not frames_path.exists():
frames_path.mkdir()
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Predict the function value for the whole grid
grid_points = np.c_[xx.ravel(), yy.ravel()]
grid_points = torch.from_numpy(grid_points).type(torch.FloatTensor)
Z = model.predict(grid_points.to(device)).cpu().numpy()
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.binary)
# Implement the train function given a training dataset X and corresponding labels y
def train(model, X, y):
# The Cross Entropy Loss is suitable for classification problems
loss_function = nn.CrossEntropyLoss()
# Create an optimizer (Stochastic Gradient Descent) that will be used to train the network
learning_rate = 1e-2
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
# Number of epochs
epochs = 15000
# List of losses for visualization
losses = []
for i in range(epochs):
# Pass the data through the network and compute the loss
# We'll use the whole dataset during the training instead of using batches
# in order to keep the code simple for now.
y_logits = model.forward(X)
loss = loss_function(y_logits, y)
# Clear the previous gradients and compute the new ones
optimizer.zero_grad()
loss.backward()
# Adapt the weights of the network
optimizer.step()
# Store the loss
losses.append(loss.item())
# Print the results at every 1000th epoch
if i % 1000 == 0:
print(f"Epoch {i} loss is {loss.item()}")
plot_decision_boundary(model, X, y, DEVICE)
plt.savefig('frames/{:05d}.png'.format(i))
return losses
# Create a new network instance and train it
model = NaiveNet().to(DEVICE)
losses = train(model, X, y)
###Output
_____no_output_____
###Markdown
**Plot the loss during training**Plot the loss during the training to see how it reduces and converges.
###Code
plt.plot(np.linspace(1, len(losses), len(losses)), losses)
plt.xlabel("Epoch")
plt.ylabel("Loss")
# @title Visualize the training process
# @markdown Execute this cell!
!pip install imageio --quiet
!pip install pathlib --quiet
import imageio
from IPython.core.interactiveshell import InteractiveShell
from IPython.display import Image, display
from pathlib import Path
InteractiveShell.ast_node_interactivity = "all"
# Make a list with all images
images = []
for i in range(15):
filename = "frames/{:05d}.png".format(i * 1000)
images.append(imageio.imread(filename))
# Save the gif
imageio.mimsave('frames/movie.gif', images)
gifPath = Path("frames/movie.gif")
with open(gifPath,'rb') as f:
display(Image(data=f.read(), format='png'))
# @title Video 13: Play with it
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Cq4y1W7BH", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"_GGkapdOdSY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 13: Play with it')
display(out)
###Output
_____no_output_____
###Markdown
Exercise 3.3: Tweak your Network

You can now play around with the network a little bit to get a feeling for what the different parameters are doing. Here are some ideas for what you could try:
- Increase or decrease the number of epochs for training
- Increase or decrease the size of the hidden layer
- Add one additional hidden layer

Can you get the network to better fit the data?
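For the third idea, a sketch of what a variant with one extra hidden layer could look like is shown below (the layer sizes are arbitrary choices for you to experiment with; this is not an official solution). You can train it with the same `train` function defined above, e.g., `losses = train(DeeperNaiveNet().to(DEVICE), X, y)`.

```python
import torch
import torch.nn as nn

class DeeperNaiveNet(nn.Module):
    """NaiveNet with one additional hidden layer (sketch, not the official solution)."""
    def __init__(self, hidden_size=16):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(2, hidden_size),            # input -> first hidden layer
            nn.ReLU(),
            nn.Linear(hidden_size, hidden_size),  # extra hidden layer
            nn.ReLU(),
            nn.Linear(hidden_size, 2),            # hidden -> output scores for the two classes
        )

    def forward(self, x):
        return self.layers(x)

    def predict(self, x):
        return torch.argmax(self.forward(x), 1)
```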
###Code
# @title Video 14: XOR Widget
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1mB4y1N7QS", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"oTr1nE2rCWg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 14: XOR Widget')
display(out)
###Output
_____no_output_____
###Markdown
The exclusive OR (XOR) logical operation gives a true (`1`) output when the number of true inputs is odd. That is, a true output results if one, and only one, of the inputs to the gate is true. If both inputs are false (`0`) or both are true (`1`), a false output results. Mathematically speaking, XOR represents the inequality function, i.e., the output is true if the inputs are not alike; otherwise, the output is false.

In the case of two inputs ($X$ and $Y$) the following truth table applies:

\begin{matrix} X & Y & \text{XOR}\\ \hline 0 & 0 & 0\\ 0 & 1 & 1\\ 1 & 0 & 1\\ 1 & 1 & 0\end{matrix}

Here, with `0` we denote `False`, and with `1` we denote `True` in boolean terms.

Interactive Demo 3.3: Solving XOR

Here we use an open-source and famous visualization widget developed by the TensorFlow team, available [here](https://github.com/tensorflow/playground).
* Play with the widget and observe that you cannot solve the continuous XOR dataset.
* Now add one hidden layer with three units, play with the widget, and set the weights by hand to solve this dataset perfectly.

For the second part, you should set the weights by clicking on the connections and either typing the value or using the up and down keys to change it by one increment. You can also do the same for the biases by clicking on the tiny square at each neuron's bottom left. Even though there are infinitely many solutions, a neat solution when $f(x)$ is ReLU is:

\begin{equation} y = f(x_1)+f(x_2)-f(x_1+x_2)\end{equation}

Try to set the weights and biases to implement this function after you have played enough :)
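As a quick numerical sanity check of this formula (a sketch only; here $f$ is ReLU, and the four points below are one representative from each quadrant of the continuous XOR dataset), the output is positive exactly when the two inputs have different signs, which is what separates the two classes:

```python
import torch

def relu_xor(x1, x2):
    # y = f(x1) + f(x2) - f(x1 + x2), with f = ReLU
    f = torch.relu
    return f(x1) + f(x2) - f(x1 + x2)

# one representative point from each quadrant of the continuous XOR dataset
points = torch.tensor([[1., 1.], [1., -1.], [-1., 1.], [-1., -1.]])
for x1, x2 in points:
    y = relu_xor(x1, x2)
    print(f"x1={x1.item():+.0f}, x2={x2.item():+.0f} -> y={y.item():.1f}")
```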
###Code
# @markdown Play with the parameters to solve XOR
from IPython.display import IFrame
IFrame("https://playground.arashash.com/#activation=relu&batchSize=10&dataset=xor®Dataset=reg-plane&learningRate=0.03®ularizationRate=0&noise=0&networkShape=&seed=0.91390&showTestData=false&discretize=false&percTrainData=90&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false", width=1020, height=660)
# @markdown Do you think we can solve the discrete XOR (only 4 possibilities) with only 2 hidden units?
w1_min_xor = 'Select' #@param ['Select', 'Yes', 'No']
if w1_min_xor == 'No':
print("Correct!")
else:
print("How about giving it another try?")
###Output
_____no_output_____
###Markdown
--- Section 4: Ethics And Course Info
###Code
# @title Video 15: Ethics
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Hw41197oB", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Kt6JLi3rUFU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Video 16: Be a group
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1j44y1272h", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Sfp6--d_H1A", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Video 17: Syllabus
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1iB4y1N7uQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"cDvAqG_hAvQ", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Meet our lecturers

Week 1: the building blocks
* [Konrad Kording](https://kordinglab.com)
* [Andrew Saxe](https://www.saxelab.org/)
* [Surya Ganguli](https://ganguli-gang.stanford.edu/)
* [Ioannis Mitliagkas](http://mitliagkas.github.io/)
* [Lyle Ungar](https://www.cis.upenn.edu/~ungar/)

Week 2: making things work
* [Alona Fyshe](https://webdocs.cs.ualberta.ca/~alona/)
* [Alexander Ecker](https://eckerlab.org/)
* [James Evans](https://sociology.uchicago.edu/directory/james-evans)
* [He He](https://hhexiy.github.io/)
* [Vikash Gilja](https://tnel.ucsd.edu/bio) and [Akash Srivastava](https://akashgit.github.io/)

Week 3: more magic
* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)
* [Jane Wang](http://www.janexwang.com/) and [Feryal Behbahani](https://feryal.github.io/)
* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)
* [Josh Vogelstein](https://jovo.me/) and [Vincenzo Lomonaco](https://www.vincenzolomonaco.com/)

Now, go to the [visualization of ICLR papers](https://iclr.cc/virtual/2021/paper_vis.html). Read a few abstracts. Look at the various clusters. Where do you see yourself in this map?

--- Submit to Airtable
###Code
# @title Video 18: Submission info
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1e44y127ti", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JwTn7ej2dq8", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
This is Darryl, the Deep Learning Dapper Lion, and he's here to teach you about content submission to Airtable. At the end of each tutorial there will be an Airtable Submission Cell. Run the cell to generate the Airtable submission button and click on it to submit your information to Airtable. If it is the last tutorial of the day, your button will look like this and take you to the end-of-day survey; otherwise it will look like this: It is critical that you push the submit button for every tutorial you run, **even if you don't finish the tutorial, still submit!** Submitting is the only way we can verify that you attempted each tutorial, which is critical for us to be able to track your progress.

TL;DR: Basic tutorial workflow
1. Work through the tutorial, answering **Think!** questions and **Coding Exercises**.
2. At the end of each tutorial (even if the tutorial is incomplete), run the Airtable submission code cell.
3. Push the *Submission* button.
4. If it is the last tutorial of the day, the *Submission* button will also take you to the end-of-day survey on a new page. Complete that and submit it.

Submission FAQs:
1. What if I want to change my answers to previous discussion questions? > You are free to change and resubmit any of the answers and Think! questions as many times as you like. However, **please only run the Airtable submission code and click on the link once you are ready to submit**.
2. Okay, but what if I submitted my Airtable anyway and really want to resubmit? > After making changes, you can re-run the Airtable submission code cell. This will result in a second submission from you for the data. This will make Darryl sad, as it will be more work for him to clean up the data later.
3. HELP! I accidentally ran the code to generate the Airtable submission button before I was ready to submit! What do I do? > If you run the code to generate the link, anything that happens afterwards will not be captured. Complete the tutorial and make sure to re-run the Airtable submission cell again when you are finished, before pressing the submission button.
4. What if I want to work on this on my own later, should I wait to submit until I'm finished? > Please submit wherever you are at the end of the day. It's great that you want to keep working on this, but it's important for us to see the places where we tried things that didn't quite work out, so we can fix them for next year.

Finally, we try to keep the Airtable code as hidden as possible, but if you ever see any calls to `atform`, such as `atform.add_event()`, in the coding exercises, just know that it is for saving Airtable information only. **It will not affect the code that is being run around it in any way**, so please do not modify, comment out, or worry about any of those lines of code. Now, let's try submitting today's course to Airtable by running the next cell and clicking the button when it appears.
###Code
# @title Airtable Submission Link
from IPython import display as IPyDisplay
IPyDisplay.HTML(
f"""
<div>
<a href= "{atform.url()}" target="_blank">
<img src="https://github.com/NeuromatchAcademy/course-content-dl/blob/main/tutorials/static/SurveyButton.png?raw=1"
alt="button link to survey" style="width:410px"></a>
</div>""" )
###Output
_____no_output_____
###Markdown
--- Bonus - 60 years of Machine Learning Research in one Plot

By [Hendrik Strobelt](http://hendrik.strobelt.com) (MIT-IBM Watson AI Lab) with support from Benjamin Hoover.

In this notebook we visualize a subset* of 3,300 articles retrieved from the AllenAI [S2ORC dataset](https://github.com/allenai/s2orc). We represent each paper by a position that is the output of a dimensionality reduction method applied to a vector representation of each paper. The vector representation is the output of a neural network.

**Note:** The selection is very biased by the keywords and methodology we used to filter. Please see the details section to learn about what we did.
###Code
# @title Import `altair` and load the data
!pip install altair vega_datasets --quiet
import requests
import altair as alt # altair is defining data visualizations
# Source data files
# Position data file maps ID to x,y positions
# original link: http://gltr.io/temp/ml_regexv1_cs_ma_citation+_99perc.pos_umap_cosine_100_d0.1.json
POS_FILE = 'https://osf.io/qyrfn/download'
# original link: http://gltr.io/temp/ml_regexv1_cs_ma_citation+_99perc_clean.csv
# Metadata file maps ID to title, abstract, author,....
META_FILE = 'https://osf.io/vfdu6/download'
# data loading and wrangling
def load_data():
positions = pd.read_json(POS_FILE)
positions[['x', 'y']] = positions['pos'].to_list()
meta = pd.read_csv(META_FILE)
return positions.merge(meta, left_on='id', right_on='paper_id')
# load data
data = load_data()
# @title Define Visualization using ALtair
YEAR_PERIOD = "quinquennial" # @param
selection = alt.selection_multi(fields=[YEAR_PERIOD], bind='legend')
data[YEAR_PERIOD] = (data["year"] / 5.0).apply(np.floor) * 5
chart = alt.Chart(data[["x", "y", "authors", "title", YEAR_PERIOD, "citation_count"]], width=800,
height=800).mark_circle(radius=2, opacity=0.2).encode(
alt.Color(YEAR_PERIOD+':O',
scale=alt.Scale(scheme='viridis', reverse=False, clamp=True, domain=list(range(1955,2020,5))),
# legend=alt.Legend(title='Total Records')
),
alt.Size('citation_count',
scale=alt.Scale(type="pow", exponent=1, range=[15, 300])
),
alt.X('x:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
alt.Y('y:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
tooltip=['title', 'authors'],
# size='citation_count',
# color="decade:O",
opacity=alt.condition(selection, alt.value(.8), alt.value(0.2)),
).add_selection(
selection
).interactive()
###Output
_____no_output_____
###Markdown
Let's look at the visualization. Each dot represents one paper. Close dots mean that the respective papers are more closely related than distant ones. The color indicates the 5-year period of when the paper was published. The dot size indicates the citation count (within the S2ORC corpus) as of July 2020. The view is **interactive** and allows for three main interactions. Try them and play around:
1. Hover over a dot to see a tooltip (title, author)
2. Select a year in the legend (right) to filter dots
3. Zoom in/out with scroll -- double click resets view
###Code
chart
###Output
_____no_output_____
###Markdown
Questions

By playing around, can you find some answers to the following questions?
1. Can you find topical clusters? What cluster might occur because of a filtering error?
2. Can you see a temporal trend in the data and clusters?
3. Can you determine when deep learning methods started booming?
4. Can you find the key papers that were written before the DL "winter" that define milestones for a cluster? (tip: look for large dots of different color)

Methods

Here is what we did:
1. Filtering of all papers that fulfilled the criteria:
   - are categorized as `Computer Science` or `Mathematics`
   - one of the following keywords appearing in title or abstract: `"machine learning|artificial intelligence|neural network|(machine|computer) vision|perceptron|network architecture| RNN | CNN | LSTM | BLEU | MNIST | CIFAR |reinforcement learning|gradient descent| Imagenet "`
2. Per year, remove all papers that are below the 99th percentile of citation count in that year
3. Embed each paper by using abstract+title in the SPECTER model
4. Project based on embedding using UMAP
5. Visualize using Altair

Find Authors
###Code
# @title Edit the `AUTHOR_FILTER` variable to full text search for authors.
AUTHOR_FILTER = "Rush " # @param space at the end means "word border"
### Don't ignore case when searching...
FLAGS = 0
### uncomment to ignore case
# FLAGS = re.IGNORECASE
## --- FILTER CODE.. make it your own ---
import re
data['issel'] = data['authors'].str.contains(AUTHOR_FILTER, na=False, flags=FLAGS, )
if data['issel'].mean()<0.0000000001:
print('No match found')
## --- FROM HERE ON VIS CODE ---
alt.Chart(data[["x", "y", "authors", "title", YEAR_PERIOD, "citation_count", "issel"]], width=800,
height=800) \
.mark_circle(stroke="black", strokeOpacity=1).encode(
alt.Color(YEAR_PERIOD+':O',
scale=alt.Scale(scheme='viridis', reverse=False),
# legend=alt.Legend(title='Total Records')
),
alt.Size('citation_count',
scale=alt.Scale(type="pow", exponent=1, range=[15, 300])
),
alt.StrokeWidth('issel:Q', scale=alt.Scale(type="linear", domain=[0,1], range=[0, 2]), legend=None),
alt.Opacity('issel:Q', scale=alt.Scale(type="linear", domain=[0,1], range=[.2, 1]), legend=None),
alt.X('x:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
alt.Y('y:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
tooltip=['title', 'authors'],
).interactive()
###Output
_____no_output_____
###Markdown
There are various ways of creating tensors, and when doing any real deep learning project we will usually have to do so. **Construct tensors directly:**---
###Code
# we can construct a tensor directly from some common python iterables,
# such as list and tuple; nested iterables can also be handled as long as the
# dimensions make sense
# tensor from a list
a = torch.tensor([0, 1, 2])
#tensor from a tuple of tuples
b = ((1.0, 1.1), (1.2, 1.3))
b = torch.tensor(b)
# tensor from a numpy array
c = np.ones([2, 3])
c = torch.tensor(c)
print(f"Tensor a: {a}")
print(f"Tensor b: {b}")
print(f"Tensor c: {c}")
###Output
_____no_output_____
###Markdown
**Some common tensor constructors:**---
###Code
# the numerical arguments we pass to these constructors
# determine the shape of the output tensor
x = torch.ones(5, 3)
y = torch.zeros(2)
z = torch.empty(1, 1, 5)
print(f"Tensor x: {x}")
print(f"Tensor y: {y}")
print(f"Tensor z: {z}")
###Output
_____no_output_____
###Markdown
Notice that ```.empty()``` does not return zeros, but seemingly random small numbers. Unlike ```.zeros()```, which initialises the elements of the tensor with zeros, ```.empty()``` just allocates the memory. It is hence a bit faster if you are looking to just create a tensor. **Creating random tensors and tensors like other tensors:**---
###Code
# there are also constructors for random numbers
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
# there are also constructors that allow us to construct
# a tensor according to the above constructors, but with
# dimensions equal to another tensor
c = torch.zeros_like(a)
d = torch.rand_like(c)
print(f"Tensor a: {a}")
print(f"Tensor b: {b}")
print(f"Tensor c: {c}")
print(f"Tensor d: {d}")
###Output
_____no_output_____
###Markdown
*Reproducibility*:
- PyTorch random number generator: You can use `torch.manual_seed()` to seed the RNG for all devices (both CPU and CUDA):
```python
import torch
torch.manual_seed(0)
```
- For custom operators, you might need to set python seed as well:
```python
import random
random.seed(0)
```
- Random number generators in other libraries:
```python
import numpy as np
np.random.seed(0)
```
Here, we define for you a function called `set_seed` that does the job for you!
###Code
def set_seed(seed=None, seed_torch=True):
"""
Function that controls randomness. NumPy and random modules must be imported.
Args:
seed : Integer
A non-negative integer that defines the random state. Default is `None`.
seed_torch : Boolean
If `True` sets the random seed for pytorch tensors, so pytorch module
must be imported. Default is `True`.
Returns:
Nothing.
"""
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
###Output
_____no_output_____
###Markdown
Now, let's use the `set_seed` function in the previous example. Execute the cell multiple times to verify that the numbers printed are always the same.
###Code
def simplefun(seed=True, my_seed=None):
if seed:
set_seed(seed=my_seed)
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
print("Tensor a: ", a)
print("Tensor b: ", b)
simplefun(seed=True, my_seed=0) # Turn `seed` to `False` or change `my_seed`
###Output
_____no_output_____
###Markdown
**Numpy-like number ranges:**---
The ```.arange()``` and ```.linspace()``` behave how you would expect them to if you are familiar with NumPy.
###Code
a = torch.arange(0, 10, step=1)
b = np.arange(0, 10, step=1)
c = torch.linspace(0, 5, steps=11)
d = np.linspace(0, 5, num=11)
print(f"Tensor a: {a}\n")
print(f"Numpy array b: {b}\n")
print(f"Tensor c: {c}\n")
print(f"Numpy array d: {d}\n")
###Output
_____no_output_____
###Markdown
Coding Exercise 2.1: Creating Tensors

Below you will find some incomplete code. Fill in the missing code to construct the specified tensors. We want the tensors:

$A:$ 20 by 21 tensor consisting of ones

$B:$ a tensor with elements equal to the elements of numpy array $Z$

$C:$ a tensor with the same number of elements as $A$ but with values $\sim U(0,1)$

$D:$ a 1D tensor containing the even numbers between 4 and 40 inclusive.
###Code
def tensor_creation(Z):
"""A function that creates various tensors.
Args:
Z (numpy.ndarray): An array of shape
Returns:
A : 20 by 21 tensor consisting of ones
B : a tensor with elements equal to the elements of numpy array Z
C : a tensor with the same number of elements as A but with values ∼U(0,1)
D : a 1D tensor containing the even numbers between 4 and 40 inclusive.
"""
#################################################
## TODO for students: fill in the missing code
## from the first expression
raise NotImplementedError("Student exercise: fill in the missing code to create the tensors A, B, C, and D")
#################################################
A = ...
B = ...
C = ...
D = ...
return A, B, C, D
# add timing to airtable
atform.add_event('Coding Exercise 2.1: Creating Tensors')
# numpy array to copy later
Z = np.vander([1, 2, 3], 4)
# Uncomment below to check your function!
# A, B, C, D = tensor_creation(Z)
# checkExercise1(A, B, C, D)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_ad4f6c0f.py) ```All correct!``` Section 2.2: Operations in PyTorch**Tensor-Tensor operations**We can perform operations on tensors using methods under ```torch.```
###Code
# @title Video 4: Tensor Operators
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1G44y127As", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"R1R8VoYXBVA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 4: Tensor Operators')
display(out)
###Output
_____no_output_____
###Markdown
**Tensor-Tensor operations**We can perform operations on tensors using methods under ```torch.```
###Code
a = torch.ones(5, 3)
b = torch.rand(5, 3)
c = torch.empty(5, 3)
d = torch.empty(5, 3)
# this only works if c and d already exist
torch.add(a, b, out=c)
#Pointwise Multiplication of a and b
torch.multiply(a, b, out=d)
print(c)
print(d)
###Output
_____no_output_____
###Markdown
However, in PyTorch most common Python operators are overloaded.The common standard arithmetic operators (+, -, *, /, and **) have all been lifted to elementwise operations
###Code
x = torch.tensor([1, 2, 4, 8])
y = torch.tensor([1, 2, 3, 4])
x + y, x - y, x * y, x / y, x**y # The ** operator is exponentiation
###Output
_____no_output_____
###Markdown
**Tensor Methods** Tensors also have a number of common arithmetic operations built in. A full list of **all** methods can be found in the appendix (there are a lot!) All of these operations should have similar syntax to their numpy equivalents.(Feel free to skip if you already know this!)
###Code
x = torch.rand(3, 3)
print(x)
print("\n")
# sum() - note the axis is the axis you move across when summing
print(f"Sum of every element of x: {x.sum()}")
print(f"Sum of the columns of x: {x.sum(axis=0)}")
print(f"Sum of the rows of x: {x.sum(axis=1)}")
print("\n")
print(f"Mean value of all elements of x {x.mean()}")
print(f"Mean values of the columns of x {x.mean(axis=0)}")
print(f"Mean values of the rows of x {x.mean(axis=1)}")
###Output
_____no_output_____
###Markdown
**Matrix Operations**The ```@``` symbol is overridden to represent matrix multiplication. You can also use ```torch.matmul()``` to multiply tensors. For dot multiplication, you can use ```torch.dot()```, or manipulate the axes of your tensors and do matrix multiplication (we will cover that in the next section). Transposes of 2D tensors are obtained using ```torch.t()``` or ```Tensor.T```. Note the lack of brackets for ```Tensor.T``` - it is an attribute, not a method. Coding Exercise 2.2 : Simple tensor operationsBelow are two expressions involving operations on matrices. $$ \textbf{A} = \begin{bmatrix}2 &4 \\5 & 7 \end{bmatrix} \begin{bmatrix} 1 &1 \\2 & 3\end{bmatrix} + \begin{bmatrix}10 & 10 \\ 12 & 1 \end{bmatrix} $$and$$ b = \begin{bmatrix} 3 \\ 5 \\ 7\end{bmatrix} \cdot \begin{bmatrix} 2 \\ 4 \\ 8\end{bmatrix}$$The code block below that computes these expressions using PyTorch is incomplete - fill in the missing lines.
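Before the exercise, here is a small illustrative sketch of these operations on arbitrary tensors (the values below are made up purely for demonstration and are not part of the exercise):
```python
M = torch.tensor([[1., 2.], [3., 4.]])
N = torch.tensor([[0., 1.], [1., 0.]])
v = torch.tensor([1., 2., 3.])
w = torch.tensor([4., 5., 6.])

print(M @ N)                # matrix multiplication with the @ operator
print(torch.matmul(M, N))   # the same product via torch.matmul()
print(torch.dot(v, w))      # dot product of two 1D tensors -> tensor(32.)
print(M.T)                  # transpose via the .T attribute (no brackets)
print(torch.t(M))           # the same transpose via torch.t()
```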
###Code
def simple_operations(a1: torch.Tensor, a2: torch.Tensor, a3: torch.Tensor):
################################################
## TODO for students: complete the first computation using the argument matrices
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
################################################
# multiplication of tensor a1 with tensor a2 and then add it with tensor a3
answer = ...
return answer
# add timing to airtable
atform.add_event('Coding Exercise 2.2 : Simple tensor operations-simple_operations')
# Computing expression 1:
# init our tensors
a1 = torch.tensor([[2, 4], [5, 7]])
a2 = torch.tensor([[1, 1], [2, 3]])
a3 = torch.tensor([[10, 10], [12, 1]])
## uncomment to test your function
# A = simple_operations(a1, a2, a3)
# print(A)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_5562ea1d.py) ```tensor([[20, 24], [31, 27]])```
###Code
def dot_product(b1: torch.Tensor, b2: torch.Tensor):
###############################################
## TODO for students: complete the second computation using the argument tensors
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
###############################################
# Use torch.dot() to compute the dot product of two tensors
product = ...
return product
# add timing to airtable
atform.add_event('Coding Exercise 2.2 : Simple tensor operations-dot_product')
# Computing expression 2:
b1 = torch.tensor([3, 5, 7])
b2 = torch.tensor([2, 4, 8])
## Uncomment to test your function
# b = dot_product(b1, b2)
# print(b)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_00491ea4.py) ```tensor(82)``` Section 2.3 Manipulating Tensors in Pytorch
###Code
# @title Video 5: Tensor Indexing
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1BM4y1K7pD", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"0d0KSJ3lJbg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 5: Tensor Indexing')
display(out)
###Output
_____no_output_____
###Markdown
**Indexing**Just as in numpy, elements in a tensor can be accessed by index. As in any numpy array, the first element has index 0 and ranges are specified to include the first element but exclude the last. We can access elements according to their relative position to the end of the list by using negative indices. Selecting ranges of elements in this way is also referred to as slicing.For example, [-1] selects the last element; [1:3] selects the second and the third elements, and [:-2] will select all elements excluding the last and second-to-last elements.
###Code
x = torch.arange(0, 10)
print(x)
print(x[-1])
print(x[1:3])
print(x[:-2])
###Output
_____no_output_____
###Markdown
When we have multidimensional tensors, indexing rules work the same way as numpy.
###Code
# make a 5D tensor
x = torch.rand(1, 2, 3, 4, 5)
print(f" shape of x[0]:{x[0].shape}")
print(f" shape of x[0][0]:{x[0][0].shape}")
print(f" shape of x[0][0][0]:{x[0][0][0].shape}")
###Output
_____no_output_____
###Markdown
**Flatten and reshape**There are various methods for reshaping tensors. It is common to have to express 2D data in 1D format. Similarly, it is also common to have to reshape a 1D tensor into a 2D tensor. We can achieve this with the ```.flatten()``` and ```.reshape()``` methods.
###Code
z = torch.arange(12).reshape(6, 2)
print(f"Original z: \n {z}")
# 2D -> 1D
z = z.flatten()
print(f"Flattened z: \n {z}")
# and back to 2D
z = z.reshape(3, 4)
print(f"Reshaped (3x4) z: \n {z}")
###Output
_____no_output_____
###Markdown
You will also see the ```.view()``` method used a lot to reshape tensors. There is a subtle difference between ```.view()``` and ```.reshape()```, though for now we will just use ```.reshape()``` (a minimal sketch of the difference is shown at the end of this cell). The documentation can be found in the appendix. **Squeezing tensors**When processing batches of data, you will quite often be left with singleton dimensions. e.g. [1,10] or [256, 1, 3]. This dimension can quite easily mess up your matrix operations if you don't plan on it being there...In order to compress tensors along their singleton dimensions we can use the ```.squeeze()``` method. We can use the ```.unsqueeze()``` method to do the opposite.
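Here is the promised minimal sketch of the ```.view()``` / ```.reshape()``` difference: ```.view()``` needs the tensor's memory to be contiguous, while ```.reshape()``` will copy the data when it has to.
```python
x = torch.arange(6).reshape(2, 3)
xt = x.t()  # the transpose is a view with non-contiguous memory

print(xt.reshape(6))             # works: reshape copies the data if needed
try:
    print(xt.view(6))            # view cannot reinterpret non-contiguous memory
except RuntimeError as err:
    print(f"view failed: {err}")
print(xt.contiguous().view(6))   # after .contiguous(), view works as well
```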
###Code
x = torch.randn(1, 10)
# printing the zeroth element of the tensor will not give us the first number!
print(x.shape)
print(f"x[0]: {x[0]}")
###Output
_____no_output_____
###Markdown
Because of that pesky singleton dimension, x[0] gave us the first row instead!
###Code
# lets get rid of that singleton dimension and see what happens now
x = x.squeeze(0)
print(x.shape)
print(f"x[0]: {x[0]}")
# adding singleton dimensions works in a similar way, and is often used when tensors
# being added need same number of dimensions
y = torch.randn(5, 5)
print(f"shape of y: {y.shape}")
# lets insert a singleton dimension
y = y.unsqueeze(1)
print(f"shape of y: {y.shape}")
###Output
_____no_output_____
###Markdown
**Permutation**Sometimes our dimensions will be in the wrong order! For example, we may be dealing with RGB images with dim [3x48x64], but our pipeline expects the colour dimension to be the last dimension i.e. [48x64x3]. To get around this we can use ```.permute()```
###Code
# `x` has dimensions [color,image_height,image_width]
x = torch.rand(3, 48, 64)
# we want to permute our tensor to be [ image_height , image_width , color ]
x = x.permute(1, 2, 0)
# permute(1,2,0) means:
# the 0th dim of my new tensor = the 1st dim of my old tensor
# the 1st dim of my new tensor = the 2nd
# the 2nd dim of my new tensor = the 0th
print(x.shape)
###Output
_____no_output_____
###Markdown
You may also see ```.transpose()``` used. This works in a similar way as permute, but can only swap two dimensions at once. **Concatenation** In this example, we concatenate two matrices along rows (axis 0, the first element of the shape) vs. columns (axis 1, the second element of the shape). We can see that the first output tensor’s axis-0 length ( 6 ) is the sum of the two input tensors’ axis-0 lengths ( 3+3 ); while the second output tensor’s axis-1 length ( 8 ) is the sum of the two input tensors’ axis-1 lengths ( 4+4 ).
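Before the concatenation example below, here is a quick sketch of the ```.transpose()``` remark: it swaps exactly two dimensions, so for two dimensions it matches the corresponding ```.permute()``` call.
```python
x = torch.rand(3, 48, 64)

a = x.transpose(0, 1)     # swap the first two dimensions
b = x.permute(1, 0, 2)    # the equivalent permute for those two dimensions

print(a.shape, b.shape)   # both torch.Size([48, 3, 64])
print(torch.equal(a, b))  # True
```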
###Code
# Create two tensors of the same shape
x = torch.arange(12, dtype=torch.float32).reshape((3, 4))
y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])
#concatenate them along rows
cat_rows = torch.cat((x, y), dim=0)
# concatenate along columns
cat_cols = torch.cat((x, y), dim=1)
# printing outputs
print('Concatenated by rows: shape{} \n {}'.format(list(cat_rows.shape), cat_rows))
print('\n Concatenated by columns: shape{} \n {}'.format(list(cat_cols.shape), cat_cols))
###Output
_____no_output_____
###Markdown
**Conversion to Other Python Objects**Converting a tensor to a NumPy array, or vice versa, is easy. One direction deserves care: calling `.numpy()` on a CPU tensor returns an array that shares the same underlying memory as the tensor (an in-place change to one is visible in the other), whereas building a tensor with `torch.tensor()` copies the data, so the result does not share memory.When converting to a numpy array, the information being tracked by the tensor will be lost, i.e., the computational graph. This will be covered in detail when you are introduced to autograd tomorrow!
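A small sketch of the memory-sharing behaviour described above, for a CPU tensor:
```python
x = torch.ones(3)
y = x.numpy()        # y shares memory with the CPU tensor x
z = torch.tensor(y)  # z is a copy and does not share memory with y

x.add_(1)            # change x in place
print(y)             # [2. 2. 2.] -> y saw the change
print(z)             # tensor([1., 1., 1.]) -> z did not
```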
###Code
x = torch.randn(5)
print(f"x: {x} | x type: {x.type()}")
y = x.numpy()
print(f"y: {y} | y type: {type(y)}")
z = torch.tensor(y)
print(f"z: {z} | z type: {z.type()}")
###Output
_____no_output_____
###Markdown
To convert a size-1 tensor to a Python scalar, we can invoke the item function or Python’s built-in functions.
###Code
a = torch.tensor([3.5])
a, a.item(), float(a), int(a)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.3: Manipulating TensorsUsing a combination of the methods discussed above, complete the functions below. **Function A** This function takes in two 2D tensors $A$ and $B$ and returns the column sum of A multiplied by the sum of all the elements of $B$ (i.e., a scalar), e.g.,:$ A = \begin{bmatrix}1 & 1 \\1 & 1 \end{bmatrix} \,$and$ B = \begin{bmatrix}1 & 2 & 3\\1 & 2 & 3 \end{bmatrix} \,$so$ \, Out = \begin{bmatrix} 2 & 2 \\\end{bmatrix} \cdot 12 = \begin{bmatrix}24 & 24\\\end{bmatrix}$**Function B** This function takes in a square matrix $C$ and returns a 2D tensor consisting of a flattened $C$ with the index of each element appended to this tensor in the row dimension, e.g.,:$ C = \begin{bmatrix}2 & 3 \\-1 & 10 \end{bmatrix} \,$so$ \, Out = \begin{bmatrix}0 & 2 \\1 & 3 \\2 & -1 \\3 & 10\end{bmatrix}$**Hint:** pay close attention to singleton dimensions**Function C**This function takes in two 2D tensors $D$ and $E$. If the dimensions allow it, this function returns the elementwise sum of $E$ reshaped to the shape of $D$, and $D$; else this function returns a 1D tensor that is the concatenation of the two tensors, e.g.,:$ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix} \,$and $ E = \begin{bmatrix}2 & 3 & 0 & 2 \\\end{bmatrix} \, $so$ \, Out = \begin{bmatrix}3 & 2 \\-1 & 5 \end{bmatrix}$$ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix}$and$ \, E = \begin{bmatrix}2 & 3 & 0 \\\end{bmatrix} \,$so$ \, Out = \begin{bmatrix}1 & -1 & -1 & 3 & 2 & 3 & 0 \end{bmatrix}$**Hint:** `torch.numel()` is an easy way of finding the number of elements in a tensor
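Since the hint mentions `torch.numel()`, here is a tiny sketch of what it returns; it does not solve the exercise:
```python
t = torch.zeros(2, 3, 4)
print(t.numel())       # 24 -- the total number of elements
print(torch.numel(t))  # the same, as a function call
```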
###Code
def functionA(my_tensor1, my_tensor2):
"""
This function takes in two 2D tensors `my_tensor1` and `my_tensor2`
and returns the column sum of
`my_tensor1` multiplied by the sum of all the elements of `my_tensor2`,
i.e., a scalar.
Args:
my_tensor1: torch.Tensor
my_tensor2: torch.Tensor
Returns:
output: torch.Tensor
The multiplication of the column sum of `my_tensor1` by the sum of
`my_tensor2`.
"""
################################################
## TODO for students: complete functionA
raise NotImplementedError("Student exercise: complete function A")
################################################
# TODO multiplication the sum of the tensors
output = ...
return output
def functionB(my_tensor):
"""
This function takes in a square matrix `my_tensor` and returns a 2D tensor
consisting of a flattened `my_tensor` with the index of each element
appended to this tensor in the row dimension.
Args:
my_tensor: torch.Tensor
Returns:
output: torch.Tensor
Concatenated tensor.
"""
################################################
## TODO for students: complete functionB
raise NotImplementedError("Student exercise: complete function B")
################################################
# TODO flatten the tensor `my_tensor`
my_tensor = ...
# TODO create the idx tensor to be concatenated to `my_tensor`
idx_tensor = ...
# TODO concatenate the two tensors
output = ...
return output
def functionC(my_tensor1, my_tensor2):
"""
This function takes in two 2D tensors `my_tensor1` and `my_tensor2`.
If the dimensions allow it, it returns the
elementwise sum of `my_tensor2` reshaped into the shape of `my_tensor1`, and `my_tensor1`;
else this function returns a 1D tensor that is the concatenation of the
two tensors.
Args:
my_tensor1: torch.Tensor
my_tensor2: torch.Tensor
Returns:
output: torch.Tensor
Concatenated tensor.
"""
################################################
## TODO for students: complete functionC
raise NotImplementedError("Student exercise: complete function C")
################################################
# TODO check we can reshape `my_tensor2` into the shape of `my_tensor1`
if ...:
# TODO reshape `my_tensor2` into the shape of `my_tensor1`
my_tensor2 = ...
# TODO sum the two tensors
output = ...
else:
# TODO flatten both tensors
my_tensor1 = ...
my_tensor2 = ...
# TODO concatenate the two tensors in the correct dimension
output = ...
return output
# add timing to airtable
atform.add_event('Coding Exercise 2.3: Manipulating Tensors')
## Implement the functions above and then uncomment the following lines to test your code
# print(functionA(torch.tensor([[1, 1], [1, 1]]), torch.tensor([[1, 2, 3], [1, 2, 3]])))
# print(functionB(torch.tensor([[2, 3], [-1, 10]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0, 2]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0]])))
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_ea1718cb.py) ```tensor([24, 24])tensor([[ 0, 2], [ 1, 3], [ 2, -1], [ 3, 10]])tensor([[ 3, 2], [-1, 5]])tensor([ 1, -1, -1, 3, 2, 3, 0])``` Section 2.4: GPUs
###Code
# @title Video 6: GPU vs CPU
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1nM4y1K7qx", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"9Mc9GFUtILY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 6: GPU vs CPU')
display(out)
###Output
_____no_output_____
###Markdown
By default, when we create a tensor it will *not* live on the GPU!
###Code
x = torch.randn(10)
print(x.device)
###Output
_____no_output_____
###Markdown
When using Colab notebooks, by default you will not have access to a GPU. In order to start using GPUs we need to request one. We can do this by going to the runtime tab at the top of the page. By following Runtime -> Change runtime type and selecting "GPU" from the Hardware Accelerator dropdown list, we can start playing with sending tensors to GPUs.Once you have done this your runtime will restart and you will need to rerun the first setup cell to reimport PyTorch. Then proceed to the next cell. (For more information on the GPU usage policy, see the appendix.) **Now we have a GPU** The cell below should return True.
###Code
print(torch.cuda.is_available())
###Output
_____no_output_____
###Markdown
CUDA is an API developed by Nvidia for interfacing with GPUs. PyTorch provides us with a layer of abstraction, and allows us to launch CUDA kernels using pure Python.In short, we get the power of parallelising our tensor computations on GPUs, whilst only writing (relatively) simple Python!Here, we define the function `set_device`, which returns the device used in the notebook, i.e., `cpu` or `cuda`. Unless otherwise specified, we use this function on top of every tutorial, and we store the device variable like so:
```python
DEVICE = set_device()
```
Let's define the function using the PyTorch package `torch.cuda`, which is lazily initialized, so we can always import it, and use `is_available()` to determine if our system supports CUDA.
###Code
def set_device():
device = "cuda" if torch.cuda.is_available() else "cpu"
if device != "cuda":
print("GPU is not enabled in this notebook. \n"
"If you want to enable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `GPU` from the dropdown menu")
else:
print("GPU is enabled in this notebook. \n"
"If you want to disable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `None` from the dropdown menu")
return device
###Output
_____no_output_____
###Markdown
Let's make some CUDA tensors!
###Code
# common device agnostic way of writing code that can run on cpu OR gpu
# that we provide for you in each of the tutorials
DEVICE = set_device()
# we can specify a device when we first create our tensor
x = torch.randn(2, 2, device=DEVICE)
print(x.dtype)
print(x.device)
# we can also use the .to() method to change the device a tensor lives on
y = torch.randn(2, 2)
print(f"y before calling to() | device: {y.device} | dtype: {y.type()}")
y = y.to(DEVICE)
print(f"y after calling to() | device: {y.device} | dtype: {y.type()}")
###Output
_____no_output_____
###Markdown
**Operations between cpu tensors and cuda tensors**Note that the type of the tensor changed after calling ```.to()```. What happens if we try and perform operations on tensors on devices?
###Code
x = torch.tensor([0, 1, 2], device=DEVICE)
y = torch.tensor([3, 4, 5], device="cpu")
# Uncomment the following line and run this cell
# z = x + y
###Output
_____no_output_____
###Markdown
We cannot combine cuda tensors and cpu tensors in this fashion. If we want to compute an operation that combines tensors on different devices, we need to move them first! We can use the `.to()` method as before, or the `.cpu()` and `.cuda()` methods. Note that using `.cuda()` will throw an error if CUDA is not enabled on your machine.Generally in this course, all deep learning is done on the GPU, while other computation is done on the CPU, so sometimes we have to pass things back and forth; you'll see these calls a lot.
###Code
x = torch.tensor([0, 1, 2], device=DEVICE)
y = torch.tensor([3, 4, 5], device="cpu")
z = torch.tensor([6, 7, 8], device=DEVICE)
# moving to cpu
x = x.to("cpu") # alternatively, you can use x = x.cpu()
print(x + y)
# moving to gpu
y = y.to(DEVICE) # alternatively, you can use y = y.cuda()
print(y + z)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.4: Just how much faster are GPUs?Below is a simple function `simpleFun`. Complete this function, such that it performs the operations:- elementwise multiplication- matrix multiplicationThe operations should be able to be performed on either the CPU or the GPU, specified by the parameter `device`. We will use the helper function `timeFun(f, dim, iterations, device)`.
###Code
dim = 10000
iterations = 1
def simpleFun(dim, device):
"""
Args:
dim: integer
device: "cpu" or "cuda"
Returns:
Nothing.
"""
###############################################
## TODO for students: recreate the function, but
## ensure all computations happens on the `device`
raise NotImplementedError("Student exercise: fill in the missing code to create the tensors")
###############################################
# 2D tensor filled with uniform random numbers in [0,1), dim x dim
x = ...
# 2D tensor filled with uniform random numbers in [0,1), dim x dim
y = ...
# 2D tensor filled with the scalar value 2, dim x dim
z = ...
# elementwise multiplication of x and y
a = ...
# matrix multiplication of x and y
b = ...
del x
del y
del z
del a
del b
## TODO: Implement the function above and uncomment the following lines to test your code
# timeFun(f=simpleFun, dim=dim, iterations=iterations)
# timeFun(f=simpleFun, dim=dim, iterations=iterations, device=DEVICE)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_232a94a4.py) Sample output (depends on your hardware)```time taken for 1 iterations of simpleFun(10000, cpu): 23.74070time taken for 1 iterations of simpleFun(10000, cuda): 0.87535``` **Discuss!**Try and reduce the dimensions of the tensors and increase the iterations. You can get to a point where the cpu only function is faster than the GPU function. Why might this be? Section 2.5: Datasets and Dataloaders
###Code
# @title Video 7: Getting Data
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1744y127SQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"LSkjPM1gFu0", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 7: Getting Data')
display(out)
###Output
_____no_output_____
###Markdown
When training neural network models you will be working with large amounts of data. Fortunately, PyTorch offers some great tools that help you organize and manipulate your data samples.
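Before the torchvision helpers below, here is a minimal sketch of what a PyTorch `Dataset` is under the hood: any object implementing `__len__` and `__getitem__`. The class and variable names below are illustrative only and are not used elsewhere in the tutorial.
```python
from torch.utils.data import Dataset

class MyTensorDataset(Dataset):
    """A minimal dataset wrapping a tensor of inputs and a tensor of labels."""
    def __init__(self, inputs, labels):
        self.inputs = inputs
        self.labels = labels

    def __len__(self):
        # the number of samples in the dataset
        return len(self.inputs)

    def __getitem__(self, idx):
        # return one (sample, label) pair
        return self.inputs[idx], self.labels[idx]

toy_ds = MyTensorDataset(torch.rand(100, 3), torch.randint(0, 2, (100,)))
print(len(toy_ds))
print(toy_ds[0])
```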
###Code
# Import dataset and dataloaders related packages
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
from torchvision.transforms import Compose, Grayscale
###Output
_____no_output_____
###Markdown
**Datasets**The `torchvision` package gives you easy access to many of the publicly available datasets. Let's load the [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset, which contains color images of 10 different classes, like vehicles and animals.Creating an object of type `datasets.CIFAR10` will automatically download and load all images from the dataset. The resulting data structure can be treated as a list containing data samples and their corresponding labels.
###Code
# Download and load the images from the CIFAR10 dataset
cifar10_data = datasets.CIFAR10(
root="data", # path where the images will be stored
download=True, # all images should be downloaded
transform=ToTensor() # transform the images to tensors
)
# Print the number of samples in the loaded dataset
print(f"Number of samples: {len(cifar10_data)}")
print(f"Class names: {cifar10_data.classes}")
###Output
_____no_output_____
###Markdown
We have 50000 samples loaded. Now let's take a look at one of them in detail. Each sample consists of an image and its corresponding label.
###Code
# Choose a random sample
random.seed(2021)
image, label = cifar10_data[random.randint(0, len(cifar10_data) - 1)]
print(f"Label: {cifar10_data.classes[label]}")
print(f"Image size: {image.shape}")
###Output
_____no_output_____
###Markdown
Color images are modeled as 3 dimensional tensors. The first dimension corresponds to the channels (C) of the image (in this case we have RGB images). The second dimension is the height (H) of the image and the third is the width (W). We can denote this image format as C × H × W. Coding Exercise 2.5: Display an image from the datasetLet's try to display the image using `matplotlib`. The code below will not work, because `imshow` expects to have the image in a different format - $H \times W \times C$.You need to reorder the dimensions of the tensor using the `permute` method of the tensor. PyTorch's `permute(*dims)` rearranges the original tensor according to the desired ordering and returns a view with the dimensions rearranged; the total number of elements remains the same as in the original.**Code hint:**
```python
# create a tensor of size 2 x 4
input_var = torch.randn(2, 4)

# print its size and the tensor
print(input_var.size())
print(input_var)

# dimensions permuted
input_var = input_var.permute(1, 0)

# print its size and the permuted tensor
print(input_var.size())
print(input_var)
```
###Code
# TODO: Uncomment the following line to see the error that arises from the current image format
# plt.imshow(image)
# TODO: Comment the above line and fix this code by reordering the tensor dimensions
# plt.imshow(image.permute(...))
# plt.show()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_b04bd357.py)*Example output:*
###Code
#@title Video 8: Train and Test
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1rV411H7s5", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JokSIuPs-ys", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 8: Train and Test')
display(out)
###Output
_____no_output_____
###Markdown
**Training and Test Datasets**When loading a dataset, you can specify if you want to load the training or the test samples using the `train` argument. We can load the training and test datasets separately. For simplicity, today we will not use both datasets separately, but this topic will be addressed in the next days.
###Code
# Load the training samples
training_data = datasets.CIFAR10(
root="data",
train=True,
download=True,
transform=ToTensor()
)
# Load the test samples
test_data = datasets.CIFAR10(
root="data",
train=False,
download=True,
transform=ToTensor()
)
# @title Video 9: Data Augmentation - Transformations
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV19B4y1N77t", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"sjegA9OBUPw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 9: Data Augmentation - Transformations')
display(out)
###Output
_____no_output_____
###Markdown
**Dataloader**Another important concept is the `Dataloader`. It is a wrapper around the `Dataset` that splits it into minibatches (important for training the neural network) and makes the data iterable. The `shuffle` argument is used to shuffle the order of the samples across the minibatches.
###Code
# Create dataloaders with
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
*Reproducibility:* DataLoader will reseed workers following the *Randomness in multi-process data loading* algorithm. Use `worker_init_fn()` and a `generator` to preserve reproducibility:
```python
def seed_worker(worker_id):
    worker_seed = torch.initial_seed() % 2**32
    numpy.random.seed(worker_seed)
    random.seed(worker_seed)

g_seed = torch.Generator()
g_seed.manual_seed(my_seed)

DataLoader(
    train_dataset,
    batch_size=batch_size,
    num_workers=num_workers,
    worker_init_fn=seed_worker,
    generator=g_seed
)
```
**Note:** For the `seed_worker` to have an effect, `num_workers` should be 2 or more. We can now query the next batch from the data loader and inspect it. For this we need to convert the dataloader object to a Python iterator using the function `iter` and then we can query the next batch using the function `next`.We can now see that we have a 4D tensor. This is because we have 64 images in the batch ($B$) and each image has 3 dimensions: channels ($C$), height ($H$) and width ($W$). So, the size of the 4D tensor is $B \times C \times H \times W$.
###Code
# Load the next batch
batch_images, batch_labels = next(iter(train_dataloader))
print('Batch size:', batch_images.shape)
# Display the first image from the batch
plt.imshow(batch_images[0].permute(1, 2, 0))
plt.show()
###Output
_____no_output_____
###Markdown
**Transformations**Another useful feature when loading a dataset is applying transformations on the data - color conversions, normalization, cropping, rotation etc. There are many predefined transformations in the `torchvision.transforms` package and you can also combine them using the `Compose` transform. Check out the [pytorch documentation](https://pytorch.org/vision/stable/transforms.html) for details. Coding Exercise 2.6: Load the CIFAR10 dataset as grayscale imagesThe goal of this exercise is to load the images from the CIFAR10 dataset as grayscale images. Note that we rerun the `set_seed` function to ensure reproducibility.
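Before you attempt the exercise, here is a sketch of how `Compose` chains transforms; the particular combination below (a random horizontal flip followed by the tensor conversion) is only an example and is not the transform the exercise asks for. Note that transforms operating on PIL images, like the flip, come before `ToTensor()`.
```python
from torchvision.transforms import Compose, RandomHorizontalFlip, ToTensor

# each image is (possibly) flipped, then converted to a tensor
example_transform = Compose([
    RandomHorizontalFlip(p=0.5),
    ToTensor(),
])

example_data = datasets.CIFAR10(
    root="data",
    download=True,
    transform=example_transform
)
image, label = example_data[0]
print(image.shape)  # torch.Size([3, 32, 32])
```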
###Code
def my_data_load():
###############################################
## TODO for students: load the CIFAR10 data,
## but as grayscale images and not as RGB colored.
raise NotImplementedError("Student exercise: fill in the missing code to load the data")
###############################################
## TODO Load the CIFAR10 data using a transform that converts the images to grayscale tensors
data = datasets.CIFAR10(...,
transform=...)
# Display a random grayscale image
image, label = data[random.randint(0, len(data) - 1)]
plt.imshow(image.squeeze(), cmap="gray")
plt.show()
return data
set_seed(seed=2021)
## After implementing the above code, uncomment the following lines to test your code
# data = my_data_load()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_6052d728.py)*Example output:* --- Section 3: Neural Networks*Time estimate: ~1 hour 30 mins (excluding movie)* Now it's time for you to create your first neural network using PyTorch. This section will walk you through the process of:- Creating a simple neural network model- Training the network- Visualizing the results of the network- Tweaking the network
###Code
# @title Video 10: CSV Files
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1xy4y1T7kv", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JrC_UAJWYKU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 10: CSV Files')
display(out)
###Output
_____no_output_____
###Markdown
Section 3.1: Data LoadingFirst we need some sample data to train our network on. You can use the function below to generate an example dataset consisting of 2D points along two interleaving half circles. The data will be stored in a file called `sample_data.csv`. You can inspect the file directly in Colab by going to Files on the left side and opening the CSV file.
###Code
# @title Generate sample data
# @markdown we used `scikit-learn` module
from sklearn.datasets import make_moons
# Create a dataset of 256 points with a little noise
X, y = make_moons(256, noise=0.1)
# Store the data as a Pandas data frame and save it to a CSV file
df = pd.DataFrame(dict(x0=X[:,0], x1=X[:,1], y=y))
df.to_csv('sample_data.csv')
###Output
_____no_output_____
###Markdown
Now we can load the data from the CSV file using the Pandas library. Pandas provides many functions for reading files in various formats. When loading data from a CSV file, we can reference the columns directly by their names.
###Code
# Load the data from the CSV file in a Pandas DataFrame
data = pd.read_csv("sample_data.csv")
# Create a 2D numpy array from the x0 and x1 columns
X_orig = data[["x0", "x1"]].to_numpy()
# Create a 1D numpy array from the y column
y_orig = data["y"].to_numpy()
# Print the sizes of the generated 2D points X and the corresponding labels Y
print(f"Size X:{X_orig.shape}")
print(f"Size y:{y_orig.shape}")
# Visualize the dataset. The color of the points is determined by the labels `y_orig`.
plt.scatter(X_orig[:, 0], X_orig[:, 1], s=40, c=y_orig)
plt.show()
###Output
_____no_output_____
###Markdown
**Prepare Data for PyTorch**Now let's prepare the data in a format suitable for PyTorch - convert everything into tensors.
###Code
# Initialize the device variable
DEVICE = set_device()
# Convert the 2D points to a float32 tensor
X = torch.tensor(X_orig, dtype=torch.float32)
# Upload the tensor to the device
X = X.to(DEVICE)
print(f"Size X:{X.shape}")
# Convert the labels to a long integer tensor
y = torch.from_numpy(y_orig).type(torch.LongTensor)
# Upload the tensor to the device
y = y.to(DEVICE)
print(f"Size y:{y.shape}")
###Output
_____no_output_____
###Markdown
Section 3.2: Create a Simple Neural Network
###Code
# @title Video 11: Generating the Neural Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1fK4y1M74a", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"PwSzRohUvck", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 11: Generating the Neural Network')
display(out)
###Output
_____no_output_____
###Markdown
For this example we want to have a simple neural network consisting of 3 layers:- 1 input layer of size 2 (our points have 2 coordinates)- 1 hidden layer of size 16 (you can play with different numbers here)- 1 output layer of size 2 (we want to have the scores for the two classes)During the course you will deal with different kinds of neural networks. On Day 2 we will focus on linear networks, but you will work with some more complicated architectures in the next days. The example here is meant to demonstrate the process of creating and training a neural network end-to-end.**Programming the Network**PyTorch provides a base class for all neural network modules called [`nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html). You need to inherit from `nn.Module` and implement some important methods:`__init__`In the `__init__` method you need to define the structure of your network. Here you will specify what layers the network will consist of, what activation functions will be used etc.`forward`All neural network modules need to implement the `forward` method. It specifies the computations the network needs to do when data is passed through it.`predict`This is not an obligatory method of a neural network module, but it is a good practice if you want to quickly get the most likely label from the network. It calls the `forward` method and chooses the label with the highest score.`train`This is also not an obligatory method, but it is a good practice to have. The method will be used to train the network parameters and will be implemented later in the notebook.> Note that you can use the `__call__` method of a module directly and it will invoke the `forward` method: `net()` does the same as `net.forward()`.
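A tiny sketch of the `__call__` note above, using a plain `nn.Linear` module rather than the network we are about to define:
```python
lin = nn.Linear(2, 3)
inp = torch.rand(4, 2)

out1 = lin(inp)          # calling the module invokes forward (plus any hooks)
out2 = lin.forward(inp)  # calling forward directly

print(torch.equal(out1, out2))  # True
```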
###Code
# Inherit from nn.Module - the base class for neural network modules provided by Pytorch
class NaiveNet(nn.Module):
# Define the structure of your network
def __init__(self):
super(NaiveNet, self).__init__()
# The network is defined as a sequence of operations
self.layers = nn.Sequential(
nn.Linear(2, 16), # Transformation from the input to the hidden layer
nn.ReLU(), # Activation function (ReLU) is a non-linearity which is widely used because it reduces computation. The function returns 0 if it receives any
# negative input, but for any positive value x, it returns that value back.
nn.Linear(16, 2), # Transformation from the hidden to the output layer
)
# Specify the computations performed on the data
def forward(self, x):
# Pass the data through the layers
return self.layers(x)
# Choose the most likely label predicted by the network
def predict(self, x):
# Pass the data through the networks
output = self.forward(x)
# Choose the label with the highest score
return torch.argmax(output, 1)
# Train the neural network (will be implemented later)
def train(self, X, y):
pass
###Output
_____no_output_____
###Markdown
**Check that your network works**Create an instance of your model and visualize it
###Code
# Create new NaiveNet and transfer it to the device
model = NaiveNet().to(DEVICE)
# Print the structure of the network
print(model)
###Output
_____no_output_____
###Markdown
Coding Exercise 3.2: Classify some samplesNow let's pass some of the points of our dataset through the network and see if it works. You should not expect the network to actually classify the points correctly, because it has not been trained yet. The goal here is just to get some experience with the data structures that are passed to the forward and predict methods and their results.
###Code
## Get the samples
# X_samples = ...
# print("Sample input:\n", X_samples)
## Do a forward pass of the network
# output = ...
# print("\nNetwork output:\n", output)
## Predict the label of each point
# y_predicted = ...
# print("\nPredicted labels:\n", y_predicted)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_af8ae0ff.py) ```Sample input: tensor([[ 0.9066, 0.5052], [-0.2024, 1.1226], [ 1.0685, 0.2809], [ 0.6720, 0.5097], [ 0.8548, 0.5122]], device='cuda:0')Network output: tensor([[ 0.1543, -0.8018], [ 2.2077, -2.9859], [-0.5745, -0.0195], [ 0.1924, -0.8367], [ 0.1818, -0.8301]], device='cuda:0', grad_fn=)Predicted labels: tensor([0, 0, 1, 0, 0], device='cuda:0')``` Section 3.3: Train Your Neural Network
###Code
# @title Video 12: Train the Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1v54y1n7CS", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"4MIqnE4XPaA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 12: Train the Network')
display(out)
###Output
_____no_output_____
###Markdown
Now it is time to train your network on your dataset. Don't worry if you don't fully understand everything yet - we will cover training in much more detail in the next days. For now, the goal is just to see your network in action!You will usually implement the `train` method directly when implementing your class `NaiveNet`. Here, we will implement it as a function outside of the class in order to have it in a separate cell.
###Code
# @title Helper function to plot the decision boundary
# Code adapted from this notebook: https://jonchar.net/notebooks/Artificial-Neural-Network-with-Keras/
from pathlib import Path
def plot_decision_boundary(model, X, y, device):
# Transfer the data to the CPU
X = X.cpu().numpy()
y = y.cpu().numpy()
# Check if the frames folder exists and create it if needed
frames_path = Path("frames")
if not frames_path.exists():
frames_path.mkdir()
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Predict the function value for the whole grid
grid_points = np.c_[xx.ravel(), yy.ravel()]
grid_points = torch.from_numpy(grid_points).type(torch.FloatTensor)
Z = model.predict(grid_points.to(device)).cpu().numpy()
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.binary)
# Implement the train function given a training dataset X and corresponding labels y
def train(model, X, y):
# The Cross Entropy Loss is suitable for classification problems
loss_function = nn.CrossEntropyLoss()
# Create an optimizer (Stochastic Gradient Descent) that will be used to train the network
learning_rate = 1e-2
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
# Number of epochs
epochs = 15000
# List of losses for visualization
losses = []
for i in range(epochs):
# Pass the data through the network and compute the loss
# We'll use the whole dataset during the training instead of using batches
# in order to keep the code simple for now.
y_logits = model.forward(X)
loss = loss_function(y_logits, y)
# Clear the previous gradients and compute the new ones
optimizer.zero_grad()
loss.backward()
# Adapt the weights of the network
optimizer.step()
# Store the loss
losses.append(loss.item())
# Print the results at every 1000th epoch
if i % 1000 == 0:
print(f"Epoch {i} loss is {loss.item()}")
plot_decision_boundary(model, X, y, DEVICE)
plt.savefig('frames/{:05d}.png'.format(i))
return losses
# Create a new network instance a train it
model = NaiveNet().to(DEVICE)
losses = train(model, X, y)
###Output
_____no_output_____
###Markdown
**Plot the loss during training**Plot the loss during the training to see how it reduces and converges.
###Code
plt.plot(np.linspace(1, len(losses), len(losses)), losses)
plt.xlabel("Epoch")
plt.ylabel("Loss")
# @title Visualize the training process
# @markdown ### Execute this cell!
!pip install imageio --quiet
!pip install pathlib --quiet
import imageio
from IPython.core.interactiveshell import InteractiveShell
from IPython.display import Image, display
from pathlib import Path
InteractiveShell.ast_node_interactivity = "all"
# Make a list with all images
images = []
for i in range(10):
filename = "frames/0"+str(i)+"000.png"
images.append(imageio.imread(filename))
# Save the gif
imageio.mimsave('frames/movie.gif', images)
gifPath = Path("frames/movie.gif")
with open(gifPath,'rb') as f:
display(Image(data=f.read(), format='png'))
# @title Video 13: Play with it
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Cq4y1W7BH", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"_GGkapdOdSY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 13: Play with it')
display(out)
###Output
_____no_output_____
###Markdown
Exercise 3.3: Tweak your NetworkYou can now play around with the network a little bit to get a feeling of what different parameters are doing. Here are some ideas for what you could try:- Increase or decrease the number of epochs for training- Increase or decrease the size of the hidden layer- Add one additional hidden layerCan you get the network to better fit the data? (A possible starting sketch for the last idea is shown below.)
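For the last idea above, here is one possible sketch of a wider network with an extra hidden layer; the layer sizes are arbitrary choices to experiment with, not a prescribed answer.
```python
class TweakedNet(nn.Module):
    def __init__(self):
        super(TweakedNet, self).__init__()
        # two hidden layers of 32 units each (arbitrary sizes, try your own)
        self.layers = nn.Sequential(
            nn.Linear(2, 32),
            nn.ReLU(),
            nn.Linear(32, 32),
            nn.ReLU(),
            nn.Linear(32, 2),
        )

    def forward(self, x):
        return self.layers(x)

    def predict(self, x):
        return torch.argmax(self.forward(x), 1)

# Uncomment to train the variant with the same `train` helper as above
# model = TweakedNet().to(DEVICE)
# losses = train(model, X, y)
```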
###Code
# @title Video 14: XOR Widget
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1mB4y1N7QS", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"oTr1nE2rCWg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 14: XOR Widget')
display(out)
###Output
_____no_output_____
###Markdown
The exclusive OR (XOR) logical operation gives a true (`1`) output when the number of true inputs is odd. That is, a true output results if one, and only one, of the inputs to the gate is true. If both inputs are false (`0`) or both are true (`1`), a false output results. Mathematically speaking, XOR represents the inequality function, i.e., the output is true if the inputs are not alike; otherwise, the output is false.In the case of two inputs ($X$ and $Y$) the following truth table is applied:\begin{array}{ccc}X & Y & \text{XOR} \\\hline0 & 0 & 0 \\0 & 1 & 1 \\1 & 0 & 1 \\1 & 1 & 0 \\\end{array}Here, with `0`, we denote `False`, and with `1` we denote `True` in boolean terms. Interactive Demo 3.3: Solving XORHere we use an open source and famous visualization widget developed by the TensorFlow team, available [here](https://github.com/tensorflow/playground).* Play with the widget and observe that you cannot solve the continuous XOR dataset.* Now add one hidden layer with three units, play with the widget, and set weights by hand to solve this dataset perfectly.For the second part, you should set the weights by clicking on the connections and either type the value or use the up and down keys to change it by one increment. You could also do the same for the biases by clicking on the tiny square to each neuron's bottom left.Even though there are infinitely many solutions, a neat solution when $f(x)$ is ReLU is: \begin{equation} y = f(x_1)+f(x_2)-f(x_1+x_2)\end{equation}Try to set the weights and biases to implement this function after you have played enough :)
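As a tiny numerical check of the suggested ReLU solution, evaluated on the four "corner" inputs $(\pm 1, \pm 1)$ of the continuous XOR dataset: the output is positive exactly when the two inputs have opposite signs.
```python
def xor_relu(x1, x2):
    # y = f(x1) + f(x2) - f(x1 + x2) with f = ReLU
    f = torch.relu
    x1, x2 = torch.tensor(float(x1)), torch.tensor(float(x2))
    return f(x1) + f(x2) - f(x1 + x2)

for x1 in (-1.0, 1.0):
    for x2 in (-1.0, 1.0):
        print(f"x1={x1:+.0f}, x2={x2:+.0f} -> y={xor_relu(x1, x2).item():.0f}")
```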
###Code
# @markdown ###Play with the parameters to solve XOR
from IPython.display import HTML
HTML('<iframe width="1020" height="660" src="https://playground.arashash.com/#activation=relu&batchSize=10&dataset=xor®Dataset=reg-plane&learningRate=0.03®ularizationRate=0&noise=0&networkShape=&seed=0.91390&showTestData=false&discretize=false&percTrainData=90&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false" allowfullscreen></iframe>')
# @markdown Do you think we can solve the discrete XOR (only 4 possibilities) with only 2 hidden units?
w1_min_xor = 'Select' #@param ['Select', 'Yes', 'No']
if w1_min_xor == 'No':
print("Correct!")
else:
print("How about giving it another try?")
###Output
_____no_output_____
###Markdown
--- Section 4: Ethics And Course Info
###Code
# @title Video 15: Ethics
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Hw41197oB", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Kt6JLi3rUFU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Video 16: Be a group
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1j44y1272h", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Sfp6--d_H1A", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Video 17: Syllabus
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1iB4y1N7uQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"cDvAqG_hAvQ", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Meet our lecturers:Week 1: the building blocks* [Konrad Kording](https://kordinglab.com)* [Andrew Saxe](https://www.saxelab.org/)* [Surya Ganguli](https://ganguli-gang.stanford.edu/)* [Ioannis Mitliagkas](http://mitliagkas.github.io/)* [Lyle Ungar](https://www.cis.upenn.edu/~ungar/)Week 2: making things work* [Alona Fyshe](https://webdocs.cs.ualberta.ca/~alona/)* [Alexander Ecker](https://eckerlab.org/)* [James Evans](https://sociology.uchicago.edu/directory/james-evans)* [He He](https://hhexiy.github.io/)* [Vikash Gilja](https://tnel.ucsd.edu/bio) and [Akash Srivastava](https://akashgit.github.io/)Week 3: more magic* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)* [Jane Wang](http://www.janexwang.com/) and [Feryal Behbahani](https://feryal.github.io/)* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)* [Josh Vogelstein](https://jovo.me/) and [Vincenzo Lomonaco](https://www.vincenzolomonaco.com/)Now, go to the [visualization of ICLR papers](https://iclr.cc/virtual/2021/paper_vis.html). Read a few abstracts. Look at the various clusters. Where do you see yourself in this map? --- Submit to Airtable
###Code
# @title Video 18: Submission info
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1e44y127ti", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JwTn7ej2dq8", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
This is Darryl, the Deep Learning Dapper Lion, and he's here to teach you about content submission to Airtable. At the end of each tutorial there will be an Airtable Submission Cell. Run the cell to generate the Airtable submission button and click on it to submit your information to Airtable.If it is the last tutorial of the day, your button will look like this and take you to the end-of-day survey: otherwise it will look like this: It is critical that you push the submit button for every tutorial you run. Even if you don't finish the tutorial, still submit!Submitting is the only way we can verify that you attempted each tutorial, which is critical for us to be able to track your progress. TL;DR: Basic tutorial workflow1. Work through the tutorial, answering Think! questions and code exercises2. At the end of each tutorial (even if the tutorial is incomplete), run the Airtable submission code cell3. Push the submission button4. If it is the last tutorial of the day, the submission button will also take you to the end-of-day survey on a new page. Complete that and submit it. Submission FAQs: 1. What if I want to change my answers to previous discussion questions? > You are free to change and resubmit any of the answers and Think! questions as many times as you like. However, please only run the Airtable submission code and click on the link once you are ready to submit.2. Okay, but what if I submitted my Airtable anyway and really want to resubmit?> After making changes, you can re-run the Airtable submission code cell. This will result in a second submission from you for the data. This will make Darryl sad as it will be more work for him to clean up the data later. 3. HELP! I accidentally ran the code to generate the Airtable submission button before I was ready to submit! What do I do?> If you run the code to generate the link, anything that happens afterwards will not be captured. Complete the tutorial and make sure to re-run the Airtable submission cell again when you are finished, before pressing the submission button. 4. What if I want to work on this on my own later, should I wait to submit until I'm finished?> Please submit wherever you are at the end of the day. It's great that you want to keep working on this, but it's important to see the places where we tried things that didn't quite work out, so we can fix them for next year. Finally, we try to keep the Airtable code as hidden as possible, but if you ever see any calls to `atform` such as `atform.add_event()` in the coding exercises, just know that is for saving Airtable information only. It will not affect the code that is being run around it in any way, so please do not modify, comment out, or worry about any of those lines of code.Now, let's try submitting today's course to Airtable by running the next cell and clicking the button when it appears.
###Code
# @title Airtable Submission Link
from IPython import display
display.HTML(
f"""
<div>
<a href= "{atform.url()}" target="_blank">
<img src="https://github.com/NeuromatchAcademy/course-content-dl/blob/main/tutorials/static/SurveyButton.png?raw=1"
alt="button link to survey" style="width:410px"></a>
</div>""" )
###Output
_____no_output_____
###Markdown
--- Bonus - 60 years of Machine Learning Research in one Plotby [Hendrik Strobelt](http://hendrik.strobelt.com) (MIT-IBM Watson AI Lab) with support from Benjamin Hoover.In this notebook we visualize a subset* of 3,300 articles retrieved from the AllenAI [S2ORC dataset](https://github.com/allenai/s2orc). We represent each paper by a position that is the output of a dimensionality reduction method applied to a vector representation of each paper. The vector representation is the output of a neural network.*The selection is heavily biased by the keywords and methodology we used to filter. Please see the details section to learn about what we did.
###Code
# @title Import `altair` and load the data
!pip install altair vega_datasets --quiet
import requests
import numpy as np  # used below when binning years with np.floor
import pandas as pd  # used in load_data() to read the position and metadata files
import altair as alt  # altair is used for defining data visualizations
# Source data files
# Position data file maps ID to x,y positions
# original link: http://gltr.io/temp/ml_regexv1_cs_ma_citation+_99perc.pos_umap_cosine_100_d0.1.json
POS_FILE = 'https://osf.io/qyrfn/download'
# original link: http://gltr.io/temp/ml_regexv1_cs_ma_citation+_99perc_clean.csv
# Metadata file maps ID to title, abstract, author,....
META_FILE = 'https://osf.io/vfdu6/download'
# data loading and wrangling
def load_data():
positions = pd.read_json(POS_FILE)
positions[['x', 'y']] = positions['pos'].to_list()
meta = pd.read_csv(META_FILE)
return positions.merge(meta, left_on='id', right_on='paper_id')
# load data
data = load_data()
# @title Define Visualization using Altair
YEAR_PERIOD = "quinquennial" # @param
selection = alt.selection_multi(fields=[YEAR_PERIOD], bind='legend')
data[YEAR_PERIOD] = (data["year"] / 5.0).apply(np.floor) * 5
chart = alt.Chart(data[["x", "y", "authors", "title", YEAR_PERIOD, "citation_count"]], width=800,
height=800).mark_circle(radius=2, opacity=0.2).encode(
alt.Color(YEAR_PERIOD+':O',
scale=alt.Scale(scheme='viridis', reverse=False, clamp=True, domain=list(range(1955,2020,5))),
# legend=alt.Legend(title='Total Records')
),
alt.Size('citation_count',
scale=alt.Scale(type="pow", exponent=1, range=[15, 300])
),
alt.X('x:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
alt.Y('y:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
tooltip=['title', 'authors'],
# size='citation_count',
# color="decade:O",
opacity=alt.condition(selection, alt.value(.8), alt.value(0.2)),
).add_selection(
selection
).interactive()
###Output
_____no_output_____
###Markdown
Let's look at the visualization. Each dot represents one paper. Dots that are close together represent papers that are more closely related than distant ones. The color indicates the 5-year period in which the paper was published. The dot size indicates the citation count (within the S2ORC corpus) as of July 2020. The view is **interactive** and allows for three main interactions. Try them and play around.1. hover over a dot to see a tooltip (title, author)2. select a year in the legend (right) to filter dots3. zoom in/out with scroll -- double click resets view
###Code
chart
###Output
_____no_output_____
###Markdown
QuestionsBy playing around, can you find some answers to the following questions?1. Can you find topical clusters? What cluster might occur because of a filtering error?2. Can you see a temporal trend in the data and clusters?3. Can you determine when deep learning methods started booming?4. Can you find the key papers that were written before the DL "winter" that define milestones for a cluster? (tip: look for large dots of different color) MethodsHere is what we did:1. Filtering of all papers that fulfilled the criteria: - are categorized as `Computer Science` or `Mathematics` - one of the following keywords appearing in title or abstract: `"machine learning|artificial intelligence|neural network|(machine|computer) vision|perceptron|network architecture| RNN | CNN | LSTM | BLEU | MNIST | CIFAR |reinforcement learning|gradient descent| Imagenet "`2. per year, remove all papers that are below the 99th percentile of citation count in that year3. embed each paper by using abstract+title in the SPECTER model4. project based on embedding using UMAP5. visualize using Altair Find Authors
###Code
# @title Edit the `AUTHOR_FILTER` variable to full text search for authors.
AUTHOR_FILTER = "Rush " # @param space at the end means "word border"
import re
### Don't ignore case when searching...
FLAGS = 0
### uncomment to ignore case
# FLAGS = re.IGNORECASE
## --- FILTER CODE. Make it your own ---
data['issel'] = data['authors'].str.contains(AUTHOR_FILTER, na=False, flags=FLAGS, )
if data['issel'].mean()<0.0000000001:
print('No match found')
## --- FROM HERE ON VIS CODE ---
alt.Chart(data[["x", "y", "authors", "title", YEAR_PERIOD, "citation_count", "issel"]], width=800,
height=800) \
.mark_circle(stroke="black", strokeOpacity=1).encode(
alt.Color(YEAR_PERIOD+':O',
scale=alt.Scale(scheme='viridis', reverse=False),
# legend=alt.Legend(title='Total Records')
),
alt.Size('citation_count',
scale=alt.Scale(type="pow", exponent=1, range=[15, 300])
),
alt.StrokeWidth('issel:Q', scale=alt.Scale(type="linear", domain=[0,1], range=[0, 2]), legend=None),
alt.Opacity('issel:Q', scale=alt.Scale(type="linear", domain=[0,1], range=[.2, 1]), legend=None),
alt.X('x:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
alt.Y('y:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
tooltip=['title', 'authors'],
).interactive()
###Output
_____no_output_____
###Markdown
Tutorial 1: PyTorch**Week 1, Day 1: Basics and PyTorch****By Neuromatch Academy**__Content creators:__ Shubh Pachchigar, Vladimir Haltakov, Matthew Sargent, Konrad Kording__Content reviewers:__ Deepak Raya, Siwei Bai, Kelson Shilling-Scrivo__Content editors:__ Anoop Kulkarni, Spiros Chavlis__Production editors:__ Arush Tagade, Spiros Chavlis **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial ObjectivesWe have a few specific objectives for this tutorial:* Learn about PyTorch and tensors* Tensor Manipulations* Data Loading* GPUs and Cuda Tensors* Train NaiveNet* Get to know your pod* Start thinking about the course as a whole
###Code
# @title Tutorial slides
# @markdown These are the slides for the videos in this tutorial today
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/wcjrv/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
--- Setup Throughout your Neuromatch tutorials, most (probably all!) notebooks contain setup cells. These cells will import the required Python packages (e.g., PyTorch, NumPy), set global or environment variables, and load in helper functions for things like plotting. In some tutorials, you will notice that we install some dependencies even if they are preinstalled on Google Colab or Kaggle. This happens because we have added automation to our repository through [GitHub Actions](https://docs.github.com/en/actions/learn-github-actions/introduction-to-github-actions).Be sure to run all of the cells in the setup section. Feel free to expand them and have a look at what you are loading in, but you should be able to fulfill the learning objectives of every tutorial without having to look at these cells.If you start building your own projects on top of this code base, we highly recommend looking at them in more detail.
###Code
# @title Install dependencies
!pip install pandas --quiet
!pip install git+https://github.com/NeuromatchAcademy/evaltools --quiet
from evaltools.airtable import AirtableForm
# Imports
import time
import torch
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from torch import nn
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
# @title Figure Settings
import ipywidgets as widgets
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/content-creation/main/nma.mplstyle")
# @title Helper Functions
atform = AirtableForm('appn7VdPRseSoMXEG','W1D1_T1','https://portal.neuromatchacademy.org/api/redirect/to/97e94a29-0b3a-4e16-9a8d-f6838a5bd83d')
def checkExercise1(A, B, C, D):
"""
Helper function for checking exercise.
Args:
A: torch.Tensor
B: torch.Tensor
C: torch.Tensor
D: torch.Tensor
Returns:
Nothing.
"""
errors = []
# TODO better errors and error handling
if not torch.equal(A.to(int),torch.ones(20, 21).to(int)):
errors.append(f"Got: {A} \n Expected: {torch.ones(20, 21)} (shape: {torch.ones(20, 21).shape})")
if not np.array_equal( B.numpy(),np.vander([1, 2, 3], 4)):
errors.append("B is not a tensor containing the elements of Z ")
if C.shape != (20, 21):
errors.append("C is not the correct shape ")
if not torch.equal(D, torch.arange(4, 41, step=2)):
errors.append("D does not contain the correct elements")
if errors == []:
print("All correct!")
else:
[print(e) for e in errors]
def timeFun(f, dim, iterations, device='cpu'):
iterations = iterations
t_total = 0
for _ in range(iterations):
start = time.time()
f(dim, device)
end = time.time()
t_total += end - start
if device == 'cpu':
print(f"time taken for {iterations} iterations of {f.__name__}({dim}, {device}): {t_total:.5f}")
else:
print(f"time taken for {iterations} iterations of {f.__name__}({dim}, {device}): {t_total:.5f}")
###Output
_____no_output_____
###Markdown
**Important note: Google Colab users***Scratch Code Cells*If you want to quickly try out something or take a look at the data you can use scratch code cells. They allow you to run Python code, but will not mess up the structure of your notebook.To open a new scratch cell go to *Insert* → *Scratch code cell*. Section 1: Welcome to Neuromatch Deep learning course*Time estimate: ~25mins*
###Code
# @title Video 1: Welcome and History
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Av411n7oL", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"ca21SNqt78I", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing
atform.add_event('Video 1: Welcome and History')
display(out)
###Output
_____no_output_____
###Markdown
This will be an intensive 3-week adventure. We will all learn Deep Learning. In a group. Groups need standards. Read our [Code of Conduct](https://docs.google.com/document/d/1eHKIkaNbAlbx_92tLQelXnicKXEcvFzlyzzeWjEtifM/edit?usp=sharing).
###Code
# @title Video 2: Why DL is cool
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1gf4y1j7UZ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"l-K6495BN-4", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 2: Why DL is cool')
display(out)
###Output
_____no_output_____
###Markdown
**Describe what you hope to get out of this course in about 100 words.** --- Section 2: The Basics of PyTorch*Time estimate: ~2 hours 05 mins* PyTorch is a Python-based scientific computing package targeted at two sets of audiences:- A replacement for NumPy to use the power of GPUs- A deep learning platform that provides significant flexibility and speedAt its core, PyTorch provides a few key features:- A multidimensional [Tensor](https://pytorch.org/docs/stable/tensors.html) object, similar to [NumPy Array](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html) but with GPU acceleration.- An optimized **autograd** engine for automatically computing derivatives.- A clean, modular API for building and deploying **deep learning models**.You can find more information about PyTorch in the appendix. Section 2.1: Creating Tensors
###Code
# @title Video 3: Making Tensors
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Rw411d7Uy", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"jGKd_4tPGrw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 3: Making Tensors')
display(out)
###Output
_____no_output_____
###Markdown
There are various ways of creating tensors, and when doing any real deep learning project we will usually have to do so. **Construct tensors directly:**---
###Code
# we can construct a tensor directly from some common python iterables,
# such as list and tuple nested iterables can also be handled as long as the
# dimensions make sense
# tensor from a list
a = torch.tensor([0, 1, 2])
#tensor from a tuple of tuples
b = ((1.0, 1.1), (1.2, 1.3))
b = torch.tensor(b)
# tensor from a numpy array
c = np.ones([2, 3])
c = torch.tensor(c)
print(f"Tensor a: {a}")
print(f"Tensor b: {b}")
print(f"Tensor c: {c}")
###Output
_____no_output_____
###Markdown
**Some common tensor constructors:**---
###Code
# the numerical arguments we pass to these constructors
# determine the shape of the output tensor
x = torch.ones(5, 3)
y = torch.zeros(2)
z = torch.empty(1, 1, 5)
print(f"Tensor x: {x}")
print(f"Tensor y: {y}")
print(f"Tensor z: {z}")
###Output
_____no_output_____
###Markdown
Notice that ```.empty()``` does not return zeros, but seemingly random small numbers. Unlike ```.zeros()```, which initialises the elements of the tensor with zeros, ```.empty()``` just allocates the memory. It is hence a bit faster if you are looking to just create a tensor. **Creating random tensors and tensors like other tensors:**---
###Code
# there are also constructors for random numbers
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
# there are also constructors that allow us to construct
# a tensor according to the above constructors, but with
# dimensions equal to another tensor
c = torch.zeros_like(a)
d = torch.rand_like(c)
print(f"Tensor a: {a}")
print(f"Tensor b: {b}")
print(f"Tensor c: {c}")
print(f"Tensor d: {d}")
###Output
_____no_output_____
###Markdown
*Reproducibility*: - PyTorch random number generator: You can use `torch.manual_seed()` to seed the RNG for all devices (both CPU and CUDA)```pythonimport torchtorch.manual_seed(0)```- For custom operators, you might need to set python seed as well:```pythonimport randomrandom.seed(0)```- Random number generators in other libraries```pythonimport numpy as npnp.random.seed(0)``` Here, we define for you a function called `set_seed` that does the job for you!
###Code
def set_seed(seed=None, seed_torch=True):
"""
Function that controls randomness. NumPy and random modules must be imported.
Args:
seed : Integer
A non-negative integer that defines the random state. Default is `None`.
seed_torch : Boolean
If `True` sets the random seed for pytorch tensors, so pytorch module
must be imported. Default is `True`.
Returns:
Nothing.
"""
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
###Output
_____no_output_____
###Markdown
Now, let's use the `set_seed` function in the previous example. Execute the cell multiple times to verify that the numbers printed are always the same.
###Code
def simplefun(seed=True, my_seed=None):
if seed:
set_seed(seed=my_seed)
# uniform distribution
a = torch.rand(1, 3)
# normal distribution
b = torch.randn(3, 4)
print("Tensor a: ", a)
print("Tensor b: ", b)
simplefun(seed=True, my_seed=0) # Turn `seed` to `False` or change `my_seed`
###Output
_____no_output_____
###Markdown
**Numpy-like number ranges:**---The ```.arange()``` and ```.linspace()``` constructors behave as you would expect if you are familiar with numpy.
###Code
a = torch.arange(0, 10, step=1)
b = np.arange(0, 10, step=1)
c = torch.linspace(0, 5, steps=11)
d = np.linspace(0, 5, num=11)
print(f"Tensor a: {a}\n")
print(f"Numpy array b: {b}\n")
print(f"Tensor c: {c}\n")
print(f"Numpy array d: {d}\n")
###Output
_____no_output_____
###Markdown
Coding Exercise 2.1: Creating TensorsBelow you will find some incomplete code. Fill in the missing code to construct the specified tensors.We want the tensors: $A:$ 20 by 21 tensor consisting of ones$B:$ a tensor with elements equal to the elements of numpy array $Z$$C:$ a tensor with the same number of elements as $A$ but with values $\sim U(0,1)$$D:$ a 1D tensor containing the even numbers between 4 and 40 inclusive.
###Code
def tensor_creation(Z):
"""A function that creates various tensors.
Args:
    Z (numpy.ndarray): An array of shape (3, 4)
Returns:
A : 20 by 21 tensor consisting of ones
B : a tensor with elements equal to the elements of numpy array Z
C : a tensor with the same number of elements as A but with values ∼U(0,1)
D : a 1D tensor containing the even numbers between 4 and 40 inclusive.
"""
#################################################
## TODO for students: fill in the missing code
## from the first expression
raise NotImplementedError("Student exercise: say what they should have done")
#################################################
A = ...
B = ...
C = ...
D = ...
return A, B, C, D
# add timing to airtable
atform.add_event('Coding Exercise 2.1: Creating Tensors')
# numpy array to copy later
Z = np.vander([1, 2, 3], 4)
# Uncomment below to check your function!
# A, B, C, D = tensor_creation(Z)
# checkExercise1(A, B, C, D)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_ad4f6c0f.py) ```All correct!``` Section 2.2: Operations in PyTorch**Tensor-Tensor operations**We can perform operations on tensors using methods under ```torch.```
###Code
# @title Video 4: Tensor Operators
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1G44y127As", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"R1R8VoYXBVA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 4: Tensor Operators')
display(out)
###Output
_____no_output_____
###Markdown
**Tensor-Tensor operations**We can perform operations on tensors using methods under ```torch.```
###Code
a = torch.ones(5, 3)
b = torch.rand(5, 3)
c = torch.empty(5, 3)
d = torch.empty(5, 3)
# this only works if c and d already exist
torch.add(a, b, out=c)
#Pointwise Multiplication of a and b
torch.multiply(a, b, out=d)
print(c)
print(d)
###Output
_____no_output_____
###Markdown
However, in PyTorch most common Python operators are overridden.The common standard arithmetic operators (+, -, *, /, and **) have all been lifted to elementwise operations
###Code
x = torch.tensor([1, 2, 4, 8])
y = torch.tensor([1, 2, 3, 4])
x + y, x - y, x * y, x / y, x**y # The ** operator is exponentiation
###Output
_____no_output_____
###Markdown
**Tensor Methods** Tensors also have a number of common arithmetic operations built in. A full list of **all** methods can be found in the appendix (there are a lot!) All of these operations should have similar syntax to their numpy equivalents.(Feel free to skip if you already know this!)
###Code
x = torch.rand(3, 3)
print(x)
print("\n")
# sum() - note the axis is the axis you move across when summing
print(f"Sum of every element of x: {x.sum()}")
print(f"Sum of the columns of x: {x.sum(axis=0)}")
print(f"Sum of the rows of x: {x.sum(axis=1)}")
print("\n")
print(f"Mean value of all elements of x {x.mean()}")
print(f"Mean values of the columns of x {x.mean(axis=0)}")
print(f"Mean values of the rows of x {x.mean(axis=1)}")
###Output
_____no_output_____
###Markdown
**Matrix Operations**The ```@``` symbol is overridden to represent matrix multiplication. You can also use ```torch.matmul()``` to multiply tensors. For dot products, you can use ```torch.dot()```, or manipulate the axes of your tensors and do matrix multiplication (we will cover that in the next section). Transposes of 2D tensors are obtained using ```torch.t()``` or ```Tensor.T```. Note the lack of brackets for ```Tensor.T``` - it is an attribute, not a method. Coding Exercise 2.2: Simple tensor operationsBelow are two expressions involving operations on matrices. $$ \textbf{A} = \begin{bmatrix}2 &4 \\5 & 7 \end{bmatrix} \begin{bmatrix} 1 &1 \\2 & 3\end{bmatrix} + \begin{bmatrix}10 & 10 \\ 12 & 1 \end{bmatrix} $$and$$ b = \begin{bmatrix} 3 \\ 5 \\ 7\end{bmatrix} \cdot \begin{bmatrix} 2 \\ 4 \\ 8\end{bmatrix}$$The exercise code cell below (after the short illustration that follows) is incomplete - fill in the missing lines.
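Before the exercise, here is a quick, self-contained sketch of these operators (the values are arbitrary examples and this snippet is separate from the exercise cell below):

```python
import torch

M = torch.tensor([[1., 2.], [3., 4.]])
N = torch.tensor([[0., 1.], [1., 0.]])
v = torch.tensor([1., 2., 3.])
w = torch.tensor([4., 5., 6.])

print(M @ N)               # matrix multiplication with the @ operator
print(torch.matmul(M, N))  # the same product via torch.matmul()
print(torch.dot(v, w))     # dot product of two 1D tensors -> tensor(32.)
print(M.T)                 # transpose via the .T attribute (no brackets)
print(torch.t(M))          # equivalent transpose via torch.t()
```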
###Code
def simple_operations(a1: torch.Tensor, a2: torch.Tensor, a3: torch.Tensor):
################################################
  ## TODO for students: complete the first computation using the argument matrices
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
################################################
# multiplication of tensor a1 with tensor a2 and then add it with tensor a3
answer = ...
return answer
# add timing to airtable
atform.add_event('Coding Exercise 2.2 : Simple tensor operations-simple_operations')
# Computing expression 1:
# init our tensors
a1 = torch.tensor([[2, 4], [5, 7]])
a2 = torch.tensor([[1, 1], [2, 3]])
a3 = torch.tensor([[10, 10], [12, 1]])
## uncomment to test your function
# A = simple_operations(a1, a2, a3)
# print(A)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_5562ea1d.py) ```tensor([[20, 24], [31, 27]])```
###Code
def dot_product(b1: torch.Tensor, b2: torch.Tensor):
###############################################
  ## TODO for students: complete the second computation using the argument tensors
raise NotImplementedError("Student exercise: fill in the missing code to complete the operation")
###############################################
# Use torch.dot() to compute the dot product of two tensors
product = ...
return product
# add timing to airtable
atform.add_event('Coding Exercise 2.2 : Simple tensor operations-dot_product')
# Computing expression 2:
b1 = torch.tensor([3, 5, 7])
b2 = torch.tensor([2, 4, 8])
## Uncomment to test your function
# b = dot_product(b1, b2)
# print(b)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_00491ea4.py) ```tensor(82)``` Section 2.3 Manipulating Tensors in Pytorch
###Code
# @title Video 5: Tensor Indexing
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1BM4y1K7pD", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"0d0KSJ3lJbg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 5: Tensor Indexing')
display(out)
###Output
_____no_output_____
###Markdown
**Indexing**Just as in numpy, elements in a tensor can be accessed by index. As in any numpy array, the first element has index 0 and ranges are specified to include the first but before the last element. We can access elements according to their relative position to the end of the list by using negative indices. Indexing is also referred to as slicing.For example, [-1] selects the last element; [1:3] selects the second and the third elements, and [:-2] will select all elements excluding the last and second-to-last elements.
###Code
x = torch.arange(0, 10)
print(x)
print(x[-1])
print(x[1:3])
print(x[:-2])
###Output
_____no_output_____
###Markdown
When we have multidimensional tensors, indexing rules work the same way as numpy.
###Code
# make a 5D tensor
x = torch.rand(1, 2, 3, 4, 5)
print(f" shape of x[0]:{x[0].shape}")
print(f" shape of x[0][0]:{x[0][0].shape}")
print(f" shape of x[0][0][0]:{x[0][0][0].shape}")
###Output
_____no_output_____
###Markdown
**Flatten and reshape**There are various methods for reshaping tensors. It is common to have to express 2D data in 1D format. Similarly, it is also common to have to reshape a 1D tensor into a 2D tensor. We can achieve this with the ```.flatten()``` and ```.reshape()``` methods.
###Code
z = torch.arange(12).reshape(6, 2)
print(f"Original z: \n {z}")
# 2D -> 1D
z = z.flatten()
print(f"Flattened z: \n {z}")
# and back to 2D
z = z.reshape(3, 4)
print(f"Reshaped (3x4) z: \n {z}")
###Output
_____no_output_____
###Markdown
You will also see the ```.view()``` method used a lot to reshape tensors. There is a subtle difference between ```.view()``` and ```.reshape()``` (a brief illustration follows below, before the squeeze examples), though for now we will just use ```.reshape()```. The documentation can be found in the appendix. **Squeezing tensors**When processing batches of data, you will quite often be left with singleton dimensions, e.g., [1, 10] or [256, 1, 3]. These dimensions can quite easily mess up your matrix operations if you don't plan on them being there...In order to compress tensors along their singleton dimensions we can use the ```.squeeze()``` method. We can use the ```.unsqueeze()``` method to do the opposite.
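As a brief aside on the point above, here is a minimal sketch of the difference (illustrative only, not needed for the exercises): ```.view()``` never copies data and therefore requires the tensor's memory layout to be contiguous, whereas ```.reshape()``` returns a view when it can and silently copies when it must.

```python
import torch

x = torch.arange(6).reshape(2, 3)
y = x.t()                      # transpose: same data, non-contiguous memory layout

print(y.is_contiguous())       # False
print(y.reshape(6))            # works: reshape copies the data when it has to
# print(y.view(6))             # would raise a RuntimeError, since y is not contiguous
print(y.contiguous().view(6))  # making y contiguous first lets view() succeed
```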
###Code
x = torch.randn(1, 10)
# printing the zeroth element of the tensor will not give us the first number!
print(x.shape)
print(f"x[0]: {x[0]}")
###Output
_____no_output_____
###Markdown
Because of that pesky singleton dimension, x[0] gave us the first row instead!
###Code
# lets get rid of that singleton dimension and see what happens now
x = x.squeeze(0)
print(x.shape)
print(f"x[0]: {x[0]}")
# adding singleton dimensions works a similar way, and is often used when tensors
# being added need same number of dimensions
y = torch.randn(5, 5)
print(f"shape of y: {y.shape}")
# lets insert a singleton dimension
y = y.unsqueeze(1)
print(f"shape of y: {y.shape}")
###Output
_____no_output_____
###Markdown
**Permutation**Sometimes our dimensions will be in the wrong order! For example, we may be dealing with RGB images with dim [3x48x64], but our pipeline expects the colour dimension to be the last dimension i.e. [48x64x3]. To get around this we can use ```.permute()```
###Code
# `x` has dimensions [color,image_height,image_width]
x = torch.rand(3, 48, 64)
# we want to permute our tensor to be [ image_height , image_width , color ]
x = x.permute(1, 2, 0)
# permute(1,2,0) means:
# the 0th dim of my new tensor = the 1st dim of my old tensor
# the 1st dim of my new tensor = the 2nd
# the 2nd dim of my new tensor = the 0th
print(x.shape)
###Output
_____no_output_____
###Markdown
You may also see ```.transpose()``` used. This works in a similar way as permute, but can only swap two dimensions at once. **Concatenation** In this example, we concatenate two matrices along rows (axis 0, the first element of the shape) vs. columns (axis 1, the second element of the shape). We can see that the first output tensor’s axis-0 length ( 6 ) is the sum of the two input tensors’ axis-0 lengths ( 3+3 ); while the second output tensor’s axis-1 length ( 8 ) is the sum of the two input tensors’ axis-1 lengths ( 4+4 ).
###Code
# Create two tensors of the same shape
x = torch.arange(12, dtype=torch.float32).reshape((3, 4))
y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])
#concatenate them along rows
cat_rows = torch.cat((x, y), dim=0)
# concatenate along columns
cat_cols = torch.cat((x, y), dim=1)
# printing outputs
print('Concatenated by rows: shape{} \n {}'.format(list(cat_rows.shape), cat_rows))
print('\n Concatenated by colums: shape{} \n {}'.format(list(cat_cols.shape), cat_cols))
###Output
_____no_output_____
###Markdown
**Conversion to Other Python Objects**Converting to a NumPy array, or vice versa, is easy. Be aware of how memory is handled: calling ```.numpy()``` on a CPU tensor returns an array that shares the same underlying memory as the tensor, whereas constructing a new tensor from an array with ```torch.tensor()``` copies the data (as we do below). When converting to a numpy array, the information being tracked by the tensor will be lost, i.e., the computational graph. This will be covered in detail when you are introduced to autograd tomorrow!
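Here is a minimal sketch of the memory behaviour described above (illustrative only): an in-place change to the tensor is visible through the array returned by ```.numpy()```, but not through the copy made by ```torch.tensor()```.

```python
import torch

t = torch.zeros(3)
shared = t.numpy()             # shares memory with the CPU tensor t
copied = torch.tensor(shared)  # makes an independent copy of the data

t.add_(1)                      # in-place update of t
print(shared)                  # [1. 1. 1.]           -- the shared array sees the change
print(copied)                  # tensor([0., 0., 0.]) -- the copy does not
```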
###Code
x = torch.randn(5)
print(f"x: {x} | x type: {x.type()}")
y = x.numpy()
print(f"y: {y} | y type: {type(y)}")
z = torch.tensor(y)
print(f"z: {z} | z type: {z.type()}")
###Output
_____no_output_____
###Markdown
To convert a size-1 tensor to a Python scalar, we can invoke the item function or Python’s built-in functions.
###Code
a = torch.tensor([3.5])
a, a.item(), float(a), int(a)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.3: Manipulating TensorsUsing a combination of the methods discussed above, complete the functions below. **Function A** This function takes in two 2D tensors $A$ and $B$ and returns the column sum of $A$ multiplied by the sum of all the elements of $B$, i.e., a scalar, e.g.,:$ A = \begin{bmatrix}1 & 1 \\1 & 1 \end{bmatrix} \,$and$ B = \begin{bmatrix}1 & 2 & 3\\1 & 2 & 3 \end{bmatrix} \,$so$ \, Out = \begin{bmatrix} 2 & 2 \\\end{bmatrix} \cdot 12 = \begin{bmatrix}24 & 24\\\end{bmatrix}$**Function B** This function takes in a square matrix $C$ and returns a 2D tensor consisting of a flattened $C$ with the index of each element appended to this tensor in the row dimension, e.g.,:$ C = \begin{bmatrix}2 & 3 \\-1 & 10 \end{bmatrix} \,$so$ \, Out = \begin{bmatrix}0 & 2 \\1 & 3 \\2 & -1 \\3 & 10\end{bmatrix}$**Hint:** pay close attention to singleton dimensions**Function C**This function takes in two 2D tensors $D$ and $E$. If the dimensions allow it, this function returns the elementwise sum of $D$-shaped $E$, and $D$; else this function returns a 1D tensor that is the concatenation of the two tensors, e.g.,:$ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix} \,$and $ E = \begin{bmatrix}2 & 3 & 0 & 2 \\\end{bmatrix} \, $so$ \, Out = \begin{bmatrix}3 & 2 \\-1 & 5 \end{bmatrix}$$ D = \begin{bmatrix}1 & -1 \\-1 & 3 \end{bmatrix}$and$ \, E = \begin{bmatrix}2 & 3 & 0 \\\end{bmatrix} \,$so$ \, Out = \begin{bmatrix}1 & -1 & -1 & 3 & 2 & 3 & 0 \end{bmatrix}$**Hint:** `torch.numel()` is an easy way of finding the number of elements in a tensor
###Code
def functionA(my_tensor1, my_tensor2):
"""
This function takes in two 2D tensors `my_tensor1` and `my_tensor2`
and returns the column sum of
  `my_tensor1` multiplied by the sum of all the elements of `my_tensor2`,
i.e., a scalar.
Args:
my_tensor1: torch.Tensor
my_tensor2: torch.Tensor
  Returns:
output: torch.Tensor
The multiplication of the column sum of `my_tensor1` by the sum of
`my_tensor2`.
"""
################################################
## TODO for students: complete functionA
raise NotImplementedError("Student exercise: complete function A")
################################################
# TODO multiplication the sum of the tensors
output = ...
return output
def functionB(my_tensor):
"""
This function takes in a square matrix `my_tensor` and returns a 2D tensor
consisting of a flattened `my_tensor` with the index of each element
appended to this tensor in the row dimension.
Args:
my_tensor: torch.Tensor
  Returns:
output: torch.Tensor
Concatenated tensor.
"""
################################################
## TODO for students: complete functionB
raise NotImplementedError("Student exercise: complete function B")
################################################
# TODO flatten the tensor `my_tensor`
my_tensor = ...
# TODO create the idx tensor to be concatenated to `my_tensor`
idx_tensor = ...
# TODO concatenate the two tensors
output = ...
return output
def functionC(my_tensor1, my_tensor2):
"""
This function takes in two 2D tensors `my_tensor1` and `my_tensor2`.
If the dimensions allow it, it returns the
  elementwise sum of `my_tensor1`-shaped `my_tensor2`, and `my_tensor1`;
else this function returns a 1D tensor that is the concatenation of the
two tensors.
Args:
my_tensor1: torch.Tensor
my_tensor2: torch.Tensor
  Returns:
output: torch.Tensor
Concatenated tensor.
"""
################################################
  ## TODO for students: complete functionC
raise NotImplementedError("Student exercise: complete function C")
################################################
# TODO check we can reshape `my_tensor2` into the shape of `my_tensor1`
if ...:
# TODO reshape `my_tensor2` into the shape of `my_tensor1`
my_tensor2 = ...
# TODO sum the two tensors
output = ...
else:
# TODO flatten both tensors
my_tensor1 = ...
my_tensor2 = ...
# TODO concatenate the two tensors in the correct dimension
output = ...
return output
# add timing to airtable
atform.add_event('Coding Exercise 2.3: Manipulating Tensors')
## Implement the functions above and then uncomment the following lines to test your code
# print(functionA(torch.tensor([[1, 1], [1, 1]]), torch.tensor([[1, 2, 3], [1, 2, 3]])))
# print(functionB(torch.tensor([[2, 3], [-1, 10]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0, 2]])))
# print(functionC(torch.tensor([[1, -1], [-1, 3]]), torch.tensor([[2, 3, 0]])))
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_ea1718cb.py) ```tensor([24, 24])tensor([[ 0, 2], [ 1, 3], [ 2, -1], [ 3, 10]])tensor([[ 3, 2], [-1, 5]])tensor([ 1, -1, -1, 3, 2, 3, 0])``` Section 2.4: GPUs
###Code
# @title Video 6: GPU vs CPU
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1nM4y1K7qx", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"9Mc9GFUtILY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 6: GPU vs CPU')
display(out)
###Output
_____no_output_____
###Markdown
By default, when we create a tensor it will *not* live on the GPU!
###Code
x = torch.randn(10)
print(x.device)
###Output
_____no_output_____
###Markdown
When using Colab notebooks, you will not have access to a GPU by default. In order to start using GPUs we need to request one. We can do this by going to the Runtime tab at the top of the page. By following Runtime -> Change runtime type and selecting "GPU" from the Hardware accelerator dropdown list, we can start playing with sending tensors to GPUs.Once you have done this, your runtime will restart and you will need to rerun the first setup cell to reimport PyTorch. Then proceed to the next cell.(For more information on the GPU usage policy, see the appendix.) **Now we have a GPU** The cell below should return True.
###Code
print(torch.cuda.is_available())
###Output
_____no_output_____
###Markdown
CUDA is an API developed by Nvidia for interfacing with GPUs. PyTorch provides us with a layer of abstraction, and allows us to launch CUDA kernels using pure Python.In short, we get the power of parallelising our tensor computations on GPUs, whilst only writing (relatively) simple Python!Here, we define the function `set_device`, which returns the device used in the notebook, i.e., `cpu` or `cuda`. Unless otherwise specified, we use this function at the top of every tutorial, and we store the result in a variable such as```pythonDEVICE = set_device()```Let's define the function using the PyTorch package `torch.cuda`, which is lazily initialized, so we can always import it, and use `is_available()` to determine if our system supports CUDA.
###Code
def set_device():
device = "cuda" if torch.cuda.is_available() else "cpu"
if device != "cuda":
print("GPU is not enabled in this notebook. \n"
"If you want to enable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `GPU` from the dropdown menu")
else:
print("GPU is enabled in this notebook. \n"
"If you want to disable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `None` from the dropdown menu")
return device
###Output
_____no_output_____
###Markdown
Let's make some CUDA tensors!
###Code
# common device agnostic way of writing code that can run on cpu OR gpu
# that we provide for you in each of the tutorials
DEVICE = set_device()
# we can specify a device when we first create our tensor
x = torch.randn(2, 2, device=DEVICE)
print(x.dtype)
print(x.device)
# we can also use the .to() method to change the device a tensor lives on
y = torch.randn(2, 2)
print(f"y before calling to() | device: {y.device} | dtype: {y.type()}")
y = y.to(DEVICE)
print(f"y after calling to() | device: {y.device} | dtype: {y.type()}")
###Output
_____no_output_____
###Markdown
**Operations between cpu tensors and cuda tensors**Note that the type of the tensor changed after calling ```.to()```. What happens if we try to perform operations on tensors that live on different devices?
###Code
x = torch.tensor([0, 1, 2], device=DEVICE)
y = torch.tensor([3, 4, 5], device="cpu")
# Uncomment the following line and run this cell
# z = x + y
###Output
_____no_output_____
###Markdown
We cannot combine cuda tensors and cpu tensors in this fashion. If we want to compute an operation that combines tensors on different devices, we need to move them first! We can use the `.to()` method as before, or the `.cpu()` and `.cuda()` methods. Note that using `.cuda()` will throw an error if CUDA is not enabled on your machine.Generally in this course, deep learning computations are done on the GPU while other processing is done on the CPU, so sometimes we have to pass tensors back and forth; you'll see us call these methods often, as in the cell below.
###Code
x = torch.tensor([0, 1, 2], device=DEVICE)
y = torch.tensor([3, 4, 5], device="cpu")
z = torch.tensor([6, 7, 8], device=DEVICE)
# moving to cpu
x = x.to("cpu") # alternatively, you can use x = x.cpu()
print(x + y)
# moving to gpu
y = y.to(DEVICE) # alternatively, you can use y = y.cuda()
print(y + z)
###Output
_____no_output_____
###Markdown
Coding Exercise 2.4: Just how much faster are GPUs?Below is a simple function `simpleFun`. Complete this function, such that it performs the operations:- elementwise multiplication- matrix multiplicationThe operations should be able to be performed on either the CPU or GPU, as specified by the parameter `device`. We will use the helper function `timeFun(f, dim, iterations, device)`.
###Code
dim = 10000
iterations = 1
def simpleFun(dim, device):
"""
Args:
dim: integer
device: "cpu" or "cuda"
Returns:
Nothing.
"""
###############################################
## TODO for students: recreate the function, but
## ensure all computations happens on the `device`
raise NotImplementedError("Student exercise: fill in the missing code to create the tensors")
###############################################
# 2D tensor filled with uniform random numbers in [0,1), dim x dim
x = ...
# 2D tensor filled with uniform random numbers in [0,1), dim x dim
y = ...
# 2D tensor filled with the scalar value 2, dim x dim
z = ...
# elementwise multiplication of x and y
a = ...
# matrix multiplication of x and y
b = ...
del x
del y
del z
del a
del b
## TODO: Implement the function above and uncomment the following lines to test your code
# timeFun(f=simpleFun, dim=dim, iterations=iterations)
# timeFun(f=simpleFun, dim=dim, iterations=iterations, device=DEVICE)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_232a94a4.py) Sample output (depends on your hardware)```time taken for 1 iterations of simpleFun(10000, cpu): 23.74070time taken for 1 iterations of simpleFun(10000, cuda): 0.87535``` **Discuss!**Try reducing the dimensions of the tensors and increasing the iterations. You can get to a point where the CPU-only function is faster than the GPU function. Why might this be? Section 2.5: Datasets and Dataloaders
###Code
# @title Video 7: Getting Data
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1744y127SQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"LSkjPM1gFu0", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 7: Getting Data')
display(out)
###Output
_____no_output_____
###Markdown
When training neural network models you will be working with large amounts of data. Fortunately, PyTorch offers some great tools that help you organize and manipulate your data samples.
###Code
# Import dataset and dataloaders related packages
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
from torchvision.transforms import Compose, Grayscale
###Output
_____no_output_____
###Markdown
**Datasets**The `torchvision` package gives you easy access to many of the publicly available datasets. Let's load the [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset, which contains color images of 10 different classes, like vehicles and animals.Creating an object of type `datasets.CIFAR10` will automatically download and load all images from the dataset. The resulting data structure can be treated as a list containing data samples and their corresponding labels.
###Code
# Download and load the images from the CIFAR10 dataset
cifar10_data = datasets.CIFAR10(
root="data", # path where the images will be stored
download=True, # all images should be downloaded
transform=ToTensor() # transform the images to tensors
)
# Print the number of samples in the loaded dataset
print(f"Number of samples: {len(cifar10_data)}")
print(f"Class names: {cifar10_data.classes}")
###Output
_____no_output_____
###Markdown
We have 50000 samples loaded. Now let's take a look at one of them in detail. Each sample consists of an image and its corresponding label.
###Code
# Choose a random sample
random.seed(2021)
image, label = cifar10_data[random.randint(0, len(cifar10_data))]
print(f"Label: {cifar10_data.classes[label]}")
print(f"Image size: {image.shape}")
###Output
_____no_output_____
###Markdown
Color images are modeled as 3 dimensional tensors. The first dimension corresponds to the channels (C) of the image (in this case we have RGB images). The second dimension is the height (H) of the image and the third is the width (W). We can denote this image format as C × H × W. Coding Exercise 2.5: Display an image from the datasetLet's try to display the image using `matplotlib`. The code below will not work, because `imshow` expects to have the image in a different format - $H \times W \times C$.You need to reorder the dimensions of the tensor using the `permute` method of the tensor. PyTorch `torch.permute(*dims)` rearranges the original tensor according to the desired ordering and returns a new multidimensional rotated tensor. The size of the returned tensor remains the same as that of the original.**Code hint:**```python create a tensor of size 2 x 4input_var = torch.randn(2, 4) print its size and the tensorprint(input_var.size())print(input_var) dimensions permutedinput_var = input_var.permute(1, 0) print its size and the permuted tensorprint(input_var.size())print(input_var)```
###Code
# TODO: Uncomment the following line to see the error that arises from the current image format
# plt.imshow(image)
# TODO: Comment the above line and fix this code by reordering the tensor dimensions
# plt.imshow(image.permute(...))
# plt.show()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_b04bd357.py)*Example output:*
###Code
#@title Video 8: Train and Test
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1rV411H7s5", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JokSIuPs-ys", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 8: Train and Test')
display(out)
###Output
_____no_output_____
###Markdown
**Training and Test Datasets**When loading a dataset, you can specify if you want to load the training or the test samples using the `train` argument. We can load the training and test datasets separately. For simplicity, today we will not use both datasets separately, but this topic will be addressed in the coming days.
###Code
# Load the training samples
training_data = datasets.CIFAR10(
root="data",
train=True,
download=True,
transform=ToTensor()
)
# Load the test samples
test_data = datasets.CIFAR10(
root="data",
train=False,
download=True,
transform=ToTensor()
)
# @title Video 9: Data Augmentation - Transformations
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV19B4y1N77t", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"sjegA9OBUPw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 9: Data Augmentation - Transformations')
display(out)
###Output
_____no_output_____
###Markdown
**Dataloader**Another important concept is the `Dataloader`. It is a wrapper around the `Dataset` that splits it into minibatches (important for training the neural network) and makes the data iterable. The `shuffle` argument is used to shuffle the order of the samples across the minibatches.
###Code
# Create dataloaders with
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
*Reproducibility:* DataLoader will reseed workers following the "Randomness in multi-process data loading" algorithm. Use `worker_init_fn()` and a `generator` to preserve reproducibility:```pythondef seed_worker(worker_id): worker_seed = torch.initial_seed() % 2**32 numpy.random.seed(worker_seed) random.seed(worker_seed)g_seed = torch.Generator()g_seed.manual_seed(my_seed)DataLoader( train_dataset, batch_size=batch_size, num_workers=num_workers, worker_init_fn=seed_worker, generator=g_seed )``` **Note:** For the `seed_worker` to have an effect, `num_workers` should be 2 or more. We can now query the next batch from the data loader and inspect it. For this we need to convert the dataloader object to a Python iterator using the function `iter` and then we can query the next batch using the function `next`.We can now see that we have a 4D tensor. This is because we have 64 images in the batch ($B$) and each image has 3 dimensions: channels ($C$), height ($H$) and width ($W$). So, the size of the 4D tensor is $B \times C \times H \times W$.
###Code
# Load the next batch
batch_images, batch_labels = next(iter(train_dataloader))
print('Batch size:', batch_images.shape)
# Display the first image from the batch
plt.imshow(batch_images[0].permute(1, 2, 0))
plt.show()
###Output
_____no_output_____
###Markdown
**Transformations**Another useful feature when loading a dataset is applying transformations on the data - color conversions, normalization, cropping, rotation, etc. There are many predefined transformations in the `torchvision.transforms` package and you can also combine them using the `Compose` transform; a small illustration follows before the exercise. Check out the [PyTorch documentation](https://pytorch.org/vision/stable/transforms.html) for details. Coding Exercise 2.6: Load the CIFAR10 dataset as grayscale imagesThe goal of this exercise is to load the images from the CIFAR10 dataset as grayscale images. Note that we rerun the `set_seed` function to ensure reproducibility.
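As a quick illustration of composing transforms (this is not the exercise solution; it chains `ToTensor` with a normalization step, and the mean/std values are arbitrary placeholders):

```python
from torchvision import datasets
from torchvision.transforms import Compose, ToTensor, Normalize

# Chain two transforms: convert PIL images to tensors, then normalize each channel
example_transform = Compose([
    ToTensor(),
    Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
])

normalized_data = datasets.CIFAR10(
    root="data",
    download=True,
    transform=example_transform
)
```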
###Code
def my_data_load():
###############################################
## TODO for students: load the CIFAR10 data,
## but as grayscale images and not as RGB colored.
raise NotImplementedError("Student exercise: fill in the missing code to load the data")
###############################################
## TODO Load the CIFAR10 data using a transform that converts the images to grayscale tensors
data = datasets.CIFAR10(...,
transform=...)
# Display a random grayscale image
image, label = data[random.randint(0, len(data))]
plt.imshow(image.squeeze(), cmap="gray")
plt.show()
return data
set_seed(seed=2021)
## After implementing the above code, uncomment the following lines to test your code
# data = my_data_load()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_6052d728.py)*Example output:* --- Section 3: Neural Networks*Time estimate: ~1 hour 30 mins (excluding movie)* Now it's time for you to create your first neural network using PyTorch. This section will walk you through the process of:- Creating a simple neural network model- Training the network- Visualizing the results of the network- Tweaking the network
###Code
# @title Video 10: CSV Files
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1xy4y1T7kv", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JrC_UAJWYKU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 10: CSV Files')
display(out)
###Output
_____no_output_____
###Markdown
Section 3.1: Data LoadingFirst we need some sample data to train our network on. You can use the function below to generate an example dataset consisting of 2D points along two interleaving half circles. The data will be stored in a file called `sample_data.csv`. You can inspect the file directly in Colab by going to Files on the left side and opening the CSV file.
###Code
# @title Generate sample data
# @markdown we used `scikit-learn` module
from sklearn.datasets import make_moons
# Create a dataset of 256 points with a little noise
X, y = make_moons(256, noise=0.1)
# Store the data as a Pandas data frame and save it to a CSV file
df = pd.DataFrame(dict(x0=X[:,0], x1=X[:,1], y=y))
df.to_csv('sample_data.csv')
###Output
_____no_output_____
###Markdown
Now we can load the data from the CSV file using the Pandas library. Pandas provides many functions for reading files in various formats. When loading data from a CSV file, we can reference the columns directly by their names.
###Code
# Load the data from the CSV file in a Pandas DataFrame
data = pd.read_csv("sample_data.csv")
# Create a 2D numpy array from the x0 and x1 columns
X_orig = data[["x0", "x1"]].to_numpy()
# Create a 1D numpy array from the y column
y_orig = data["y"].to_numpy()
# Print the sizes of the generated 2D points X and the corresponding labels Y
print(f"Size X:{X_orig.shape}")
print(f"Size y:{y_orig.shape}")
# Visualize the dataset. The color of the points is determined by the labels `y_orig`.
plt.scatter(X_orig[:, 0], X_orig[:, 1], s=40, c=y_orig)
plt.show()
###Output
_____no_output_____
###Markdown
**Prepare Data for PyTorch**Now let's prepare the data in a format suitable for PyTorch - convert everything into tensors.
###Code
# Initialize the device variable
DEVICE = set_device()
# Convert the 2D points to a float32 tensor
X = torch.tensor(X_orig, dtype=torch.float32)
# Upload the tensor to the device
X = X.to(DEVICE)
print(f"Size X:{X.shape}")
# Convert the labels to a long integer tensor
y = torch.from_numpy(y_orig).type(torch.LongTensor)
# Upload the tensor to the device
y = y.to(DEVICE)
print(f"Size y:{y.shape}")
###Output
_____no_output_____
###Markdown
Section 3.2: Create a Simple Neural Network
###Code
# @title Video 11: Generating the Neural Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1fK4y1M74a", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"PwSzRohUvck", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 11: Generating the Neural Network')
display(out)
###Output
_____no_output_____
###Markdown
For this example we want to have a simple neural network consisting of 3 layers:- 1 input layer of size 2 (our points have 2 coordinates)- 1 hidden layer of size 16 (you can play with different numbers here)- 1 output layer of size 2 (we want to have the scores for the two classes)During the course you will deal with different kinds of neural networks. On Day 2 we will focus on linear networks, but you will work with some more complicated architectures in the next days. The example here is meant to demonstrate the process of creating and training a neural network end-to-end.**Programming the Network**PyTorch provides a base class for all neural network modules called [`nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html). You need to inherit from `nn.Module` and implement some important methods:`__init__`In the `__init__` method you need to define the structure of your network. Here you will specify what layers the network will consist of, what activation functions will be used, etc.`forward`All neural network modules need to implement the `forward` method. It specifies the computations the network needs to do when data is passed through it.`predict`This is not an obligatory method of a neural network module, but it is a good practice if you want to quickly get the most likely label from the network. It calls the `forward` method and chooses the label with the highest score.`train`This is also not an obligatory method, but it is a good practice to have. The method will be used to train the network parameters and will be implemented later in the notebook.> Note that you can use the `__call__` method of a module directly and it will invoke the `forward` method: `net()` does the same as `net.forward()`.
###Code
# Inherit from nn.Module - the base class for neural network modules provided by Pytorch
class NaiveNet(nn.Module):
# Define the structure of your network
def __init__(self):
super(NaiveNet, self).__init__()
# The network is defined as a sequence of operations
self.layers = nn.Sequential(
nn.Linear(2, 16), # Transformation from the input to the hidden layer
nn.ReLU(), # Activation function (ReLU) is a non-linearity which is widely used because it reduces computation. The function returns 0 if it receives any
# negative input, but for any positive value x, it returns that value back.
nn.Linear(16, 2), # Transformation from the hidden to the output layer
)
# Specify the computations performed on the data
def forward(self, x):
# Pass the data through the layers
return self.layers(x)
# Choose the most likely label predicted by the network
def predict(self, x):
# Pass the data through the networks
output = self.forward(x)
# Choose the label with the highest score
return torch.argmax(output, 1)
# Train the neural network (will be implemented later)
def train(self, X, y):
pass
###Output
_____no_output_____
###Markdown
**Check that your network works**Create an instance of your model and visualize it
###Code
# Create new NaiveNet and transfer it to the device
model = NaiveNet().to(DEVICE)
# Print the structure of the network
print(model)
###Output
_____no_output_____
###Markdown
Coding Exercise 3.2: Classify some samplesNow let's pass some of the points of our dataset through the network and see if it works. You should not expect the network to actually classify the points correctly, because it has not been trained yet. The goal here is just to get some experience with the data structures that are passed to the forward and predict methods and their results.
###Code
## Get the samples
# X_samples = ...
# print("Sample input:\n", X_samples)
## Do a forward pass of the network
# output = ...
# print("\nNetwork output:\n", output)
## Predict the label of each point
# y_predicted = ...
# print("\nPredicted labels:\n", y_predicted)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D1_BasicsAndPytorch/solutions/W1D1_Tutorial1_Solution_af8ae0ff.py) ```Sample input: tensor([[ 0.9066, 0.5052], [-0.2024, 1.1226], [ 1.0685, 0.2809], [ 0.6720, 0.5097], [ 0.8548, 0.5122]], device='cuda:0')Network output: tensor([[ 0.1543, -0.8018], [ 2.2077, -2.9859], [-0.5745, -0.0195], [ 0.1924, -0.8367], [ 0.1818, -0.8301]], device='cuda:0', grad_fn=)Predicted labels: tensor([0, 0, 1, 0, 0], device='cuda:0')``` Section 3.3: Train Your Neural Network
###Code
# @title Video 12: Train the Network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1v54y1n7CS", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"4MIqnE4XPaA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 12: Train the Network')
display(out)
###Output
_____no_output_____
###Markdown
Now it is time to train your network on your dataset. Don't worry if you don't fully understand everything yet - we will cover training in much more detail in the next days. For now, the goal is just to see your network in action!You will usually implement the `train` method directly when implementing your class `NaiveNet`. Here, we will implement it as a function outside of the class in order to have it in a separate cell.
###Code
# @title Helper function to plot the decision boundary
# Code adapted from this notebook: https://jonchar.net/notebooks/Artificial-Neural-Network-with-Keras/
from pathlib import Path
def plot_decision_boundary(model, X, y, device):
# Transfer the data to the CPU
X = X.cpu().numpy()
y = y.cpu().numpy()
# Check if the frames folder exists and create it if needed
frames_path = Path("frames")
if not frames_path.exists():
frames_path.mkdir()
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
  # Predict the function value for the whole grid
grid_points = np.c_[xx.ravel(), yy.ravel()]
grid_points = torch.from_numpy(grid_points).type(torch.FloatTensor)
Z = model.predict(grid_points.to(device)).cpu().numpy()
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.binary)
# Implement the train function given a training dataset X and corresponding labels y
def train(model, X, y):
# The Cross Entropy Loss is suitable for classification problems
loss_function = nn.CrossEntropyLoss()
# Create an optimizer (Stochastic Gradient Descent) that will be used to train the network
learning_rate = 1e-2
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
# Number of epochs
epochs = 15000
# List of losses for visualization
losses = []
for i in range(epochs):
# Pass the data through the network and compute the loss
# We'll use the whole dataset during the training instead of using batches
    # in order to keep the code simple for now.
y_logits = model.forward(X)
loss = loss_function(y_logits, y)
# Clear the previous gradients and compute the new ones
optimizer.zero_grad()
loss.backward()
# Adapt the weights of the network
optimizer.step()
# Store the loss
losses.append(loss.item())
# Print the results at every 1000th epoch
if i % 1000 == 0:
print(f"Epoch {i} loss is {loss.item()}")
plot_decision_boundary(model, X, y, DEVICE)
plt.savefig('frames/{:05d}.png'.format(i))
return losses
# Create a new network instance and train it
model = NaiveNet().to(DEVICE)
losses = train(model, X, y)
###Output
_____no_output_____
###Markdown
**Plot the loss during training**Plot the loss during the training to see how it reduces and converges.
###Code
plt.plot(np.linspace(1, len(losses), len(losses)), losses)
plt.xlabel("Epoch")
plt.ylabel("Loss")
# @title Visualize the training process
# @markdown ### Execute this cell!
!pip install imageio --quiet
!pip install pathlib --quiet
import imageio
from IPython.core.interactiveshell import InteractiveShell
from IPython.display import Image, display
from pathlib import Path
InteractiveShell.ast_node_interactivity = "all"
# Make a list with all images
images = []
for i in range(10):
filename = "frames/0"+str(i)+"000.png"
images.append(imageio.imread(filename))
# Save the gif
imageio.mimsave('frames/movie.gif', images)
gifPath = Path("frames/movie.gif")
with open(gifPath,'rb') as f:
display(Image(data=f.read(), format='png'))
# @title Video 13: Play with it
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Cq4y1W7BH", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"_GGkapdOdSY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 13: Play with it')
display(out)
###Output
_____no_output_____
###Markdown
Exercise 3.3: Tweak your NetworkYou can now play around with the network a little bit to get a feeling of what different parameters are doing. Here are some ideas for what you could try:- Increase or decrease the number of epochs for training- Increase or decrease the size of the hidden layer- Add one additional hidden layer (see the sketch below)Can you get the network to better fit the data?
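As one possible starting point, a variant of `NaiveNet` with a wider hidden layer and one extra hidden layer might look like the sketch below; the class name and layer sizes are made up for illustration, and you would still re-run the training loop on it:

```python
import torch
import torch.nn as nn

class DeeperNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(2, 32),   # wider first hidden layer
            nn.ReLU(),
            nn.Linear(32, 16),  # one additional hidden layer
            nn.ReLU(),
            nn.Linear(16, 2),   # output layer with the two class scores
        )

    def forward(self, x):
        return self.layers(x)

    def predict(self, x):
        return torch.argmax(self.forward(x), 1)

# e.g. model = DeeperNet().to(DEVICE); losses = train(model, X, y)
```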
###Code
# @title Video 14: XOR Widget
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1mB4y1N7QS", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"oTr1nE2rCWg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add timing to airtable
atform.add_event('Video 14: XOR Widget')
display(out)
###Output
_____no_output_____
###Markdown
The exclusive OR (XOR) logical operation gives a true (`1`) output when the number of true inputs is odd. That is, a true output results if one, and only one, of the inputs to the gate is true. If both inputs are false (`0`) or both are true, a false output results. Mathematically speaking, XOR represents the inequality function, i.e., the output is true if the inputs are not alike; otherwise, the output is false.In the case of two inputs ($X$ and $Y$) the following truth table applies:\begin{array}{ccc}X & Y & \text{XOR} \\\hline0 & 0 & 0 \\0 & 1 & 1 \\1 & 0 & 1 \\1 & 1 & 0 \\\end{array}Here, with `0`, we denote `False`, and with `1` we denote `True` in boolean terms. Interactive Demo 3.3: Solving XORHere we use an open source and famous visualization widget developed by the TensorFlow team, available [here](https://github.com/tensorflow/playground).* Play with the widget and observe that you cannot solve the continuous XOR dataset.* Now add one hidden layer with three units, play with the widget, and set weights by hand to solve this dataset perfectly.For the second part, you should set the weights by clicking on the connections and either type the value or use the up and down keys to change it by one increment. You could also do the same for the biases by clicking on the tiny square to each neuron's bottom left.Even though there are infinitely many solutions, a neat solution when $f(x)$ is ReLU is: \begin{equation} y = f(x_1)+f(x_2)-f(x_1+x_2)\end{equation}Try to set the weights and biases to implement this function after you have played around enough :)
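As a quick sanity check of that identity (assuming, as in the continuous XOR dataset, that the class is given by whether the signs of the two inputs differ), the snippet below evaluates $y = f(x_1)+f(x_2)-f(x_1+x_2)$ with $f=\text{ReLU}$ on the four corners $(\pm 1, \pm 1)$; $y$ is positive exactly when the signs differ:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

for x1 in (-1.0, 1.0):
    for x2 in (-1.0, 1.0):
        y = relu(x1) + relu(x2) - relu(x1 + x2)
        print(f"x1={x1:+.0f}, x2={x2:+.0f} -> y={y:.0f}")  # 0 for equal signs, 1 otherwise
```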
###Code
# @markdown ###Play with the parameters to solve XOR
from IPython.display import HTML
HTML('<iframe width="1020" height="660" src="https://playground.arashash.com/#activation=relu&batchSize=10&dataset=xor®Dataset=reg-plane&learningRate=0.03®ularizationRate=0&noise=0&networkShape=&seed=0.91390&showTestData=false&discretize=false&percTrainData=90&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false" allowfullscreen></iframe>')
# @markdown Do you think we can solve the discrete XOR (only 4 possibilities) with only 2 hidden units?
w1_min_xor = 'Select' #@param ['Select', 'Yes', 'No']
if w1_min_xor == 'No':
print("Correct!")
else:
print("How about giving it another try?")
###Output
_____no_output_____
###Markdown
--- Section 4: Ethics And Course Info
###Code
# @title Video 15: Ethics
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Hw41197oB", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Kt6JLi3rUFU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Video 16: Be a group
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1j44y1272h", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Sfp6--d_H1A", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Video 17: Syllabus
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1iB4y1N7uQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"cDvAqG_hAvQ", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Meet our lecturers:Week 1: the building blocks* [Konrad Kording](https://kordinglab.com)* [Andrew Saxe](https://www.saxelab.org/)* [Surya Ganguli](https://ganguli-gang.stanford.edu/)* [Ioannis Mitliagkas](http://mitliagkas.github.io/)* [Lyle Ungar](https://www.cis.upenn.edu/~ungar/)Week 2: making things work* [Alona Fyshe](https://webdocs.cs.ualberta.ca/~alona/)* [Alexander Ecker](https://eckerlab.org/)* [James Evans](https://sociology.uchicago.edu/directory/james-evans)* [He He](https://hhexiy.github.io/)* [Vikash Gilja](https://tnel.ucsd.edu/bio) and [Akash Srivastava](https://akashgit.github.io/)Week 3: more magic* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)* [Jane Wang](http://www.janexwang.com/) and [Feryal Behbahani](https://feryal.github.io/)* [Tim Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php) and [Blake Richards](https://www.mcgill.ca/neuro/blake-richards-phd)* [Josh Vogelstein](https://jovo.me/) and [Vincenzo Lamonaco](https://www.vincenzolomonaco.com/)Now, go to the [visualization of ICLR papers](https://iclr.cc/virtual/2021/paper_vis.html). Read a few abstracts. Look at the various clusters. Where do you see yourself in this map? --- Submit to Airtable
###Code
# @title Video 18: Submission info
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1e44y127ti", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JwTn7ej2dq8", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
This is Darryl, the Deep Learning Dapper Lion, and he's here to teach you about content submission to Airtable. At the end of each tutorial there will be an Airtable Submission Cell. Run the cell to generate the Airtable submission button and click on it to submit your information to Airtable. If it is the last tutorial of the day, your button will look like this and take you to the end-of-day survey: otherwise it will look like this: It is critical that you push the submit button for every tutorial you run. Even if you don't finish the tutorial, still submit!Submitting is the only way we can verify that you attempted each tutorial, which is critical for us to be able to track your progress. TL;DR: Basic tutorial workflow1. work through the tutorial, answering Think! questions and code exercises2. at the end of each tutorial, (even if the tutorial is incomplete) run the Airtable submission code cell3. push the submission button4. if it is the last tutorial of the day, the submission button will also take you to the end-of-the-day survey on a new page; complete that and submit it. Submission FAQs: 1. What if I want to change my answers to previous discussion questions? > You are free to change and resubmit any of the answers and Think! questions as many times as you like. However, please only run the Airtable submission code and click on the link once you are ready to submit.2. Okay, but what if I submitted my Airtable anyway and really want to resubmit?> After making changes, you can re-run the Airtable submission code cell. This will result in a second submission from you for the data. This will make Darryl sad as it will be more work for him to clean up the data later. 3. HELP! I accidentally ran the code to generate the Airtable submission button before I was ready to submit! What do I do?> If you run the code to generate the link, anything that happens afterwards will not be captured. Complete the tutorial and make sure to re-run the Airtable submission again when you are finished, before pressing the submission button. 4. What if I want to work on this on my own later, should I wait to submit until I'm finished?> Please submit wherever you are at the end of the day. It's great that you want to keep working on this, but it's important to see the places where we tried things that didn't quite work out, so we can fix them for next year. Finally, we try to keep the Airtable code as hidden as possible, but if you ever see any calls to `atform` such as `atform.add_event()` in the coding exercises, just know that is for saving Airtable information only. It will not affect the code that is being run around it in any way, so please do not modify, comment out, or worry about any of those lines of code.Now, let's try submitting today's course to Airtable by running the next cell and clicking the button when it appears.
###Code
# @title Airtable Submission Link
from IPython import display
display.HTML(
f"""
<div>
<a href= "{atform.url()}" target="_blank">
<img src="https://github.com/NeuromatchAcademy/course-content-dl/blob/main/tutorials/static/SurveyButton.png?raw=1"
alt="button link to survey" style="width:410px"></a>
</div>""" )
###Output
_____no_output_____
###Markdown
--- Bonus - 60 years of Machine Learning Research in one Plotby [Hendrik Strobelt](http://hendrik.strobelt.com) (MIT-IBM Watson AI Lab) with support from Benjamin Hoover.In this notebook we visualize a subset* of 3,300 articles retrieved from the AllenAI [S2ORC dataset](https://github.com/allenai/s2orc). We represent each paper by a position that is the output of a dimensionality reduction method applied to a vector representation of each paper. The vector representation is the output of a neural network.*The selection is heavily biased by the keywords and methodology we used to filter. Please see the details section to learn about what we did.
###Code
# @title Import `altair` and load the data
!pip install altair vega_datasets --quiet
import requests
import altair as alt # altair is defining data visualizations
# Source data files
# Position data file maps ID to x,y positions
# original link: http://gltr.io/temp/ml_regexv1_cs_ma_citation+_99perc.pos_umap_cosine_100_d0.1.json
POS_FILE = 'https://osf.io/qyrfn/download'
# original link: http://gltr.io/temp/ml_regexv1_cs_ma_citation+_99perc_clean.csv
# Metadata file maps ID to title, abstract, author,....
META_FILE = 'https://osf.io/vfdu6/download'
# data loading and wrangling
def load_data():
positions = pd.read_json(POS_FILE)
positions[['x', 'y']] = positions['pos'].to_list()
meta = pd.read_csv(META_FILE)
return positions.merge(meta, left_on='id', right_on='paper_id')
# load data
data = load_data()
# @title Define Visualization using ALtair
YEAR_PERIOD = "quinquennial" # @param
selection = alt.selection_multi(fields=[YEAR_PERIOD], bind='legend')
data[YEAR_PERIOD] = (data["year"] / 5.0).apply(np.floor) * 5
chart = alt.Chart(data[["x", "y", "authors", "title", YEAR_PERIOD, "citation_count"]], width=800,
height=800).mark_circle(radius=2, opacity=0.2).encode(
alt.Color(YEAR_PERIOD+':O',
scale=alt.Scale(scheme='viridis', reverse=False, clamp=True, domain=list(range(1955,2020,5))),
# legend=alt.Legend(title='Total Records')
),
alt.Size('citation_count',
scale=alt.Scale(type="pow", exponent=1, range=[15, 300])
),
alt.X('x:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
alt.Y('y:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
tooltip=['title', 'authors'],
# size='citation_count',
# color="decade:O",
opacity=alt.condition(selection, alt.value(.8), alt.value(0.2)),
).add_selection(
selection
).interactive()
###Output
_____no_output_____
###Markdown
Let's look at the visualization. Each dot represents one paper. Close dots mean that the respective papers are more closely related than distant ones. The color indicates the 5-year period in which the paper was published. The dot size indicates the citation count (within the S2ORC corpus) as of July 2020. The view is **interactive** and allows for three main interactions. Try them and play around.1. hover over a dot to see a tooltip (title, author)2. select a year in the legend (right) to filter dots3. zoom in/out with scroll -- double click resets view
###Code
chart
###Output
_____no_output_____
###Markdown
QuestionsBy playing around, can you find some answers to the following questions?1. Can you find topical clusters? What cluster might occur because of a filtering error?2. Can you see a temporal trend in the data and clusters?3. Can you determine when deep learning methods started booming?4. Can you find the key papers that were written before the DL "winter" that define milestones for a cluster? (tip: look for large dots of a different color) MethodsHere is what we did:1. Filter all papers that fulfilled the criteria: - are categorized as `Computer Science` or `Mathematics` - one of the following keywords appearing in title or abstract: `"machine learning|artificial intelligence|neural network|(machine|computer) vision|perceptron|network architecture| RNN | CNN | LSTM | BLEU | MNIST | CIFAR |reinforcement learning|gradient descent| Imagenet "`2. per year, remove all papers that are below the 99th percentile of citation count in that year3. embed each paper by using abstract+title in the SPECTER model4. project based on the embedding using UMAP5. visualize using Altair Find Authors
###Code
# @title Edit the `AUTHOR_FILTER` variable to full text search for authors.
AUTHOR_FILTER = "Rush " # @param space at the end means "word border"
### Don't ignore case when searching...
FLAGS = 0
### uncomment to ignore case
# FLAGS = re.IGNORECASE
## --- FILTER CODE.. make it your own ---
import re
data['issel'] = data['authors'].str.contains(AUTHOR_FILTER, na=False, flags=FLAGS, )
if data['issel'].mean()<0.0000000001:
print('No match found')
## --- FROM HERE ON VIS CODE ---
alt.Chart(data[["x", "y", "authors", "title", YEAR_PERIOD, "citation_count", "issel"]], width=800,
height=800) \
.mark_circle(stroke="black", strokeOpacity=1).encode(
alt.Color(YEAR_PERIOD+':O',
scale=alt.Scale(scheme='viridis', reverse=False),
# legend=alt.Legend(title='Total Records')
),
alt.Size('citation_count',
scale=alt.Scale(type="pow", exponent=1, range=[15, 300])
),
alt.StrokeWidth('issel:Q', scale=alt.Scale(type="linear", domain=[0,1], range=[0, 2]), legend=None),
alt.Opacity('issel:Q', scale=alt.Scale(type="linear", domain=[0,1], range=[.2, 1]), legend=None),
alt.X('x:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
alt.Y('y:Q',
scale=alt.Scale(zero=False), axis=alt.Axis(labels=False)
),
tooltip=['title', 'authors'],
).interactive()
###Output
_____no_output_____ |
euclidean-lattices/Tutorial.ipynb | ###Markdown
Euclidean Lattices in SagemathWith this tutorial we will have a brief introduction to Euclidean lattices in [Sagemath](https://doc.sagemath.org/html/fr/a_tour_of_sage/). IntroductionIf you are already familiar with Sagemath or Python, move forward to the Exercises section.Otherwise, here you have some useful tools.*Autocomplete:* Using the TAB key to autocomplete or see the possible methods.- Run the following cell (Shift + Enter or the button Run on the toolbar)
###Code
a = matrix([[1,2],[3,4]]); a #Matrices are seen as lists of lists, and given by rows
###Output
_____no_output_____
###Markdown
- In the following cell type `a.` and press the TAB key ( ->| ) to see all the methods associated to a matrix. - To learn more about a given method, type its name followed by a question mark and then run the cell.
###Code
a.adjoint?
###Output
_____no_output_____
###Markdown
Exercises**Exercise 1** Implement the Gauss reduction algorithm (see Exercise 7 [here](https://anna-somoza.github.io/euclidean-lattices/Exercises-v2.pdf)).The function `while` will be useful. For the elements *u, v* use the class `vector`, which has a function `norm` and the scalar product with `*`.
###Code
def Gauss_reduction(u, v):
#Implement the algorithm here
#Some matrices that you can use to test your implementation.
B1 = matrix([[6, 1],[6,0]])
n = 2
B2 = random_matrix(ZZ, n)
###Output
_____no_output_____
###Markdown
**Exercise 2** Implement the LLL algorithm as given in class (see Algorithm 2 [here](https://anna-somoza.github.io/euclidean-lattices/Theory-v2.pdf)).As input we will have a matrix `B` with the basis vectors as **rows**. To compute the coefficients $\mu_{ij}$ implement the GSO algorithm, or check the method `B.gram_schmidt()` if you want to save time.The matrix functions `ncols`, `set_row` and `swap_rows` could be useful.
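In case those helper methods are unfamiliar, here is a small, self-contained illustration on an example matrix (the values are arbitrary):

```python
B = matrix([[1, 2, 3], [2, 3, 4], [4, 3, 3]])
print(B.ncols())          # number of columns
B.swap_rows(0, 1)         # swap the first two rows in place
B.set_row(2, [0, 0, 1])   # overwrite the third row in place
print(B)
```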
###Code
def LLL(B):
#Implement the algorithm here
###Output
_____no_output_____
###Markdown
Read the documentation for the function `B.LLL()`, and compare its output with the one given by your implementation.
###Code
#Some matrices that you can use to test your implementation.
B1 = matrix([[1,2,3],[2,3,4],[4,3,3]])
n = 3
B2 = random_matrix(ZZ, n)
###Output
_____no_output_____ |
my_notebooks/conv_visualization.ipynb | ###Markdown
Artificial Intelligence Nanodegree Convolutional Neural Networks---In this notebook, we visualize four activation maps in a CNN layer. 1. Import the Image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'images/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# resize to smaller
small_img = cv2.resize(gray_img, None, fx=0.3, fy=0.3)  # scipy.misc.imresize was removed from SciPy
# rescale entries to lie in [0,1]
small_img = small_img.astype("float32")/255
# plot image
plt.imshow(small_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
2. Specify the Filters
###Code
import numpy as np
# TODO: Feel free to modify the numbers here, to try out another filter!
# Please don't change the size of the array ~ :D
#filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
filter_vals = np.array([[-1, -1, -1, -1], [-1, -1, 1, 1], [-1, 1, 1, 1], [-1, 1, 1, 1]])
### do not modify the code below this line ###
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = [filter_1, filter_2, filter_3, filter_4]
# visualize all filters
fig = plt.figure(figsize=(10, 5))
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
width, height = filters[i].shape
for x in range(width):
for y in range(height):
ax.annotate(str(filters[i][x][y]), xy=(y,x),
horizontalalignment='center',
verticalalignment='center',
color='white' if filters[i][x][y]<0 else 'black')
###Output
_____no_output_____
###Markdown
3. Visualize the Activation Maps for Each Filter
###Code
from keras.models import Sequential
from keras.layers.convolutional import Convolution2D
import matplotlib.cm as cm
# plot image
plt.imshow(small_img, cmap='gray')
# define a neural network with a single convolutional layer with one filter
model = Sequential()
model.add(Convolution2D(1, (4, 4), activation='relu', input_shape=(small_img.shape[0], small_img.shape[1], 1)))
# apply convolutional filter and return output
def apply_filter(img, index, filter_list, ax):
    # set the weights of the filter in the convolutional layer to filter_list[index]
    model.layers[0].set_weights([np.reshape(filter_list[index], (4,4,1,1)), np.array([0])])
# plot the corresponding activation map
ax.imshow(np.squeeze(model.predict(np.reshape(img, (1, img.shape[0], img.shape[1], 1)))), cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# visualize all activation maps
fig = plt.figure(figsize=(20, 20))
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
apply_filter(small_img, i, filters, ax)
ax.set_title('Activation Map for Filter %s' % str(i+1))
###Output
_____no_output_____ |
Lectures/Lecture2/lecture2-numpy-arrays.ipynb | ###Markdown
Numpy Arrays
###Code
# This notebook gives a brief introduction to numpy arrays.
# J. Portes, April 13, 2020
import numpy as np
###Output
_____no_output_____
###Markdown
1-Dimensional Arrays
###Code
a = [1,2,3,4]
print('a: ',a)
b = np.array(a)
print('b: ',b)
b
###Output
_____no_output_____
###Markdown
Addition
###Code
# adding two lists
[1,2,3,4] + [1,2,3,4]
# adding two numpy arrays
np.array([1,2,3,4]) + np.array([1,2,3,4])
# adding numpy array and a list
np.array([1,2,3,4]) + [1,2,3,4]
###Output
_____no_output_____
###Markdown
Multiplication
###Code
# multiplying a list
[1,2,3,4] * 2
# multiplying an ndarray
np.array([1,2,3,4]) * 2
# multiplying two lists
[1,2,3,4] * [1,2,3,4]
# multiplying two numpy arrays - this is element-wise
np.array([1,2,3,4]) * np.array([1,2,3,4])
###Output
_____no_output_____
###Markdown
N-Dimensional Arrays
###Code
# 2 x 3 matrix
d1 = np.array([[1,2,3],[4,5,6]])
d1
d1.shape
# 3 x 2 matrix
d2 = np.array([[1,2],[3,4],[5,6]])
d2
###Output
_____no_output_____
###Markdown
Element-wise Multiplication
###Code
# 2 x 3 matrix
d1 = np.array([[1,2,3],[4,5,6]])
d2 = np.array([[1,2],[3,4],[5,6]])
# element-wise multiplication
# (both matrices need to be the same shape)
d1*d1
# can you do element-wise multiplication if
# d1 and d2 have different shapes?
d1*d2
###Output
_____no_output_____
###Markdown
Dot product
###Code
# dot product
d3 = np.dot(d1,d2)
d3
###Output
_____no_output_____
###Markdown
Mathematical Functions
###Code
# square
d3**2
# square root
np.sqrt(d3)
# exponential
np.exp(d3)
###Output
_____no_output_____
###Markdown
`linspace` and `arange`
###Code
x1 = np.linspace(0,1,10) # split into 10
x1
x1 = np.linspace(0,0.9,10) # split into 9
x1
x2 = np.arange(0,1,.1) # increase by 0.1
x2
z = np.ones(20)
z
z = np.zeros((5,2)) # create 2d array
z
###Output
_____no_output_____
###Markdown
getting fancy
###Code
x2 = np.arange(0,1,.1) # increase by 0.1
w = 1
y = np.sin(2*np.pi*w*x2)
y
###Output
_____no_output_____
###Markdown
indexing
###Code
x = np.arange(10)
print('x: ',x)
x[2:5]
x[:-2]
x[1:7:2]
y = np.arange(35).reshape(5,7)
print('y: ',y)
y[1:5:2,::3]
###Output
_____no_output_____
###Markdown
Index arrays
###Code
x = np.arange(10,1,-1)
print('x: ',x)
x[np.array([3, 3, 1, 8])]
###Output
x: [10 9 8 7 6 5 4 3 2]
###Markdown
Boolean or “mask” index arrays
###Code
y = np.arange(35).reshape(5,7)
y
b = y>20
print('b: ',b)
print('y[b]: ',y[b])
###Output
b: [[False False False False False False False]
[False False False False False False False]
[False False False False False False False]
[ True True True True True True True]
[ True True True True True True True]]
y[b]: [21 22 23 24 25 26 27 28 29 30 31 32 33 34]
|
examples/.ipynb_checkpoints/example_cls_fr-checkpoint.ipynb | ###Markdown
SandBox CLS-FRJust a playground to test all the functions available now (version 0.1.1)
###Code
from SCBert.load_data import DataLoader
cls = DataLoader().load_cls_fr()
from SCBert.SCBert import Vectorizer
vectorizer = Vectorizer("flaubert_small")
data = cls.review
print("In that model you have {} layers".format(vectorizer.nb_layer))
text_vectors = vectorizer.vectorize(data, layers=[4,5], word_pooling_method="average", sentence_pooling_method="average")
###Output
100%|█████████▉| 99.59999999999984/100 [07:10<00:01, 4.32s/it]
###Markdown
Explore the data to find groups that make sense, using the vectors computed in the last step.
###Code
from SCBert.SCBert import EmbeddingExplorer
ee = EmbeddingExplorer(data,text_vectors)
%%time
labels = ee.cluster(3, cluster_algo="quick_k-means")
%%time
ee.explore_cls(cls.code, 'PCA')
%%time
ee.extract_keywords(num_top_words=15)
%%time
vectorizer = Vectorizer("flaubert_base")
ee = EmbeddingExplorer(data, "text_vectors.pt")
labels = ee.cluster(3, cluster_algo="quick_k-means")
ee.explore_cls(cls.code, 'PCA')
%%time
ee.extract_keywords(num_top_words=15)
ee.compute_coherence(vectorizer)
###Output
Cluster 0 with keywords :
["n'y" 'mal' 'cerveau' 'scènes' 'point' 'idéal' 'soirée' 'restera'
'panthéon' 'service' 'film' 'nombreux' "l'histoire" "l'enfance" 'vie']
has a coherence of 0.2975486146978854
Cluster 1 with keywords :
['vraiment' 'hauteur' 'meme' 'prix' 'souvent' 'cas' 'image' 'belle'
'lignée' "l'humour" 'grand' 'original' 'films' 'vie' 'prime']
has a coherence of 0.3607958937972219
Cluster 2 with keywords :
['retour' 'porté' 'toujours' 'grâce' 'pouvoir' 'jour' 'propose' 'plutôt'
'subir' 'sent' 'place' "l'aise" 'quitte' 'groupe' 'celle-ci']
has a coherence of 0.2934638176894868
|
d2l/mxnet/chapter_optimization/gd.ipynb | ###Markdown
Gradient Descent:label:`sec_gd`Although *gradient descent* is rarely used directly in deep learning, understanding it is key to understanding the stochastic gradient descent algorithm in the next section. For example, the optimization problem may diverge due to an overly large learning rate, a phenomenon that already shows up in gradient descent. Likewise, *preconditioning* is a common technique in gradient descent that carries over to more advanced algorithms. Let us start with simple one-dimensional gradient descent. One-Dimensional Gradient DescentWhy can the gradient descent algorithm optimize an objective function? One-dimensional gradient descent offers good intuition. Consider a continuously differentiable real-valued function $f: \mathbb{R} \rightarrow \mathbb{R}$. Using a Taylor expansion we obtain$$f(x + \epsilon) = f(x) + \epsilon f'(x) + \mathcal{O}(\epsilon^2).$$:eqlabel:`gd-taylor`That is, to first order, $f(x+\epsilon)$ is given by the function value $f(x)$ and the first derivative $f'(x)$ at $x$. We may assume that moving by $\epsilon$ in the direction of the negative gradient decreases $f$. For simplicity we choose a fixed step size $\eta > 0$ and take $\epsilon = -\eta f'(x)$. Plugging this into the Taylor expansion we get$$f(x - \eta f'(x)) = f(x) - \eta f'^2(x) + \mathcal{O}(\eta^2 f'^2(x)).$$:eqlabel:`gd-taylor-2`If the derivative $f'(x) \neq 0$ does not vanish, we make progress, since $\eta f'^2(x)>0$. Moreover, we can always choose $\eta$ small enough for the higher-order terms to become irrelevant. Hence,$$f(x - \eta f'(x)) \lessapprox f(x).$$This means that if we use$$x \leftarrow x - \eta f'(x)$$to iterate $x$, the value of the function $f(x)$ may decline. Therefore, in gradient descent we first choose an initial value $x$ and a constant $\eta > 0$, and then use them to iterate $x$ until a stopping condition is reached, for example when the magnitude of the gradient $|f'(x)|$ is small enough or the number of iterations reaches a certain value. Below we show how to implement gradient descent. For simplicity we choose the objective function $f(x)=x^2$. Even though we know that $x=0$ minimizes $f(x)$, we still use this simple function to observe how $x$ changes.
###Code
%matplotlib inline
from mxnet import np, npx
from d2l import mxnet as d2l
npx.set_np()
def f(x):  # Objective function
return x ** 2
def f_grad(x):  # Gradient (derivative) of the objective function
return 2 * x
###Output
_____no_output_____
###Markdown
Next, we use $x=10$ as the initial value and assume $\eta=0.2$. Using gradient descent to iterate $x$ 10 times, we can see that the value of $x$ eventually approaches the optimal solution.
###Code
def gd(eta, f_grad):
x = 10.0
results = [x]
for i in range(10):
x -= eta * f_grad(x)
results.append(float(x))
print(f'epoch 10, x: {x:f}')
return results
results = gd(0.2, f_grad)
###Output
epoch 10, x: 0.060466
###Markdown
The progress of optimizing over $x$ can be plotted as follows.
###Code
def show_trace(results, f):
n = max(abs(min(results)), abs(max(results)))
f_line = np.arange(-n, n, 0.01)
d2l.set_figsize()
d2l.plot([f_line, results], [[f(x) for x in f_line], [
f(x) for x in results]], 'x', 'f(x)', fmts=['-', '-o'])
show_trace(results, f)
###Output
_____no_output_____
###Markdown
Learning Rate:label:`subsec_gd-learningrate`The *learning rate* determines whether the objective function converges to a local minimum and when it converges to that minimum. The learning rate $\eta$ can be set by the algorithm designer. Note that if we use a learning rate that is too small, $x$ will update very slowly, requiring more iterations. For example, consider the progress in the same optimization problem with $\eta = 0.05$. As shown below, even after 10 steps we are still very far from the optimal solution.
###Code
show_trace(gd(0.05, f_grad), f)
###Output
epoch 10, x: 3.486784
###Markdown
Conversely, if we use an excessively high learning rate, $\left|\eta f'(x)\right|$ might be too large for the first-order Taylor expansion. That is, the term $\mathcal{O}(\eta^2 f'^2(x))$ in :eqref:`gd-taylor` might become significant. In this case, the iteration of $x$ is not guaranteed to lower the value of $f(x)$. For example, with the learning rate $\eta=1.1$, $x$ overshoots the optimal solution $x=0$ and gradually diverges.
###Code
show_trace(gd(1.1, f_grad), f)
###Output
epoch 10, x: 61.917364
###Markdown
Local MinimaTo demonstrate gradient descent for nonconvex functions, consider the function $f(x) = x \cdot \cos(cx)$, where $c$ is some constant. This function has infinitely many local minima. Depending on our choice of the learning rate, we may end up with only one of many solutions. The example below illustrates how an (unrealistically) high learning rate leads to a poor local minimum.
###Code
c = np.array(0.15 * np.pi)
def f(x):  # Objective function
return x * np.cos(c * x)
def f_grad(x):  # Gradient of the objective function
return np.cos(c * x) - c * x * np.sin(c * x)
show_trace(gd(2, f_grad), f)
###Output
epoch 10, x: -1.528165
###Markdown
Multivariate Gradient DescentNow that we have a better intuition for the univariate case, let us consider the situation where $\mathbf{x} = [x_1, x_2, \ldots, x_d]^\top$. That is, the objective function $f: \mathbb{R}^d \to \mathbb{R}$ maps vectors into scalars. Correspondingly, its gradient is multivariate, too: it is a vector consisting of $d$ partial derivatives:$$\nabla f(\mathbf{x}) = \bigg[\frac{\partial f(\mathbf{x})}{\partial x_1}, \frac{\partial f(\mathbf{x})}{\partial x_2}, \ldots, \frac{\partial f(\mathbf{x})}{\partial x_d}\bigg]^\top.$$Each partial derivative element $\partial f(\mathbf{x})/\partial x_i$ in the gradient indicates the rate of change of $f$ at $\mathbf{x}$ with respect to the input $x_i$. As before in the univariate case, we can use the corresponding Taylor approximation for multivariate functions to think about what to do. In particular,$$f(\mathbf{x} + \boldsymbol{\epsilon}) = f(\mathbf{x}) + \mathbf{\boldsymbol{\epsilon}}^\top \nabla f(\mathbf{x}) + \mathcal{O}(\|\boldsymbol{\epsilon}\|^2).$$:eqlabel:`gd-multi-taylor`In other words, up to second-order terms in $\boldsymbol{\epsilon}$, the direction of steepest descent is given by the negative gradient $-\nabla f(\mathbf{x})$. Choosing a suitable learning rate $\eta > 0$ yields the prototypical gradient descent algorithm:$$\mathbf{x} \leftarrow \mathbf{x} - \eta \nabla f(\mathbf{x}).$$How does this algorithm behave in practice? We construct an objective function $f(\mathbf{x})=x_1^2+2x_2^2$ that takes the two-dimensional vector $\mathbf{x} = [x_1, x_2]^\top$ as input and produces a scalar as output. The gradient is given by $\nabla f(\mathbf{x}) = [2x_1, 4x_2]^\top$. We will observe the trajectory of $\mathbf{x}$ obtained by gradient descent from the initial position $[-5, -2]$. We also need two helper functions: the first is an update function that we apply to the initial value 20 times; the second displays the trajectory of $\mathbf{x}$.
###Code
def train_2d(trainer, steps=20, f_grad=None):  #@save
    """Optimize a 2D objective function with a customized trainer"""
    # s1 and s2 are internal state variables that will be used later
x1, x2, s1, s2 = -5, -2, 0, 0
results = [(x1, x2)]
for i in range(steps):
if f_grad:
x1, x2, s1, s2 = trainer(x1, x2, s1, s2, f_grad)
else:
x1, x2, s1, s2 = trainer(x1, x2, s1, s2)
results.append((x1, x2))
print(f'epoch {i + 1}, x1: {float(x1):f}, x2: {float(x2):f}')
return results
def show_trace_2d(f, results):  #@save
    """Show the trajectory of a 2D variable during optimization"""
d2l.set_figsize()
d2l.plt.plot(*zip(*results), '-o', color='#ff7f0e')
x1, x2 = np.meshgrid(np.arange(-5.5, 1.0, 0.1),
np.arange(-3.0, 1.0, 0.1))
d2l.plt.contour(x1, x2, f(x1, x2), colors='#1f77b4')
d2l.plt.xlabel('x1')
d2l.plt.ylabel('x2')
###Output
_____no_output_____
###Markdown
Next, we observe the trajectory of the optimization variable $\mathbf{x}$ for learning rate $\eta = 0.1$. We can see that after 20 steps the value of $\mathbf{x}$ approaches its minimum at $[0, 0]$. Progress is fairly smooth, albeit rather slow.
###Code
def f_2d(x1, x2):  # Objective function
return x1 ** 2 + 2 * x2 ** 2
def f_2d_grad(x1, x2):  # Gradient of the objective function
return (2 * x1, 4 * x2)
def gd_2d(x1, x2, s1, s2, f_grad):
g1, g2 = f_grad(x1, x2)
return (x1 - eta * g1, x2 - eta * g2, 0, 0)
eta = 0.1
show_trace_2d(f_2d, train_2d(gd_2d, f_grad=f_2d_grad))
###Output
epoch 20, x1: -0.057646, x2: -0.000073
###Markdown
Adaptive MethodsAs we saw in :numref:`subsec_gd-learningrate`, getting the learning rate $\eta$ "just right" is tricky. If we pick it too small, we make little progress; if we pick it too large, the solution oscillates and may even diverge. What if we could determine $\eta$ automatically, or get rid of having to select a learning rate at all? Second-order methods that consider not only the value and gradient of the objective function but also its curvature can help in this case. While these methods cannot be applied to deep learning directly due to the computational cost, they provide useful intuition for how to design advanced optimization algorithms that mimic many of the desirable properties of the algorithms outlined below. Newton's MethodReviewing the Taylor expansion of some function $f: \mathbb{R}^d \rightarrow \mathbb{R}$, we can in fact write it as$$f(\mathbf{x} + \boldsymbol{\epsilon}) = f(\mathbf{x}) + \boldsymbol{\epsilon}^\top \nabla f(\mathbf{x}) + \frac{1}{2} \boldsymbol{\epsilon}^\top \nabla^2 f(\mathbf{x}) \boldsymbol{\epsilon} + \mathcal{O}(\|\boldsymbol{\epsilon}\|^3).$$:eqlabel:`gd-hot-taylor`To avoid cumbersome notation we define $\mathbf{H} \stackrel{\mathrm{def}}{=} \nabla^2 f(\mathbf{x})$ to be the Hessian of $f$, which is a $d \times d$ matrix. When $d$ is small and the problem is simple, $\mathbf{H}$ is easy to compute. For deep neural networks, however, $\mathbf{H}$ may be prohibitively large, since storing $\mathcal{O}(d^2)$ entries is expensive, and computing it via backpropagation makes matters even worse. For now, let us ignore such considerations and look at what algorithm we would get. After all, the minimum of $f$ satisfies $\nabla f = 0$. Following the calculus rules of :numref:`sec_calculus`, by taking the derivative of :eqref:`gd-hot-taylor` with respect to $\boldsymbol{\epsilon}$ and ignoring the unimportant higher-order terms, we obtain$$\nabla f(\mathbf{x}) + \mathbf{H} \boldsymbol{\epsilon} = 0 \text{ and hence }\boldsymbol{\epsilon} = -\mathbf{H}^{-1} \nabla f(\mathbf{x}).$$That is, as part of the optimization problem we need to invert the Hessian matrix $\mathbf{H}$. As a simple example, for $f(x) = \frac{1}{2} x^2$ we have $\nabla f(x) = x$ and $\mathbf{H} = 1$. Hence for any $x$ we obtain $\epsilon = -x$. In other words, a single step is enough to converge perfectly without any further adjustment. We got a bit lucky here: the Taylor expansion was exact, since $f(x+\epsilon)= \frac{1}{2} x^2 + \epsilon x + \frac{1}{2} \epsilon^2$. Let us see what happens in other problems. Given a convex hyperbolic cosine function $\cosh(cx)$, where $c$ is some constant, we can see that the global minimum at $x=0$ is reached after a few iterations.
###Code
c = np.array(0.5)
def f(x):  # Objective function
return np.cosh(c * x)
def f_grad(x):  # Gradient of the objective function
return c * np.sinh(c * x)
def f_hess(x):  # Hessian of the objective function
return c**2 * np.cosh(c * x)
def newton(eta=1):
x = 10.0
results = [x]
for i in range(10):
x -= eta * f_grad(x) / f_hess(x)
results.append(float(x))
print('epoch 10, x:', x)
return results
show_trace(newton(), f)
###Output
epoch 10, x: 0.0
###Markdown
Now let us consider a nonconvex function, such as $f(x) = x \cos(c x)$, where $c$ is some constant. Note that in Newton's method we end up dividing by the Hessian. This means that if the second derivative is negative, the value of $f$ may increase. That is a fatal flaw of the algorithm! Let us see what happens in practice.
###Code
c = np.array(0.15 * np.pi)
def f(x):  # Objective function
return x * np.cos(c * x)
def f_grad(x):  # Gradient of the objective function
return np.cos(c * x) - c * x * np.sin(c * x)
def f_hess(x):  # Hessian of the objective function
return - 2 * c * np.sin(c * x) - x * c**2 * np.cos(c * x)
show_trace(newton(), f)
###Output
epoch 10, x: 26.834133
###Markdown
This went spectacularly wrong. How can we fix it? One way is to "fix" it by taking the absolute value of the Hessian. Another strategy is to bring back the learning rate. This seems to defeat the purpose, but not quite: having second-order information lets us be cautious whenever the curvature is large, while taking larger steps whenever the objective function is flat. Let us see how this works with a slightly smaller learning rate, say $\eta = 0.5$. As we can see, we now have quite an efficient algorithm.
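The cell below demonstrates the learning-rate fix. As a side note, the absolute-Hessian variant mentioned above could be sketched with a hypothetical helper that reuses the `f_grad` and `f_hess` defined earlier:

```python
def newton_abs(eta=1):
    x = 10.0
    results = [x]
    for i in range(10):
        # Damp the update with the absolute value of the Hessian
        x -= eta * f_grad(x) / abs(f_hess(x))
        results.append(float(x))
    print('epoch 10, x:', x)
    return results

# show_trace(newton_abs(), f)
```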
###Code
show_trace(newton(0.5), f)
###Output
epoch 10, x: 7.26986
|
kalmanjax/notebooks/classification.ipynb | ###Markdown
Gaussian Process Classification via Kalman Smoothing Import and load data
###Code
import sys
sys.path.insert(0, '../')
import numpy as np
from jax.experimental import optimizers
import matplotlib.pyplot as plt
import time
from sde_gp import SDEGP
import approximate_inference as approx_inf
import priors
import likelihoods
from utils import softplus_list, plot
pi = 3.141592653589793
plot_intermediate = False
print('generating some data')
np.random.seed(99)
N = 1000 # number of training points
x = 100 * np.random.rand(N)
f = lambda x_: 6 * np.sin(pi * x_ / 10.0) / (pi * x_ / 10.0 + 1)
y_ = f(x) + np.math.sqrt(0.05)*np.random.randn(x.shape[0])
y = np.sign(y_)
y[y == -1] = 0
x_test = np.linspace(np.min(x)-5.0, np.max(x)+5.0, num=500)
y_test = np.sign(f(x_test) + np.math.sqrt(0.05)*np.random.randn(x_test.shape[0]))
y_test[y_test == -1] = 0
plt.figure(1, figsize=(12, 5))
plt.plot(x, y, 'b+', label='training observations')
plt.plot(x_test, y_test, 'r+', alpha=0.4, label='test observations')
plt.legend();
###Output
generating some data
###Markdown
Build the GP model
###Code
var_f = 1. # GP variance
len_f = 5.0 # GP lengthscale
prior = priors.Matern52(variance=var_f, lengthscale=len_f)
lik = likelihoods.Bernoulli(link='logit')
inf_method = approx_inf.ExpectationPropagation(power=0.9, intmethod='UT')
# inf_method = approx_inf.VariationalInference(intmethod='GH')
# inf_method = approx_inf.VariationalInference(intmethod='UT')
# inf_method = approx_inf.ExtendedEP(power=0)
# inf_method = approx_inf.ExtendedKalmanSmoother()
# inf_method = approx_inf.GaussHermiteKalmanSmoother()
# inf_method = approx_inf.StatisticallyLinearisedEP(intmethod='UT')
# inf_method = approx_inf.UnscentedKalmanSmoother()
model = SDEGP(prior=prior, likelihood=lik, t=x, y=y, approx_inf=inf_method)
###Output
/Users/wilkinw1/Library/Python/3.7/lib/python/site-packages/jax/lib/xla_bridge.py:116: UserWarning: No GPU/TPU found, falling back to CPU.
warnings.warn('No GPU/TPU found, falling back to CPU.')
###Markdown
Set up the optimiser
###Code
opt_init, opt_update, get_params = optimizers.adam(step_size=2e-1)
# parameters should be a 2-element list [param_prior, param_likelihood]
opt_state = opt_init([model.prior.hyp, model.likelihood.hyp])
def gradient_step(i, state, mod):
params = get_params(state)
mod.prior.hyp = params[0]
mod.likelihood.hyp = params[1]
# grad(Filter) + Smoother:
neg_log_marg_lik, gradients = mod.run()
# neg_log_marg_lik, gradients = mod.run_two_stage() # <-- less elegant but reduces compile time
prior_params = softplus_list(params[0])
if (i % 10) == 0:
print('iter %2d: var_f=%1.2f len_f=%1.2f, nlml=%2.2f' %
(i, prior_params[0], prior_params[1], neg_log_marg_lik))
if plot_intermediate:
plot(mod, i)
return opt_update(i, gradients, state)
###Output
_____no_output_____
###Markdown
Optimise the hyperparameters and site parameters
###Code
print('optimising the hyperparameters ...')
t0 = time.time()
for j in range(200):
opt_state = gradient_step(j, opt_state, model)
t1 = time.time()
print('optimisation time: %2.2f secs' % (t1-t0))
###Output
optimising the hyperparameters ...
iter 0: var_f=1.00 len_f=5.00, nlml=469.46
iter 10: var_f=2.50 len_f=3.62, nlml=452.80
iter 20: var_f=3.80 len_f=4.48, nlml=448.21
iter 30: var_f=4.78 len_f=5.34, nlml=446.25
iter 40: var_f=5.56 len_f=5.44, nlml=445.29
iter 50: var_f=6.19 len_f=5.41, nlml=444.81
iter 60: var_f=6.72 len_f=5.62, nlml=444.35
iter 70: var_f=7.20 len_f=5.80, nlml=443.96
iter 80: var_f=7.65 len_f=5.88, nlml=443.70
iter 90: var_f=8.07 len_f=5.96, nlml=443.47
iter 100: var_f=8.47 len_f=6.06, nlml=443.25
iter 110: var_f=8.85 len_f=6.15, nlml=443.07
iter 120: var_f=9.22 len_f=6.22, nlml=442.91
iter 130: var_f=9.58 len_f=6.29, nlml=442.76
iter 140: var_f=9.93 len_f=6.36, nlml=442.62
iter 150: var_f=10.27 len_f=6.43, nlml=442.49
iter 160: var_f=10.60 len_f=6.49, nlml=442.38
iter 170: var_f=10.92 len_f=6.55, nlml=442.27
iter 180: var_f=11.24 len_f=6.61, nlml=442.18
iter 190: var_f=11.55 len_f=6.66, nlml=442.08
optimisation time: 18.88 secs
###Markdown
Make predictions
###Code
x_plot = np.linspace(np.min(x)-10.0, np.max(x)+10.0, num=500)
print('calculating the posterior predictive distribution ...')
t0 = time.time()
nlpd = model.negative_log_predictive_density(t=x_test, y=y_test)
posterior_mean, posterior_cov = model.predict(t=x_plot)
t1 = time.time()
print('prediction time: %2.2f secs' % (t1-t0))
print('test NLPD: %1.2f' % nlpd)
###Output
calculating the posterior predictive distribution ...
prediction time: 2.26 secs
test NLPD: 0.44
###Markdown
Sample from the posterior distribution
###Code
print('sampling from the posterior ...')
t0 = time.time()
posterior_samp = model.posterior_sample(20, t=x_plot)
t1 = time.time()
print('sampling time: %2.2f secs' % (t1-t0))
###Output
sampling from the posterior ...
sampling time: 4.96 secs
###Markdown
Plot the posterior
###Code
lb = posterior_mean - 1.96 * posterior_cov ** 0.5
ub = posterior_mean + 1.96 * posterior_cov ** 0.5
link_fn = model.likelihood.link_fn
print('plotting ...')
plt.figure(2, figsize=(12, 5))
plt.clf()
plt.plot(x, y, 'b+', label='training observations')
plt.plot(x_test, y_test, 'r+', alpha=0.4, label='test observations')
plt.plot(x_plot, link_fn(posterior_mean), 'm', label='posterior mean')
plt.fill_between(x_plot, link_fn(lb), link_fn(ub), color='m', alpha=0.05, label='95% confidence')
plt.plot(x_plot, link_fn(posterior_samp), 'm', alpha=0.15)
plt.xlim(x_plot[0], x_plot[-1])
plt.legend()
plt.title('GP classification via Kalman smoothing. Test NLPD: %1.2f' % nlpd)
plt.xlabel('time - $t$');
###Output
plotting ...
|
hw1_starter_template.ipynb | ###Markdown
Part A. ANOVAAdditional Material: ANOVA tutorialhttps://datascienceplus.com/one-way-anova-in-r/Jet lag is a common problem for people traveling across multiple time zones, but people can gradually adjust to the new time zone since the exposure of the shifted light schedule to their eyes can resets the internal circadian rhythm in a process called “phase shift”. Campbell and Murphy (1998) in a highly controversial study reported that the human circadian clock can also be reset by only exposing the back of the knee to light, with some hailing this as a major discovery and others challenging aspects of the experimental design. The table below is taken from a later experiment by Wright and Czeisler (2002) that re-examined the phenomenon. The new experiment measured circadian rhythm through the daily cycle of melatonin production in 22 subjects randomly assigned to one of three light treatments. Subjects were woken from sleep and for three hours were exposed to bright lights applied to the eyes only, to the knees only or to neither (control group). The effects of treatment to the circadian rhythm were measured two days later by the magnitude of phase shift (measured in hours) in each subject’s daily cycle of melatonin production. A negative measurement indicates a delay in melatonin production, a predicted effect of light treatment, while a positive number indicates an advance.Raw data of phase shift, in hours, for the circadian rhythm experiment|Treatment|Phase Shift (hr) ||:--------|:-------------------------------------------||Control |0.53, 0.36, 0.20, -0.37, -0.60, -0.64, -0.68, -1.27||Knees |0.73, 0.31, 0.03, -0.29, -0.56, -0.96, -1.61 ||Eyes |-0.78, -0.86, -1.35, -1.48, -1.52, -2.04, -2.83 | Question A1 - 3 ptsConsider the following incomplete R output:|Source|Df |Sum of Squares|Mean Squares|F-statistics|p-value||:----:|:-:|:------------:|:----------:|:----------:|:-----:||Treatments|?|?|3.6122|?|0.004||Error|?|9.415|?| | ||TOTAL|?|?| | | |Fill in the missing values in the analysis of the variance table. Question A2 - 3 ptsUse $\mu_1$, $\mu_2$, and $\mu_3$ as notation for the three mean parameters and define these parameters clearly based on the context of the topic above. Find the estimates of these parameters. Question A3 - 5 ptsUse the ANOVA table in Question A1 to answer the following questions:a. **1 pts** Write the null hypothesis of the ANOVA $F$-test, $H_0$b. **1 pts** Write the alternative hypothesis of the ANOVA $F$-test, $H_A$c. **1 pts** Fill in the blanks for the degrees of freedom of the ANOVA $F$-test statistic: $F(____, _____)$d. **1 pts** What is the p-value of the ANOVA $F$-test?e. **1 pts** According the the results of the ANOVA $F$-test, does light treatment affect phase shift? Use an $\alpha$-level of 0.05. Part B. Simple Linear RegressionWe are going to use regression analysis to estimate the performance of CPUs based on the maximum number of channels in the CPU. This data set comes from the UCI Machine Learning Repository.The data file includes the following columns:* *vendor*: vendor of the CPU* *chmax*: maximum channels in the CPU* *performance*: published relative performance of the CPUThe data is in the file "machine.csv". To read the data in `R`, save the file in your working directory (make sure you have changed the directory if different from the R working directory) and read the data using the `R` function `read.csv()`.
###Code
# Read in the data
data = read.csv("machine.csv", head = TRUE, sep = ",")
# Show the first few rows of data
head(data, 3)
###Output
_____no_output_____
###Markdown
Question B1: Exploratory Data Analysis - 9 ptsa. **3 pts** Use a scatter plot to describe the relationship between CPU performance and the maximum number of channels. Describe the general trend (direction and form). Include plots and R-code used.
###Code
# Your code here...
###Output
_____no_output_____
###Markdown
b. **3 pts** What is the value of the correlation coefficient between _performance_ and _chmax_? Please interpret the strength of the correlation based on the correlation coefficient.
###Code
# Your code here...
###Output
_____no_output_____
###Markdown
c. **2 pts** Based on this exploratory analysis, would you recommend a simple linear regression model for the relationship?d. **1 pts** Based on the analysis above, would you pursue a transformation of the data? *Do not transform the data.* Question B2: Fitting the Simple Linear Regression Model - 11 ptsFit a linear regression model, named *model1*, to evaluate the relationship between performance and the maximum number of channels. *Do not transform the data.* The function you should use in R is:
###Code
# Your code here...
model1 = lm(performance ~ chmax, data)
###Output
_____no_output_____
###Markdown
a. **3 pts** What are the model parameters and what are their estimates? b. **2 pts** Write down the estimated simple linear regression equation.c. **2 pts** Interpret the estimated value of the $\beta_1$ parameter in the context of the problem.d. **2 pts** Find a 95% confidence interval for the $\beta_1$ parameter. Is $\beta_1$ statistically significant at this level?e. **2 pts** Is $\beta_1$ statistically significantly positive at an $\alpha$-level of 0.01? What is the approximate p-value of this test? Question B3: Checking the Assumptions of the Model - 8 ptsCreate and interpret the following graphs with respect to the assumptions of the linear regression model. In other words, comment on whether there are any apparent departures from the assumptions of the linear regression model. Make sure that you state the model assumptions and assess each one. Each graph may be used to assess one or more model assumptions.a. **2 pts** Scatterplot of the data with *chmax* on the x-axis and *performance* on the y-axis
###Code
# Your code here...
###Output
_____no_output_____
###Markdown
**Model Assumption(s) it checks:****Interpretation:**b. **3 pts** Residual plot - a plot of the residuals, $\hat\epsilon_i$, versus the fitted values, $\hat{y}_i$
###Code
# Your code here...
###Output
_____no_output_____
###Markdown
**Model Assumption(s) it checks:****Interpretation:**c. **3 pts** Histogram and q-q plot of the residuals
###Code
# Your code here...
###Output
_____no_output_____
###Markdown
**Model Assumption(s) it checks:****Interpretation:** Question B4: Improving the Fit - 10 ptsa. **2 pts** Use a Box-Cox transformation (`boxCox()`) to find the optimal $\lambda$ value rounded to the nearest half integer. What transformation of the response, if any, does it suggest to perform?
###Code
# Your code here...
###Output
_____no_output_____
###Markdown
b. **2 pts** Create a linear regression model, named *model2*, that uses the log transformed *performance* as the response, and the log transformed *chmax* as the predictor. Note: The variable *chmax* has a couple of zero values which will cause problems when taking the natural log. Please add one to the predictor before taking the natural log of it
###Code
# Your code here...
###Output
_____no_output_____
###Markdown
e. **2 pts** Compare the R-squared values of *model1* and *model2*. Did the transformation improve the explanatory power of the model?c. **4 pts** Similar to Question B3, assess and interpret all model assumptions of *model2*. A model is considered a good fit if all assumptions hold. Based on your interpretation of the model assumptions, is *model2* a good fit?
###Code
# Your code here...
###Output
_____no_output_____
###Markdown
Question B5: Prediction - 3 ptsSuppose we are interested in predicting CPU performance when `chmax = 128`. Please make a prediction using both *model1* and *model2* and provide the 95% prediction interval of each prediction on the original scale of the response, *performance*. What observations can you make about the result in the context of the problem?
###Code
# Your code here...
###Output
_____no_output_____
###Markdown
Part C. ANOVA - 8 ptsWe are going to continue using the CPU data set to analyse various vendors in the data set. There are over 20 vendors in the data set. To simplify the task, we are going to limit our analysis to three vendors, specifically, honeywell, hp, and nas. The code to filter for those vendors is provided below.
###Code
# Filter for honeywell, hp, and nas
data2 = data[data$vendor %in% c("honeywell", "hp", "nas"), ]
data2$vendor = factor(data2$vendor)
###Output
_____no_output_____
###Markdown
1. **2 pts** Using `data2`, create a boxplot of *performance* and *vendor*, with *performance* on the vertical axis. Interpret the plots.
###Code
# Your code here...
###Output
_____no_output_____
###Markdown
2. **3 pts** Perform an ANOVA F-test on the means of the three vendors. Using an $\alpha$-level of 0.05, can we reject the null hypothesis that the means of the three vendors are equal? Please interpret.
###Code
# Your code here...
###Output
_____no_output_____
###Markdown
3. **3 pts** Perform a Tukey pairwise comparison between the three vendors. Using an $\alpha$-level of 0.05, which means are statistically significantly different from each other?
###Code
# Your code here...
###Output
_____no_output_____ |
MAE6226/Lesson_10.ipynb | ###Markdown
Now we want to create the free stream. Note: we use u_inf = 1 in most cases, but we could use any number. However, because we are using the assumptions of potential flow, we won't see any separation or any real changes in the streamlines. We may see differences in the Cp values if we don't change the Cp limits in the plot. The things that will be different are the strengths of the source sheets on the panels, as they will need to increase if the freestream velocity increases.
###Code
# create the freestream
class Freestream:
def __init__(self, u_inf = 1.0, alpha = 0.0):
self.u_inf = u_inf
self.alpha = numpy.radians(alpha)
u_inf = 1.0 # freestream velocity
alpha = 0.0 # angle of attack
freestream = Freestream(u_inf, alpha) # create the object freestream
# create the flow tangency boundary condition where normal velocity = 0
def integral(x, y, panel, dxdz, dydz):
def integrand(s):
return (((x - (panel.xa - math.sin(panel.beta) * s)) * dxdz +
(y - (panel.ya + math.cos(panel.beta) * s)) * dydz) /
((x - (panel.xa - math.sin(panel.beta) * s))**2 +
(y - (panel.ya + math.cos(panel.beta) * s))**2) )
return integrate.quad(integrand, 0.0, panel.length)[0]
# solve the linear system
def build_matrix(panels):
N = len(panels)
A = numpy.empty((N,N), dtype = float)
numpy.fill_diagonal(A, 0.5)
for i, p_i in enumerate(panels):
for j, p_j in enumerate(panels):
if i != j:
A[i,j] = 0.5 / math.pi*integral(p_i.xc, p_i.yc, p_j, math.cos(p_i.beta), math.sin(p_i.beta))
return A
def build_rhs(panels, freestream):
b = numpy.empty(len(panels), dtype = float)
for i,panel in enumerate(panels):
b[i] = -freestream.u_inf*math.cos(freestream.alpha - panel.beta)
return b
# compute the matrices
A = build_matrix(panels)
b = build_rhs(panels, freestream)
# solve the linear system
sigma = numpy.linalg.solve(A,b)
for i, panel in enumerate(panels):
panel.sigma = sigma[i]
# find the surface pressure coeff
def get_tangential_velocity(panels, freestream):
N = len(panels)
A = numpy.empty((N,N), dtype = float)
numpy.fill_diagonal(A, 0.0)
for i, p_i in enumerate(panels):
for j, p_j in enumerate(panels):
if i!=j:
A[i,j] = 0.5 / math.pi*integral(p_i.xc, p_i.yc, p_j, -math.sin(p_i.beta), math.cos(p_i.beta))
b = freestream.u_inf*numpy.sin([freestream.alpha - panel.beta for panel in panels])
sigma = numpy.array([panel.sigma for panel in panels])
vt = numpy.dot(A,sigma) + b
for i, panel in enumerate(panels):
panel.vt = vt[i]
# compute the tangential velocity
get_tangential_velocity(panels, freestream)
# get the pressure coeff
def get_pressure_coefficient(panels, freestream):
for panel in panels:
panel.cp = 1.0 - (panel.vt / freestream.u_inf)**2
# compute the surface pressure coefficients
get_pressure_coefficient(panels, freestream)
# create the theoretical solution to compare to
voverVsquared=numpy.array([0.0, 0.64, 1.01, 1.241, 1.378, 1.402, 1.411, 1.411,
1.399, 1.378, 1.35, 1.288, 1.228, 1.166, 1.109, 1.044,
0.956, 0.906, 0.0])
#print(voverVsquared)
xtheo=numpy.array([0.0, 0.5, 1.25, 2.5, 5.0, 7.5, 10.0, 15.0, 20.0, 25.0, 30.0,
40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 95.0, 100.0])
xtheo /= 100
#print(xtheo)
# plot it
pyplot.figure(figsize=(10, 6))
pyplot.grid()
pyplot.xlabel('x', fontsize=16)
pyplot.ylabel('$C_p$', fontsize=16)
pyplot.plot([panel.xc for panel in panels if panel.loc == 'upper'],
[panel.cp for panel in panels if panel.loc == 'upper'],
label='upper',
color='r', linewidth=1, marker='x', markersize=8)
pyplot.plot([panel.xc for panel in panels if panel.loc == 'lower'],
[panel.cp for panel in panels if panel.loc == 'lower'],
label='lower',
color='b', linewidth=0, marker='d', markersize=6)
pyplot.plot(xtheo, 1-voverVsquared,
label='theoretical',
color='k', linestyle='--',linewidth=2)
pyplot.legend(loc='best', prop={'size':14})
pyplot.xlim(-0.1, 1.1)
pyplot.ylim(1.0, -0.6)
pyplot.title('Number of panels : {}'.format(N));
###Output
_____no_output_____
###Markdown
Here we see that all of the Cp points on the top and bottom of the airfoil surface are the same. This makes sense as the airfoil is symmetric and is at 0 deg angle of attack, so the flow should be symmetric about the top and bottom of the airfoil. For the object to be a closed surface we need to check that the sum of all of the source strengths is = 0.
###Code
# find the accuracy / is the surface closed
accuracy = sum([panel.sigma*panel.length for panel in panels])
print(' --> sum of the source / sink strengths: {}'.format(accuracy))
# create the streamlines using the velocity in the u and v directions
def get_velocity_field(panels, freestream, X, Y):
u = freestream.u_inf*math.cos(freestream.alpha)*numpy.ones_like(X,dtype=float)
v = freestream.u_inf*math.sin(freestream.alpha)*numpy.ones_like(X, dtype=float)
vec_integral = numpy.vectorize(integral)
for panel in panels:
u = u + panel.sigma / (2.0*math.pi)*vec_integral(X,Y,panel, 1.0, 0.0)
v = v + panel.sigma / (2.0*math.pi)*vec_integral(X,Y,panel, 0.0, 1.0)
return u, v
# create the mesh grid
nx, ny = 20,20
x_start, x_end = -1.0, 2.0
y_start, y_end = -0.3, 0.3
X, Y = numpy.meshgrid(numpy.linspace(x_start, x_end, nx), \
numpy.linspace(y_start, y_end, ny))
# Compute the velocity field
u, v = get_velocity_field(panels, freestream, X, Y)
# plot it
width = 10
pyplot.figure(figsize=(width, width))
pyplot.xlabel('x', fontsize=16)
pyplot.ylabel('y', fontsize=16)
pyplot.streamplot(X, Y, u, v,
density=1, linewidth=1, arrowsize=1, arrowstyle='->')
pyplot.fill([panel.xc for panel in panels],
[panel.yc for panel in panels],
color='k', linestyle='solid', linewidth=2, zorder=2)
pyplot.axis('scaled', adjustable='box')
pyplot.xlim(x_start, x_end)
pyplot.ylim(y_start, y_end)
pyplot.title('Streamlines around a NACA 0012 airfoil (AoA = ${}^o$)'.format(alpha),
fontsize=16);
# compute the pressure field
cp = 1.0 - (u**2+v**2) / freestream.u_inf**2
# plot it
width = 10
pyplot.figure(figsize=(width, width))
pyplot.xlabel('x', fontsize=16)
pyplot.ylabel('y', fontsize=16)
contf = pyplot.contourf(X, Y, cp,
levels=numpy.linspace(-2.0, 1.0, 100), extend='both')
cbar = pyplot.colorbar(contf,
orientation='horizontal',
shrink=0.5, pad = 0.1,
ticks=[-2.0, -1.0, 0.0, 1.0])
cbar.set_label('$C_p$', fontsize=16)
pyplot.fill([panel.xc for panel in panels],
[panel.yc for panel in panels],
color='k', linestyle='solid', linewidth=2, zorder=2)
pyplot.axis('scaled', adjustable='box')
pyplot.xlim(x_start, x_end)
pyplot.ylim(y_start, y_end)
pyplot.title('Contour of pressure field', fontsize=16);
###Output
_____no_output_____ |
month06/DATASCIENCE/DS_day07_student/code/Airline_customer_analysis_code/code/.ipynb_checkpoints/k-means_2-checkpoint.ipynb | ###Markdown
Airline customer value analysis
Objective: use the airline's customer data to segment its customers, analyse the characteristics of each customer segment, and compare the customer value of the different segments, so that personalised services and tailored marketing strategies can be offered to segments of different value.
Read the data, specifying the encoding as gb18030.
Descriptive analysis of the data.
Data preprocessing: 1. Remove records whose ticket price is missing. 2. Keep only records where the ticket price is not 0, the average discount rate is not 0, and the total flight kilometres are greater than 0.
Feature construction:
L: LOAD_TIME (end of the observation window) - FFP_DATE (membership join date)
R: LAST_TO_END, time from the last flight to the end of the observation window
F: FLIGHT_COUNT, number of flights within the observation window
M: SEG_KM_SUM, total flight kilometres within the observation window
C: avg_discount, average discount rate
###Code
## Select the required features
## Construct the L feature
# ## Merge the features
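# ------------------------------------------------------------------
# Hedged sketch (added for illustration, not from the original notebook):
# preprocessing and LRFMC feature construction as described above.
# The dataframe name `air`, the file name 'air_data.csv' and the fare columns
# SUM_YR_1 / SUM_YR_2 are assumptions; the remaining column names
# (LOAD_TIME, FFP_DATE, LAST_TO_END, FLIGHT_COUNT, SEG_KM_SUM, avg_discount)
# come from the description above and should be checked against the file.
# ------------------------------------------------------------------
import pandas as pd

air = pd.read_csv('air_data.csv', encoding='gb18030')   # assumed file name
# 1. remove records with a missing ticket price
air = air[air['SUM_YR_1'].notnull() & air['SUM_YR_2'].notnull()]
# 2. keep only: ticket price != 0, average discount != 0, total kilometres > 0
air = air[(air['SUM_YR_1'] != 0) & (air['avg_discount'] != 0) & (air['SEG_KM_SUM'] > 0)]
# LRFMC features
features = pd.DataFrame()
features['L'] = (pd.to_datetime(air['LOAD_TIME']) - pd.to_datetime(air['FFP_DATE'])).dt.days  # membership length in days
features['R'] = air['LAST_TO_END']      # time since the last flight
features['F'] = air['FLIGHT_COUNT']     # number of flights
features['M'] = air['SEG_KM_SUM']       # total kilometres flown
features['C'] = air['avg_discount']     # average discount rate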
###Output
_____no_output_____ |
section4/.ipynb_checkpoints/Section7_Lecture58-checkpoint.ipynb | ###Markdown
Grouping and Summarizing Data by Categories
###Code
import pandas as pd
uni=pd.read_csv("cwurData.csv")
uni.head(5)
groupdf=uni.groupby('country') #group the data according to country
groupdf.describe() #all stats
g1 = uni.groupby( [ "country", "influence"] ).count() #count statistics on basis of country and influence
x=pd.DataFrame(groupdf.size().reset_index(name = "InfluenceAggregate")) #aggregate country wise influence
x.head(7)
r=pd.read_csv("rainfall1901_2015.csv")
r.head(6)
m1=r.groupby(['SUBDIVISION']).mean() #average rainfall for the different sub-divisions in India
m1.head(7)
m2=r.groupby(['SUBDIVISION','ANNUAL']).mean() #average ANNUAL rainfall for the different sub-divisions in India
m2.head(10)
m3=r.groupby(['SUBDIVISION','ANNUAL','YEAR']).mean()
m3.head(6)
###Output
_____no_output_____
###Markdown
all statistics
###Code
m4=r.groupby(['SUBDIVISION','ANNUAL','YEAR']).describe()
m4.head(6)
g1 = r.groupby( [ "SUBDIVISION", "YEAR","ANNUAL"] )
g1.describe()
print(type(g1))
#convert grouped data to dataframe
g2=r.groupby( [ "SUBDIVISION", "YEAR","ANNUAL"] ).size().to_frame(name = 'count').reset_index()
g2.head(11)
###Output
_____no_output_____ |
Project Files/ML Files/Pegasos Quantum Support Vector Classifier.ipynb | ###Markdown
**Pegasos Quantum Support Vector Classifier** There's another SVM-based algorithm that benefits from the quantum kernel method. Here, we introduce an implementation of another classification algorithm, an alternative version to the QSVC shown above. This classifier implements the Pegasos algorithm from the paper "Pegasos: Primal Estimated sub-GrAdient SOlver for SVM" by Shalev-Shwartz et al., see: https://home.ttic.edu/~nati/Publications/PegasosMPB.pdf. This algorithm is an alternative to the dual optimization used by the `scikit-learn` package, still benefits from the kernel trick, and yields a training complexity that is independent of the size of the training set. Thus, `PegasosQSVC` is expected to train faster than QSVC for sufficiently large training sets. The algorithm can be used as a direct replacement of `QSVC` with some hyper-parameterization.
###Code
from sklearn.datasets import make_blobs
# example dataset
features, labels = make_blobs(n_samples=20, n_features=2, centers=2, random_state=3, shuffle=True)
###Output
_____no_output_____
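###Markdown
For intuition, here is a minimal sketch of the classical, linear Pegasos update described in the paper (an illustrative assumption added here, not the qiskit implementation): at step $t$ a random training example is drawn, the learning rate is $\eta_t = 1/(\lambda t)$, the weights are shrunk by the regularization term and, on a margin violation, pushed towards the sampled example.
###Code
# Hedged sketch of classical (linear, non-kernel) Pegasos, for illustration only.
import numpy as np

def pegasos_linear(X, y, lam=0.01, num_steps=1000, seed=0):
    """X: (n_samples, n_features) array, y: labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for t in range(1, num_steps + 1):
        i = rng.integers(len(y))            # pick a random training example
        eta = 1.0 / (lam * t)               # step size 1/(lambda * t)
        if y[i] * (w @ X[i]) < 1:           # margin violated: hinge-loss sub-gradient step
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:                               # only the regularization term contributes
            w = (1 - eta * lam) * w
    return w
###Output
_____no_output_____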
###Markdown
We pre-process the data to ensure compatibility with the rotation encoding and split it into the training and test datasets.
###Code
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
features = MinMaxScaler(feature_range=(0, np.pi)).fit_transform(features)
train_features, test_features, train_labels, test_labels = train_test_split(
features, labels, train_size=15, shuffle=False
)
###Output
_____no_output_____
###Markdown
We have two features in the dataset, so we set the number of qubits to the number of features in the dataset. Then we set $\tau$ to the number of steps performed during the training procedure. Please note that there is no early stopping criterion in the algorithm; it iterates over all $\tau$ steps. The last hyperparameter is $C$, a positive regularization parameter. The strength of the regularization is inversely proportional to $C$: smaller $C$ induces smaller weights, which generally helps prevent overfitting. However, due to the nature of this algorithm, some of the computation steps become trivial for larger $C$, so larger $C$ improves the computational performance of the algorithm drastically. If the data is linearly separable in feature space, $C$ should be chosen to be large; if the separation is not perfect, $C$ should be chosen smaller to prevent overfitting.
###Code
# number of qubits is equal to the number of features
num_qubits = 2
# number of steps performed during the training procedure
tau = 100
# regularization parameter
C = 1000
###Output
_____no_output_____
###Markdown
The algorithm will run using:- A statevector simulator- A quantum kernel created from `ZFeatureMap`
###Code
from qiskit import BasicAer
from qiskit.circuit.library import ZFeatureMap
from qiskit.utils import QuantumInstance, algorithm_globals
from qiskit_machine_learning.kernels import QuantumKernel
algorithm_globals.random_seed = 12345
pegasos_backend = QuantumInstance(
BasicAer.get_backend("statevector_simulator"),
seed_simulator=algorithm_globals.random_seed,
seed_transpiler=algorithm_globals.random_seed,
)
feature_map = ZFeatureMap(feature_dimension=num_qubits, reps=1)
qkernel = QuantumKernel(feature_map=feature_map, quantum_instance=pegasos_backend)
###Output
_____no_output_____
###Markdown
The implementation `PegasosQSVC` is compatible with the `scikit-learn` interfaces and has a pretty standard way of training a model. In the constructor we pass parameters of the algorithm, in this case there are a regularization hyper-parameter $C$ and a number of steps.Then we pass training features and labels to the `fit` method, which trains a models and returns a fitted classifier.Afterwards, we score our model using test features and labels.
###Code
from qiskit_machine_learning.algorithms import PegasosQSVC
pegasos_qsvc = PegasosQSVC(quantum_kernel=qkernel, C=C, num_steps=tau)
# training
pegasos_qsvc.fit(train_features, train_labels)
# testing
pegasos_score = pegasos_qsvc.score(test_features, test_labels)
print(f"PegasosQSVC classification test score: {pegasos_score}")
###Output
PegasosQSVC classification test score: 1.0
###Markdown
For visualization purposes we create a mesh grid of a predefined step that spans our minimum and maximum values we applied in MinMaxScaler. We also add some margin to the grid for better representation of the training and test samples.
###Code
grid_step = 0.2
margin = 0.2
grid_x, grid_y = np.meshgrid(
np.arange(-margin, np.pi + margin, grid_step), np.arange(-margin, np.pi + margin, grid_step)
)
###Output
_____no_output_____
###Markdown
We convert the grid to the shape compatible with the model, the shape should be `(n_samples, n_features)`.Then for each grid point we predict a label. In our case predicted labels will be used for coloring the grid.
###Code
meshgrid_features = np.column_stack((grid_x.ravel(), grid_y.ravel()))
meshgrid_colors = pegasos_qsvc.predict(meshgrid_features)
###Output
_____no_output_____
###Markdown
Finally, we plot our grid according to the labels/colors we obtained from the model. We also plot training and test samples.
###Code
import matplotlib.pyplot as plt
plt.figure(figsize=(5, 5))
meshgrid_colors = meshgrid_colors.reshape(grid_x.shape)
plt.pcolormesh(grid_x, grid_y, meshgrid_colors, cmap="RdBu", shading="auto")
plt.scatter(
train_features[:, 0][train_labels == 0],
train_features[:, 1][train_labels == 0],
marker="s",
facecolors="w",
edgecolors="r",
label="A train",
)
plt.scatter(
train_features[:, 0][train_labels == 1],
train_features[:, 1][train_labels == 1],
marker="o",
facecolors="w",
edgecolors="b",
label="B train",
)
plt.scatter(
test_features[:, 0][test_labels == 0],
test_features[:, 1][test_labels == 0],
marker="s",
facecolors="r",
edgecolors="r",
label="A test",
)
plt.scatter(
test_features[:, 0][test_labels == 1],
test_features[:, 1][test_labels == 1],
marker="o",
facecolors="b",
edgecolors="b",
label="B test",
)
plt.legend(bbox_to_anchor=(1.05, 1), loc="upper left", borderaxespad=0.0)
plt.title("Pegasos Classification")
plt.show()
import qiskit.tools.jupyter
%qiskit_version_table
###Output
_____no_output_____ |
analyses/seasonality_paper_st/comparisons/experiments_variables_table.ipynb | ###Markdown
Setup
###Code
from specific import *
###Output
_____no_output_____
###Markdown
Load the dataframes for all experiments
###Code
experiment_data = load_experiment_data(
list(experiment_name_dict),
which="data_split",
ignore=("X_train", "y_train", "y_test"),
)
###Output
_____no_output_____
###Markdown
Parse the dataframes to retrieve the variables used for each experiment
###Code
contents = {}
for exp in sort_experiments(experiment_data):
contents[exp] = shorten_features(
repl_fill_names(sort_features(experiment_data[exp]["X_test"].columns))
)
unique_vars = sort_features(
[var for var in contents["all"] if not re.search("\s.?(\d+)M", var)]
)
###Output
_____no_output_____
###Markdown
Build the matrix representing which variables are present with which lags for each of the experiments
###Code
condensed = {}
for exp, exp_vars in contents.items():
for var in unique_vars:
# Find for which lags the current variable is present (if any).
lags = [f"{get_lag(v)}M".replace("0M", "C") for v in exp_vars if var in v]
if all(
lag in lags for lag in ["C", "1M", "3M", "6M", "9M", "12M", "18M", "24M"]
):
lags = "C & all A"
else:
lags = ", ".join(lags)
condensed[(experiment_name_dict[exp], var)] = lags
df = (
pd.Series(condensed)
.unstack()
.reindex(index=[experiment_name_dict[c] for c in contents], columns=unique_vars)
)
df
print(df.to_latex())
###Output
_____no_output_____ |
examples/rl/rl_2sources.ipynb | ###Markdown
Run simulation
###Code
events = [{'p':1000.0},
{'t_end':0.1,'v_1':102},
{'t_end':1.0,'v_1':102},
{'t_end':2.0,'v_1':100}]
syst.simulate(events);
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(7, 7))
axes[0].plot(syst.T,syst.get_values('i'))
axes[1].plot(syst.T,syst.get_values('p'))
for ax in axes:
ax.grid()
#ax.legend()
ax.set_xlabel('Time (s)')
###Output
_____no_output_____
###Markdown
Run interactive simulation
###Code
Δt = 0.01
times = np.arange(0.0,2,Δt)
syst.initialize()
it = 0
for t in times:
v_1 = 101 + np.sin(2*np.pi*1.0*t)
events=[{'t_end':t,'v_1':v_1}]
syst.run(events)
it += 1
syst.post();
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(7, 7))
axes[0].plot(syst.T,syst.get_values('i'))
axes[1].plot(syst.T,syst.get_values('v_1'))
for ax in axes:
ax.grid()
#ax.legend()
ax.set_xlabel('Time (s)')
syst.Dt
###Output
_____no_output_____
###Markdown
Run simulation with feedback control (proportional control)
###Code
Δt = 0.1
times = np.arange(0.0,2,Δt)
syst.initialize()
K_p = 2e-3
K_i = 0.01
syst.xi = 0.0
it = 0
for t in times:
p = syst.get_value('p')
p_ref = 1000
if t>0.1:
p_ref = 1200
error = (p_ref - p)
v_1 = 101 + K_p*error + K_i*syst.xi
syst.xi += Δt*error
events=[{'t_end':t,'v_1':v_1}]
syst.run(events)
it += 1
syst.post();
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(7, 7))
axes[0].plot(syst.T,syst.get_values('p'))
axes[1].plot(syst.T,syst.get_values('v_1'))
for ax in axes:
ax.grid()
#ax.legend()
ax.set_xlabel('Time (s)')
T_ctrl = np.zeros((len(times),1))
X_ctrl = np.zeros((len(times),N_u_d*2))
U_ctrl = np.zeros((len(times),2))
X_obs = np.zeros((len(times),N_x_d))
Z_obs = np.zeros((len(times),N_z_o))
z_ref = np.copy(np.array([[p_t_ref],
[v_1_ref]]))
u_d = control(syst,z_ref)
p_m_ref = u_d[0,0]
v_f = u_d[1,0]
###Output
_____no_output_____
###Markdown
Run simulation with feedback control (PI control)
###Code
Δt = 0.1
times = np.arange(0.0,2,Δt)
syst.initialize()
K_p = 2e-3
K_i = 0.01
syst.xi = 0.0
it = 0
for t in times:
p = syst.get_value('p')
p_ref = 1000
if t>0.1:
p_ref = 1200
error = (p_ref - p)
v_1 = 101 + K_p*error + K_i*syst.xi
syst.xi += Δt*error
events=[{'t_end':t,'v_1':v_1}]
syst.run(events)
it += 1
syst.post();
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(7, 7))
axes[0].plot(syst.T,syst.get_values('p'))
axes[1].plot(syst.T,syst.get_values('v_1'))
for ax in axes:
ax.grid()
#ax.legend()
ax.set_xlabel('Time (s)')
###Output
_____no_output_____
###Markdown
PI control design
###Code
import pydae.ssa as ssa
import scipy.signal as sctrl
###Output
_____no_output_____
###Markdown
Linearized DAE system\begin{eqnarray}\Delta \dot x_p &=& A_p \Delta x_p + B_p \Delta u_p \\\Delta z_p &=& C_p \Delta x_p + D_p \Delta u_p \\\end{eqnarray} Discretized system\begin{eqnarray}\Delta x_d^{k+1} &=& A_d \Delta x_d^{k} + B_d \Delta u_d^{k} \\\Delta z_d^{k} &=& C_d \Delta x_d^{k} + D_d \Delta u_d^{k} \\\end{eqnarray} Dynamic extension\begin{eqnarray}\Delta x_i^{k+1} &=& \Delta x_i^{k} + \Delta t \left(\Delta z_c- \Delta z_c^\star \right) \\\end{eqnarray}with: \begin{eqnarray}\Delta z_c &=& z_c - z_c^0 \\\Delta z_c^\star &=& z_c^\star -z_c^0 \end{eqnarray} Extended system\begin{equation}\Delta x_e^{k+1} = \begin{bmatrix}A_d & 0\\\Delta t C_c & I\end{bmatrix}\begin{bmatrix}\Delta x_d^{k}\\\Delta x_i^{k}\end{bmatrix}+\begin{bmatrix}B_d \\0\end{bmatrix}\Delta u_d^{k} \end{equation} \begin{equation}A_e = \begin{bmatrix}A_d & 0\\\Delta t C_c & I\end{bmatrix}\;\;\;\;\;B_e = \begin{bmatrix}B_d \\0\end{bmatrix}\end{equation} State feedback control\begin{equation}\Delta u_d^{k} = -K_e \Delta x_e^{k}\end{equation}$K_e$ can be obtained with LQR: K_e = lqr(A_e, B_e, Q, R)
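In case `pydae.ssa.dlqr` (used in the next cell) is not available, a minimal sketch of an equivalent discrete-time LQR gain computation with `scipy` is shown below; the matrices $A_e$, $B_e$, $Q$, $R$ are the ones defined above, and the function name `dlqr_sketch` is only an illustrative assumption.
###Code
# Hedged sketch: discrete-time LQR gain via the discrete algebraic Riccati equation.
import numpy as np
from scipy.linalg import solve_discrete_are

def dlqr_sketch(A, B, Q, R):
    P = solve_discrete_are(A, B, Q, R)                   # solve the DARE
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)    # K = (R + B'PB)^-1 B'PA
    eigs = np.linalg.eigvals(A - B @ K)                  # closed-loop eigenvalues
    return K, P, eigs
###Output
_____no_output_____
###Markdown
The `ssa.dlqr` helper from pydae used below performs the equivalent computation.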
###Code
Δt = 0.1
x_d_ctrl_list = ['i'] # states to consider in the reduction
z_ctrl_list = ['p'] # outputs to consider in the controller
z_ctrl_idxs = [syst.outputs_list.index(item) for item in z_ctrl_list]
syst.Δt = Δt
## Calculate equilibrium point
syst.initialize([{'p':1e3}])
ssa.eval_ss(syst)
# linear continuous plant
A_p = syst.A
B_p = syst.B
C_p = syst.C
D_p = syst.D
# plant discretization
A_d,B_d,C_d,D_d,Dt = sctrl.cont2discrete((A_p,B_p,C_p,D_p),Δt,method='zoh')
N_z_d,N_x_d = C_d.shape # discretized plant dimensions
N_x_d,N_u_d = B_d.shape
# convenient matrices
O_ux = np.zeros((N_u_d,N_x_d))
O_xu = np.zeros((N_x_d,N_u_d))
O_uu = np.zeros((N_u_d,N_u_d))
I_uu = np.eye(N_u_d)
syst.A_d = A_d
syst.B_d = B_d
# Controller ##################################################################################
C_c = C_d[z_ctrl_idxs,:]
D_c = D_d[z_ctrl_idxs,:]
N_z_c,N_x_c = C_c.shape
O_ux = np.zeros((N_u_d,N_x_d))
O_xu = np.zeros((N_x_d,N_u_d))
O_uu = np.zeros((N_u_d,N_u_d))
I_uu = np.eye(N_u_d)
# discretized plant:
# Δx_d = A_d*Δx_d + B_d*Δu_d
# Δz_c = C_c*Δx_d + D_c*Δu_d
# dynamic extension:
# Δx_d = A_d*Δx_d + B_d*Δu_d
# Δx_i = Δx_i + Δt*(Δz_c-Δz_c_ref) = Δx_i + Δt*C_c*Δx_d - Dt*Δz_c_ref
# Δz_c = z_c - z_c_0
# Δz_c_ref = z_c_ref - z_c_0
# (Δz_c-Δz_c_ref) = z_c - z_c_ref
A_e = np.block([
[ A_d, O_xu], # Δx_d
[ Δt*C_c, I_uu], # Δx_i
])
B_e = np.block([
[ B_d],
[ O_uu],
])
# weighting matrices
Q_c = np.eye(A_e.shape[0])
R_c = np.diag([0.1])
K_c,S_c,E_c = ssa.dlqr(A_e,B_e,Q_c,R_c)
E_c = np.log(E_c)/Δt
E_c
###Output
_____no_output_____
###Markdown
PI control validation
###Code
times = np.arange(0.0,2,Δt)
syst.initialize()
i_0 = syst.get_value('i')
K_p = K_c[0,0]
K_i = K_c[0,1]
syst.Δxi = 0.0
u_d_0 = syst.get_value('v_1')
p_0 = syst.get_value('p')
it = 0
for t in times:
p = syst.get_value('p')
i = syst.get_value('i')
Δx_d = i - i_0
Δx_i = syst.Δxi
p_ref = p_0
if t>0.1:
p_ref = 1200
error = (p - p_ref)
Δx_e = np.block([Δx_d, Δx_i]).T
Δu_d = -K_c @ Δx_e
u_d = Δu_d + u_d_0
syst.Δxi += Δt*error
v_1 = u_d
events=[{'t_end':t,'v_1':v_1}]
syst.run(events)
it += 1
syst.post();
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(7, 7))
axes[0].plot(syst.T,syst.get_values('p'))
axes[1].plot(syst.T,syst.get_values('v_1'))
for ax in axes:
ax.grid()
#ax.legend()
ax.set_xlabel('Time (s)')
syst.initialize()
K_p = 2e-3
it = 0
for t in times:
p = syst.get_value('p')
p_ref = 1000
if t>0.1:
p_ref = 1200
v_1 = 101 + K_p*(p_ref - p)
events=[{'t_end':t,'v_1':v_1}]
syst.run(events)
it += 1
syst.post();
###Output
_____no_output_____
###Markdown
PI control with delay design
###Code
import pydae.ssa as ssa
import scipy.signal as sctrl
###Output
_____no_output_____
###Markdown
Linearized DAE system\begin{eqnarray}\Delta \dot x_p &=& A_p \Delta x_p + B_p \Delta u_p \\\Delta z_p &=& C_p \Delta x_p + D_p \Delta u_p \\\end{eqnarray} Discretized system\begin{eqnarray}\Delta x_d^{k+1} &=& A_d \Delta x_d^{k} + B_d \Delta u_d^{k} \\\Delta z_d^{k} &=& C_d \Delta x_d^{k} + D_d \Delta u_d^{k} \\\end{eqnarray} Discretized system from control point of view\begin{eqnarray}\Delta x_d^{k+1} &=& A_d \Delta x_d^{k} + B_d \Delta x_r^{k} \\\Delta z_c^{k} &=& C_c \Delta x_d^{k} + D_c \Delta x_r^{k} \\\end{eqnarray} Delay extension\begin{eqnarray}\Delta x_r^{k+1} &=& \Delta u_r^{k} \\\end{eqnarray} Dynamic extension\begin{eqnarray}\Delta x_i^{k+1} &=& \Delta x_i^{k} + \Delta t \left(\Delta z_c- \Delta z_c^\star \right) \\\end{eqnarray}with: \begin{eqnarray}\Delta z_c &=& z_c - z_c^0 \\\Delta z_c^\star &=& z_c^\star -z_c^0 \\\Delta z_c &=& C_c \Delta x_d^{k} + D_c \Delta u_d^{k} \end{eqnarray} Extended system\begin{eqnarray}\Delta x_d^{k+1} &=& A_d \Delta x_d^{k} + B_d \Delta x_r^{k} \\\Delta x_r^{k+1} &=& \Delta u_r^{k} \\\Delta x_i^{k+1} &=& \Delta x_i^{k} + \Delta t C_c \Delta x_d^k - \Delta t \Delta z_c^\star \\\end{eqnarray}\begin{equation}\Delta x_e^{k+1} = \begin{bmatrix}A_d & B_d & 0 \\0 & 0 & 0 \\\Delta t C_c& \Delta t D_c & I \end{bmatrix}\begin{bmatrix}\Delta x_d^{k}\\\Delta x_r^{k}\\\Delta x_i^{k}\end{bmatrix}+\begin{bmatrix}0\\I \\0\end{bmatrix}\Delta u_r^{k} \end{equation} \begin{equation}A_e = \begin{bmatrix}A_d & B_d & 0\\0 & 0 & 0\\\Delta t C_c & 0 & I\end{bmatrix}\;\;\;\;\;B_e = \begin{bmatrix}0 \\I \\0\end{bmatrix}\end{equation} State feedback control\begin{equation}\Delta u_r^{k} = -K_e \Delta x_e^{k}\end{equation}$K_e$ can be obtained with LQR: K_e = lqr(A_e, B_e, Q, R) Control dynamics\begin{equation}\Delta x_{ctrl}^{k+1} = \begin{bmatrix}0 & 0 \\0 & I \end{bmatrix}\begin{bmatrix}\Delta x_r^{k}\\\Delta x_i^{k}\end{bmatrix}+\begin{bmatrix}I \\0\end{bmatrix}\Delta u_d^{k} \end{equation} \begin{equation}A_{ctrl} = \begin{bmatrix} 0 & 0\\ 0 & I\end{bmatrix}\;\;\;\;\;B_{ctrl} = \begin{bmatrix}I \\0\end{bmatrix}\end{equation}
###Code
Δt = 0.01
x_d_ctrl_list = ['i'] # states to consider in the reduction
z_ctrl_list = ['p'] # outputs to consider in the controller
z_ctrl_idxs = [syst.outputs_list.index(item) for item in z_ctrl_list]
syst.Δt = Δt
## Calculate equilibrium point
syst.initialize([{'p':1e3}])
ssa.eval_ss(syst)
# linear continuous plant
A_p = syst.A
B_p = syst.B
C_p = syst.C
D_p = syst.D
# plant discretization
A_d,B_d,C_d,D_d,Dt = sctrl.cont2discrete((A_p,B_p,C_p,D_p),Δt,method='zoh')
N_z_d,N_x_d = C_d.shape # discretized plant dimensions
N_x_d,N_u_d = B_d.shape
# convenient matrices
O_ux = np.zeros((N_u_d,N_x_d))
O_xu = np.zeros((N_x_d,N_u_d))
O_uu = np.zeros((N_u_d,N_u_d))
I_uu = np.eye(N_u_d)
syst.A_d = A_d
syst.B_d = B_d
# Controller ##################################################################################
C_c = C_d[z_ctrl_idxs,:]
D_c = D_d[z_ctrl_idxs,:]
N_z_c,N_x_c = C_c.shape
O_ux = np.zeros((N_u_d,N_x_d))
O_xu = np.zeros((N_x_d,N_u_d))
O_uu = np.zeros((N_u_d,N_u_d))
I_uu = np.eye(N_u_d)
# discretized plant:
# Δx_d = A_d*Δx_d + B_d*Δu_d
# Δz_c = C_c*Δx_d + D_c*Δu_d
# dynamic extension:
# Δx_d = A_d*Δx_d + B_d*Δu_d
# Δx_i = Δx_i + Δt*(Δz_c-Δz_c_ref) = Δx_i + Δt*C_c*Δx_d - Dt*Δz_c_ref
# Δz_c = z_c - z_c_0
# Δz_c_ref = z_c_ref - z_c_0
# (Δz_c-Δz_c_ref) = z_c - z_c_ref
A_e = np.block([
[ A_d, B_d, O_xu], # Δx_d
[ O_ux, O_uu, O_uu], # Δx_r
[ Δt*C_c, Δt*D_c, I_uu], # Δx_i
])
B_e = np.block([
[ O_xu],
[ I_uu],
[ O_uu],
])
A_ctrl = A_e[N_x_d:,N_x_d:]
B_ctrl = B_e[N_x_d:]
# weighting matrices
Q_c = np.eye(A_e.shape[0])
R_c = np.diag([1])
K_c,S_c,E_c = ssa.dlqr(A_e,B_e,Q_c,R_c)
E_c = np.log(E_c)/Δt
syst.A_ctrl = A_ctrl
syst.B_ctrl = B_ctrl
syst.K_c = K_c
syst.N_x_d = N_x_d # number of plant states
syst.N_u_d = N_u_d # number of plant inputs
syst.N_z_c = N_z_c # number of plant outputs considered for the controller
K_c = np.array([[0.23551695, 1.35717384, 0.01565326]])
###Output
_____no_output_____
###Markdown
PI control validation
###Code
times = np.arange(0.0,10,Δt)
syst.initialize()
x_0 = syst.get_x()
syst.Δx_ctrl = np.zeros((N_u_d+N_z_c,1))
syst.Δu_d = np.zeros((N_u_d,1))
u_d_0 = syst.get_value('v_1')
x_r_0 = u_d_0
p_0 = syst.get_value('p')
it = 0
for t in times:
p = syst.get_value('p')
Δx_d = syst.get_x() - x_0
Δx_ctrl = np.copy(syst.Δx_ctrl)
Δx_r = Δx_ctrl[:N_u_d]
Δx_i = Δx_ctrl[N_u_d:]
Δu_d = np.copy(syst.Δu_d)
p_ref = p_0
if t>=0.1:
p_ref = 1200
Δz_c = np.block([[p - p_ref]])
Δx_e = np.block([[Δx_d], [Δx_ctrl]])
#x_r += Δu_d
#Δx_i += Δt*Δz
#Δx_ctrl = np.block([Δx_r, Δx_i]).T
# control dynamics
Δu_d = -K_c @ Δx_e
Δx_r_k1 = Δu_d
Δx_i_k1 = Δx_i + Δt*Δz_c
#u_d = Δu_d + u_d_0
syst.Δx_ctrl[:syst.N_u,:] = np.copy(Δx_r_k1)
syst.Δx_ctrl[syst.N_u:,:] = np.copy(Δx_i_k1)
syst.Δu_d = np.copy(Δu_d)
#yst.Δx_ctrl = np.copy(Δx_ctrl)
v_1 = Δx_r + x_r_0
# v_1 = u_d
events=[{'t_end':t,'v_1':v_1}]
syst.run(events)
it += 1
syst.post();
B_d
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(7, 7))
axes[0].plot(syst.T,syst.get_values('p'))
axes[1].plot(syst.T,syst.get_values('v_1'))
for ax in axes:
ax.grid()
#ax.legend()
ax.set_xlabel('Time (s)')
Δt
Δx_e = np.zeros((N_x_d+N_u_d+N_z_c,1))
syst.Δx_ctrl = np.zeros((N_u_d+N_z_c,1))
syst.Δu_d = np.zeros((N_u_d,1))
u_d_0 = syst.get_value('v_1')
x_r_0 = u_d_0
p_0 = 1
X_e = np.zeros((len(times),Δx_e.shape[0]))
it = 0
for t in times:
X_e[it,:] = Δx_e.T
p_ref = p_0
if t>=0.1:
p_ref = 1.2
C_e = np.block([[C_c , 0 , 0]])
Δz_c = np.block([[C_e@Δx_e - (p_ref-p_0)]])
B_z = np.block([[0],[0],[-Δt]])
Δu_r = -K_c @Δx_e
Δx_e = A_e@Δx_e + B_e@Δu_r + B_z@Δz_c
it += 1
syst.post();
C_e
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(7, 7))
axes[0].plot(times,X_e)
for ax in axes:
ax.grid()
#ax.legend()
ax.set_xlabel('Time (s)')
Ts_control = 0.1
Δt = Ts_control
x_d_ctrl_list = ['i'] # states to consider in the reduction
z_ctrl_list = ['p'] # outputs to consider in the controller
z_obs_list = ['p'] # outputs to consider in the observer
syst.Δt = Δt
## Calculate equilibrium point
events=[{'p':1000}]
syst.initialize(events,xy0=1)
print(syst.get_value('p'))
ssa.eval_ss(syst)
z_ctrl_idxs = [syst.outputs_list.index(item) for item in z_ctrl_list]
z_obs_idxs = [syst.outputs_list.index(item) for item in z_obs_list]
# linear continuous plant
A_p = syst.A
B_p = syst.B
C_p = syst.C
D_p = syst.D
N_z_p,N_x_p = C_p.shape
N_x_p,N_u_p = B_p.shape
x_ctrl_keep_idx = [syst.x_list.index(item) for item in x_d_ctrl_list]
x_ctrl_elim_idx = list(set(range(N_x_p)) - set(x_ctrl_keep_idx))
# Reduction and discretization ##################################################################
if len(x_ctrl_elim_idx) == 0:
# without reduction:
A_d,B_d,C_d,D_d,Dt = sctrl.cont2discrete((A_p,B_p,C_p,D_p),Ts_control,method='zoh')
T_r = np.eye(N_x_p)
else:
# with reduction:
sys = ctrl.ss(A_p, B_p, C_p, D_p)
T_r =np.eye(N_x_p)
rsys = ctrl.modred(sys, x_ctrl_elim_idx, method='truncate')
A_pr,B_pr,C_pr,D_pr = rsys.A,rsys.B,rsys.C,rsys.D
A_d,B_d,C_d,D_d,Dt = sctrl.cont2discrete((A_pr,B_pr,C_pr,D_pr),Ts_control,method='zoh')
T_r = np.delete(np.eye(N_x_p),x_ctrl_elim_idx,axis=0)
N_z_d,N_x_d = C_d.shape
N_x_d,N_u_d = B_d.shape
syst.T_r = T_r # Full linear discrete states to reduced (truncated) states
syst.A_d = A_d
syst.B_d = B_d
# Controller ##################################################################################
C_c = C_d[z_ctrl_idxs,:]
D_c = D_d[z_ctrl_idxs,:]
N_z_c,N_x_c = C_c.shape
O_ux = np.zeros((N_u_d,N_x_d))
O_xu = np.zeros((N_x_d,N_u_d))
O_uu = np.zeros((N_u_d,N_u_d))
I_uu = np.eye(N_u_d)
# discretized plant:
# Δx_d = A_d*Δx_d + B_d*Δu_d
# Δz_c = C_c*Δx_d + D_c*Δu_d
# delay in the input:
# Δx_d = A_d*Δx_d + B_d*Δx_r
# Δz_c = C_c*Δx_c
# Δx_r = Δu_d
# dynamic extension:
# Δx_d = A_d*Δx_d + B_d*Δx_r
# Δx_r = Δu_d
# Δx_i = Δx_i + Δt*(Δz_c-Δz_c_ref) = Δx_i + Δt*C_c*Δx_d - Dt*Δz_c_ref
# Δz_c = z_c - z_c_0
# Δz_c_ref = z_c_ref - z_c_0
# (Δz_c-Δz_c_ref) = z_c - z_c_ref
A_e = np.block([
[ A_d, B_d, O_xu], # Δx_d
[ O_ux, O_uu, O_uu], # Δx_r
[ Δt*C_c, O_uu, I_uu], # Δx_i
])
B_e = np.block([
[ O_xu],
[ I_uu],
[ O_uu],
])
# weighting matrices
Q_c = np.eye(A_e.shape[0])
R_c = np.diag([1])
K_c,S_c,E_c = ssa.dlqr(A_e,B_e,Q_c,R_c)
E_c = np.log(E_c)/Δt
syst.x_ctrl_keep_idx = x_ctrl_keep_idx
syst.x_ctrl_elim_idx = x_ctrl_elim_idx
syst.z_ctrl_list = z_ctrl_list
syst.N_z_c = N_z_c
syst.N_u_d = N_u_d
syst.x_ctrl = np.zeros((N_u_d+N_z_d,1))
syst.K_c = K_c
syst.C_c = C_c
syst.D_c = D_c
# Observer ###########################################################
# discretized plant:
# Dx_d = A_d*Dx_d + B_d*Du_d
# z_o = C_o*Dx_d + D_o*Du_d
# x_o = A_d*x_o + B_d*u_d + L_o*(z_o - C_o*x_o - D_o*Du_d)
C_o = C_d[z_obs_idxs,:]
D_o = D_d[z_obs_idxs,:]
N_z_o = C_o.shape[0]
Q_o = np.eye(A_d.shape[0])
R_o = np.diag([1]*N_z_o)
K_o_T,S_o,E_o = ssa.dlqr(A_d.T,C_o.T,Q_o,R_o)
K_o = K_o_T.T
syst.K_o = K_o
syst.C_o = C_o
syst.D_o = D_o
syst.z_obs_list = z_obs_list
syst.N_z_o = N_z_o
syst.x_obs = np.zeros((N_x_d,1))
print('damp_ctrl',-E_c.real/np.abs(E_c))
print('damp_obs',-E_o.real/np.abs(E_o))
E_c
def control(syst,z_ref):
Δt = syst.Δt # disctretization time
T_r = syst.T_r # Full linear discrete states to reduced (truncated) states
x_d = T_r @ syst.struct[0].x # plant dynamic states (discrete)
x_d_0 = syst.x_d_0 # plant initial steady state (discrete)
Δx_d = x_d - x_d_0 # plant dynamic states increment (discrete)
x_r_0 = syst.x_r_0 # initial delays values
Δx_e = np.copy(syst.Δx_e) # extended plant dynamic states increment (discrete)
u_d = np.copy(syst.u_d) # plant inputs
u_d_0 = syst.u_d_0 # plant initial steady inputs (discrete)
Δu_d = syst.Δu_d # plant inputs increment (discrete)
z_c_0 = np.copy(syst.z_c_0) # controller considered initial plant ouputs
z_o_0 = np.copy(syst.z_o_0) # observer considered initial plant ouputs
A_d = syst.A_d # plant A matrix without delay and integrators extension (discrete)
B_d = syst.B_d # plant B matrix without delay and integrators extension (discrete)
C_c = syst.C_c # controller considered plant C matrix (discrete)
D_c = syst.D_c # controller considered plant D matrix (discrete)
C_o = syst.C_o # observer considered plant C matrix (discrete)
D_o = syst.D_o # observer considered plant D matrix (discrete)
N_x_d,N_u_d = B_d.shape # dimensions
N_z_c,N_x_c = C_c.shape
N_z_o,N_x_o = C_o.shape
K_c = syst.K_c # controller gains matrix
K_o = syst.K_o # observer gains matrix
Δx_c = np.copy(syst.x_ctrl) # controler states
Δx_r = Δx_c[:syst.N_u_d,:] # delay discrete states
Δx_i = Δx_c[syst.N_u_d:,:] # integrators discrete states
Δx_o = syst.x_obs # observer states
# outputs to control
z_ctrl_values_list = []
for item in syst.z_ctrl_list:
z_ctrl_values_list += [syst.get_value(item)]
z_c = np.array(z_ctrl_values_list).reshape(N_z_c,1)
Δz_c = (z_c-z_c_0) - (z_ref-z_c_0)
# outputs for observer
z_obs_values_list = []
for item in syst.z_obs_list:
z_obs_values_list += [syst.get_value(item)]
z_o = np.array(z_obs_values_list).reshape(N_z_o,1)
Δz_o = z_o - z_o_0
# control dynamics
Δx_r_k1 = Δu_d
Δx_i_k1 = Δx_i + Δt*Δz_c
# control law
Δu_d = -K_c @ Δx_e
# observer dynamics
Δx_o_m1 = A_d @ Δx_o + B_d@Δu_d + K_o @ (Δz_o - C_o @ Δx_o - D_o @ Δu_d)
# save statates for next step
syst.x_ctrl[:syst.N_u,:] = np.copy(Δx_r_k1)
syst.x_ctrl[syst.N_u:,:] = np.copy(Δx_i_k1)
syst.x_obs = np.copy(Δx_o_m1)
if syst.observer == False:
syst.Δx_e = np.copy(np.block([[Δx_d],[Δx_r_k1],[Δx_i_k1]]))
if syst.observer == True:
syst.Δx_e = np.copy(np.block([[Δx_o_m1],[Δx_r_k1],[Δx_i_k1]]))
syst.Δu_d = Δu_d
# outputs for post-processing
syst.z_obs = C_o @ Δx_o + D_o @ Δu_d
return Δx_r_k1 + x_r_0
B_d
###Output
_____no_output_____
###Markdown
Initialization
###Code
times = np.arange(0.0,20,Δt)
## Calculate initial references
events=[{'p':1000}]
syst.initialize(events,xy0=1.0)
# initial inputs
u_values_list = []
for item in syst.u_run_list:
u_values_list += [syst.get_value(item)]
u_d_0 = np.array(u_values_list).reshape(syst.N_u_d,1)
# initial states
x_d_0 = T_r @ syst.struct[0].x # initial plant states
x_r_0 = u_d_0 # initial delay states
x_i_0 = np.zeros((syst.N_u_d,1)) # initial integrator states
Δx_e = np.block([[x_d_0*0],[x_r_0*0],[x_i_0*0]])
# initial outputs
z_ctrl_values_list = []
for item in syst.z_ctrl_list:
z_ctrl_values_list += [syst.get_value(item)]
z_c_0 = np.array(z_ctrl_values_list).reshape(N_z_c,1)
# outputs for observer
z_obs_values_list = []
for item in syst.z_obs_list:
z_obs_values_list += [syst.get_value(item)]
z_o_0 = np.array(z_obs_values_list).reshape(N_z_o,1)
syst.Δx_e = Δx_e
syst.Δu_d = np.zeros((syst.N_u_d,1))
syst.x_d_0 = x_d_0
syst.x_r_0 = x_r_0
syst.u_d_0 = u_d_0
syst.z_c_0 = z_c_0
syst.z_o_0 = z_o_0
syst.u_d = np.copy(u_d_0)
syst.x_ctrl = np.block([[x_r_0],[x_i_0]])*0
syst.x_obs = np.copy(x_d_0)*0
syst.observer = False
###Output
_____no_output_____
###Markdown
Simulation
###Code
T_ctrl = np.zeros((len(times),1))
X_ctrl = np.zeros((len(times),N_u_d*2))
U_ctrl = np.zeros((len(times),2))
X_obs = np.zeros((len(times),N_x_d))
Z_obs = np.zeros((len(times),N_z_o))
it = 0
for t in times:
# references
p_ref = 1000
if t>=1:
p_ref = 1200
z_ref = np.copy(np.array([[p_ref],
]))
u_d = control(syst,z_ref)
v_1 = u_d[0,0]
events=[{'t_end':t,'v_1':v_1}]
syst.run(events)
#X_ctrl[it,:] = np.hstack((x_i.T))
U_ctrl[it,:] = u_d.T
X_ctrl[it,:] = syst.x_ctrl.T
X_obs[it,:] = syst.x_obs.T
Z_obs[it,:] = syst.z_obs.T
it += 1
syst.post();
###Output
_____no_output_____
###Markdown
Post processing
###Code
plt.close('all')
fig, axes = plt.subplots(nrows=2,ncols=2, figsize=(10, 5), frameon=False, dpi=80)
#axes[0,0].plot(syst.T, syst.get_values('omega'), label=f'$\omega$')
#axes[0,0].step(times,X_obs[:,1] + x_d_0[1], label=f'$\hat \omega$')
axes[0,0].plot(syst.T, syst.get_values('p'), label=f"$p$")
axes[1,0].plot(syst.T, syst.get_values('v_1'), label=f"$v_1$")
for ax in axes.flatten():
ax.legend(loc='best')
ax.grid(True)
###Output
_____no_output_____
###Markdown
Instantiate system
###Code
syst = rl_2sources_class()
###Output
_____no_output_____
###Markdown
Solve steady state
###Code
syst.initialize()
syst.report_x()
syst.report_y()
syst.report_u()
###Output
i = 10.00
p = 1000.00
v_1 = 101.00
|
wienerschnitzelgemeinschaft/src/Kevin/Code/ensemble_averge.ipynb | ###Markdown
Ensemble 4
filenames = ['pred_resnext50_rgb_aug_2e-4_a4_tta16.csv', 'pred_ResNext50_aug_2e-4_a5_tta16.csv', 'pred_t_ResNext50_alls_512_2e-4_a5_0532.csv']
weights = [0.4, 0.4, 0.2]
'HPAC_05_ResNext50_ensemble_4.csv'
LB = 0.586
'HPAC_val_avg_ResNext50_ensemble_2.csv'
th_weights = [0.34, 0.33, 0.33]
th = [0.51826031 0.51834734 0.44045049 0.43685385 0.41942717 0.45609431 0.46838908 0.4533698 0.56985803 0.46896371 0.48165665 0.5228005 0.46618311 0.49207598 0.52469812 0.38642988 0.56906955 0.61506067 0.49123079 0.47437382 0.48727312 0.44976784 0.45648292 0.46852538 0.59508656 0.5028526 0.49060101 0.56784335]
LB = 0.595

Ensemble 5
filenames = ['pred_resnext50_rgb_aug_2e-4_a4_tta16.csv', 'pred_ResNext50_aug_2e-4_a5_tta16.csv', 'pred_t_ResNext50_alls_512_2e-4_a5_0532.csv']
weights = [0.5, 0.25, 0.25]
'HPAC_05_ResNext50_ensemble_4.csv'
'HPAC_val_avg_ResNext50_ensemble_5a.csv'
th_weights = [0.34, 0.33, 0.33]
5a. (th_val_avg - 0.5)*1 + 0.5
###Code
# # Ensemble 4
# filenames = ['pred_resnext50_rgb_aug_2e-4_a4_tta16.csv', 'pred_ResNext50_aug_2e-4_a5_tta16.csv', \
# 'pred_t_ResNext50_alls_512_2e-4_a5_0532.csv']
# weights = [0.4, 0.4, 0.2]
# Ensemble 5
filenames = ['pred_resnext50_rgb_aug_2e-4_a4_tta16.csv', 'pred_ResNext50_aug_2e-4_a5_tta16.csv', \
'pred_t_ResNext50_alls_512_2e-4_a5_0532.csv']
weights = [0.5, 0.25, 0.25]
# 'HPAC_05_ResNext50_ensemble_5.csv' LB = 0.586
# 'HPAC_val_avg_ResNext50_ensemble_5.csv'
# th_weights = [0.34, 0.33, 0.33]
# 5a. (th_val_avg - 0.5)*1+0.5
# # Ensemble 6
# filenames = ['pred_resnext50_rgb_aug01_2e-4_a3.csv', 'pred_ResNext50_aug_2e-4_a5_tta16.csv', \
# 'pred_t_ResNext50_alls_512_2e-4_a5_0532.csv']
# weights = [0.5, 0.25, 0.25]
# Ensemble 7 Augmentation Only
filenames = ['pred_resnext50_rgb_aug01_2e-4_a3.csv','pred_resnext50_rgb_aug_2e-4_a4_tta16.csv', \
'pred_ResNext50_aug_2e-4_a5_tta16.csv']
weights = [0.4, 0.3, 0.3]
filenames = ['pred_resnext50_rgb_aug01_2e-4_a3.csv']
weights = [1.0]
# Ensemble 8
filenames = ['pred_resnext50_rgb_aug01_2e-4_a3.csv','pred_resnext50_rgb_aug_2e-4_a4_tta16.csv', \
'pred_ResNext50_aug_2e-4_a5_tta16.csv','pred_t_ResNext50_alls_512_2e-4_a5_0532.csv']
weights = [0.25, 0.25, 0.25, 0.25]
#leak_HPAC_05_ResNext50_ensemble_7 0.589
#leak_HPAC_05_ResNext50_ensemble_8 0.594
#leak_HPAC_avg_ResNext50_ensemble_8 0.576
#leak_HPAC_avg_ResNext50_ensemble_7 0.578
# Ensemble 9
filenames = ['pred_resnext50_g_aug01_2e-4_a4.csv', 'pred_resnext50_rgb_aug01_2e-4_a3.csv', \
'pred_resnext50_rgb_aug_2e-4_a4_tta16.csv', \
'pred_ResNext50_aug_2e-4_a5_tta16.csv']
weights = [0.25, 0.25, 0.25, 0.25]
# Ensemble 10
filenames = ['pred_resnext50_g_aug01_2e-4_a4.csv', 'pred_resnext50_rgb_aug01_2e-4_a3.csv']
weights = [0.4, 0.6]
# Ensemble 11
filenames = ['pred_resnext50_g_aug01_2e-4_a4.csv', 'pred_resnext50_rgb_aug01_2e-4_a3.csv', \
'pred_resnext50_rgb_aug_2e-4_a4_tta16.csv', 'pred_t_ResNext50_alls_512_2e-4_a5_0532.csv', \
'pred_ResNext50_aug_2e-4_a5_tta16.csv']
weights = [0.2, 0.2, 0.2, 0.2, 0.2]
def ensemble_pred_t(path, filenames, weights):
df_tmp = pd.read_csv( path + filenames[0], header=None)
pred_t_avg = np.zeros(df_tmp.shape)
for idx, fn in enumerate(filenames):
df_pred_t = pd.read_csv( path + fn, header=None)
pred_t_avg += weights[idx] * df_pred_t.values
return pred_t_avg
pred_t_avg = ensemble_pred_t(path,filenames,weights)
pred_t_avg[0]
def save_pred_ensemble(pred_t_avg, th=0.5, fname='HPAC_ensemble.csv'):
pred_list = []
for line in pred_t_avg:
s = ' '.join(list([str(i) for i in np.nonzero(line>th)[0]]))
pred_list.append(s)
#print(len(pred_list))
sample_df = pd.read_csv(SAMPLE)
sample_list = list(sample_df.Id)
df = pd.DataFrame({'Id':sample_list,'Predicted':pred_list})
df.to_csv(fname, header=True, index=False)
# th_val = [0.5442, 0.48696, 0.49523, 0.47363, 0.49234, 0.53067, 0.51578, 0.52985, 0.57223, 0.44015, 0.39442, 0.52157, 0.5529,
# 0.47324, 0.53367, 0.33757, 0.54152, 0.5405, 0.5094, 0.49972, 0.34266, 0.46905, 0.48302, 0.49949, 0.67206, 0.49534,
# 0.52171, 0.1]
th_val = 0.5
save_pred_ensemble(pred_t_avg, th_val, path_out + 'HPAC_05_ResNext50_ensemble_10.csv') #
###Output
_____no_output_____
###Markdown
Average thresholds
###Code
import numpy as np
val_th0 = np.loadtxt('../results/th_t_ResNext50_ext_rgby_512_4e-4_a5')
print(val_th0)
val_th1 = np.loadtxt('../results/th_val_resnext50_rgb_aug01_2e-4_a3')
print(val_th1)
val_th4 = np.loadtxt('../results/val_th_resnext50_g_aug01_2e-4_a4')
print(val_th4)
th_log05_03 = np.loadtxt('../results/th_log05_03')
print(th_log05_03)
# val_th_aug01_1 = np.loadtxt('../results/th_val_resnext50_rgb_aug01_2e-4_a3')
# print(val_th_aug01_1)
# val_th_aug01_2 = np.loadtxt('../results/HPAC_lab_resnext50_rgb_aug01_2e-4_a3')
# print(val_th_aug01_2)
# val_th_test2 = np.loadtxt('../results/th_t_ResNext50_alls_2e-4_a6')
# print(val_th_test2)
val_th2 = np.loadtxt('../results/th_val_resnext50_rgb_aug_2e-4_a4_tta16')
print(val_th2)
val_th3 = np.loadtxt('../results/th_val_ResNext50_aug_2e-4_a4_tta16')
print(val_th3)
val_th_7 = np.zeros((len(val_th1),3))
for i, val in enumerate([val_th1, val_th2, val_th3]):
val_th_7[:,i] = np.array(val)
print(val_th_7.mean(axis=-1))
print(val_th_7.max(axis=-1))
print(val_th_7.min(axis=-1))
val_th_avg = (val_th0 + val_th1 + val_th2 + val_th3)/4
print(val_th_avg)
val_th_aug01 = (val_th_aug01_1+val_th_aug01_2)/2
save_pred_ensemble(pred_t_avg, val_th_aug01, path_out + 'HPAC_val_lab_ResNext50_aug01.csv')
# val_th_avg = val_th1 * weights[0] +val_th2 * weights[1] + val_th3 * weights[2]
# print(val_th_avg)
save_pred_ensemble(pred_t_avg, val_th_7, path_out + 'HPAC_val_ResNext50_ensemble_7.csv')
# save_pred_ensemble(pred_t_avg, val_th_avg, path_out + 'HPAC_val_ResNext50_ensemble_8.csv')
save_pred_ensemble(pred_t_avg, val_th_7.max(axis=-1), path_out + 'HPAC_valmax_ResNext50_ensemble_7.csv')
save_pred_ensemble(pred_t_avg, val_th_7.min(axis=-1), path_out + 'HPAC_valmin_ResNext50_ensemble_7.csv')
#ensemble 11 #LB 0.572, th=0.5, LB 0.593
val_th_avg_11 = (val_th0 + val_th1 + val_th2 + val_th3 + val_th4)/5
print(val_th_avg_11)
save_pred_ensemble(pred_t_avg, val_th_avg_11, path_out + 'HPAC_val_avg_ResNext50_ensemble_11.csv')
###Output
[0.52319359 0.54673799 0.47649713 0.50615913 0.46993325 0.47419711
0.49483484 0.4982776 0.59298033 0.59888807 0.60769972 0.60895199
0.50433908 0.53275457 0.53148286 0.49746853 0.55790926 0.64140899
0.51004652 0.51235629 0.55710059 0.45664677 0.4898247 0.50063658
0.63733824 0.50046565 0.54752545 0.63177671]
###Markdown
majority vote
###Code
def get_predict(path_in, fn):
pred = np.zeros((11702,28))
df_tmp = pd.read_csv( path_in + fn)
null_idx = df_tmp[df_tmp['Predicted'].isnull()]['Predicted'].index
for idx,s in enumerate(df_tmp['Predicted']):
if idx in null_idx:
pred[idx,:] = np.zeros(28,dtype=np.int)
else:
label_idx = [int(i) for i in df_tmp['Predicted'][idx].split()]
pred[idx,:] = np.eye(28,dtype=np.float)[label_idx].sum(axis=0)
return pred
def m_vote(path_in, path_out, filenames, fn_voted = 'voted_5_a.csv'):
pred_sum = np.zeros((11702,28))
for fn in filenames:
pred = get_predict(path_in, fn)
pred_sum += pred
# set threshold for majority votes
th_mv = len(filenames)/2.0
th_mv = 0.5
pred_list = []
    for ns in range(pred_sum.shape[0]):  # use the accumulated votes, not the global pred_test
        s = ' '.join(list([str(i) for i in np.nonzero(pred_sum[ns,:] > th_mv)[0]]))
pred_list.append(s)
sample_df = pd.read_csv(SAMPLE)
sample_list = list(sample_df.Id)
df = pd.DataFrame({'Id':sample_list,'Predicted':pred_list})
df.to_csv(path_out+fn_voted, header=True, index=False)
return pred_sum
# fns.append('leak_HPAC_val_avg_ResNext50_ensemble_4.csv')
# fns.append('leak_HPAC_val_avg_ResNext50_ensemble_5a.csv')
# fns.append('leak_HPAC_05_ResNext50_ensemble_9.csv')
# fns.append('leak_HPAC_05_ResNext50_ensemble_10a.csv')
# fns.append('leak_HPAC_05_ResNext50_ensemble_11.csv')
# pred_test = m_vote(path_in, path_out, fns, 'voted_5_d.csv')
# th_mv = 2.5 LB = 0.592 5b
# th_mv = 1.5 LB = 0.599 5c
# th_mv = 0.5 LB = 0.602 5d
import numpy as np
import pandas as pd
path = '../submit/'
path_in = '../submit/'
SAMPLE = '../input/sample_submission.csv'
path_out = '../submit/'
fns = []
fns.append('leak_HPAC_val_avg_ResNext50_ensemble_4.csv')
fns.append('leak_HPAC_val_avg_ResNext50_ensemble_5a.csv')
fns.append('leak_HPAC_05_ResNext50_ensemble_9.csv')
fns.append('leak_HPAC_05_ResNext50_ensemble_10a.csv')
#fns.append('leak_HPAC_05_ResNext50_ensemble_11.csv')
fns.append('leak_HPAC_th_manual_3_resnext50_rgb_aug005_a4_tta16.csv')
pred_test = m_vote(path_in, path_out, fns, 'voted_5_e.csv')
test = np.nonzero(pred_test>=1.5)
test[0]
unique, counts = np.unique(test[1], return_counts=True)
unique, counts
###Output
_____no_output_____ |
BI Project.ipynb | ###Markdown
PROJECT 2: FRAUDULENT CREDIT CARD TRANSACTIONS

Context

A bank is interested in providing higher quality customer service to protect customers' financial assets. The bank has been receiving several complaints about credit card frauds from their customers, and the news media is regularly reporting about how the bank's customers are losing large amounts of money and the bank is doing nothing to stop it. This is impacting both the customers' experience and their market share. The Senior Management is asking for a deep dive into this issue. You just got hired as the Business Analyst for the bank, and they provided you with 6 months of available data (steps 0 to 179 refer to the dates). They want you to share some insights using the features in the file to determine if you can see a pattern for the fraudulent transactions. They are expecting you to provide some suggestions on how to tackle the problem.

Questions

1. Show a summary of the variable you are studying (target variable). Plot the most appropriate graph to represent this data.
2. Calculate summary statistics from the data.
3. Calculation of daily trends of transactions for different categories of variables.
4. What are your thoughts on the fraudulent transactions? Is there a threshold of the amount spent? Is there a specific ‘gender’ with a higher probability to be the victim of a fraudulent act? Or a ‘category’ of transactions with a higher chance to be fraudulent?
5. What are your recommendations to the bank's management, and describe how your solution will help regain trust from customers.
6. Any other data that you would ask the team to provide? Why?

I- Data Cleaning
###Code
# Importing our librairies
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from scipy import stats
from scipy.stats import chi2_contingency
sns.set()
plt.style.use('ggplot')
###Output
_____no_output_____
###Markdown
Data understanding
###Code
# Loading our dataset
customer_data = pd.read_csv('../BI Project 2/dataset/bs140513_032310.csv')
customer_data1 = pd.read_csv('../BI Project 2/dataset/bsNET140513_032310.csv')
customer_data.head()
###Output
_____no_output_____
###Markdown
We can see that a lot of the values in our dataset are wrapped in quotation marks. We will remove them by creating a function to automate the process.
###Code
customer_data.info()
def remove_quotes (df,list1) :
'''
This function allows us to remove the quotations marks in the data.
@df The dataset
@list1 The list of the columns we will remove the variable
'''
try :
for i in list1 :
y = pd.DataFrame(df[i].str.split("'",2).tolist(),columns= ['l','age','l'])
y = y.iloc[:,[1]]
df[i] = y
y= 0
except (KeyError , AssertionError , ValueError):
customer_data = pd.read_csv('../BI Project 2/dataset/bs140513_032310.csv')
###Output
_____no_output_____
###Markdown
II- Data Transformation for analysis
###Code
def age_buckets(x):
''' Function created to categorize Age
'''
if x == '0':
return '18(-)'
elif x == '1' :
return '19-25'
elif x == '2':
return '26-35'
elif x =='3':
return '36-45'
elif x == '4':
return '46-55'
elif x == '5':
return '56-65'
elif x == '6':
return '65(+)'
else :
return 'Unknown'
customer_data['Agegroup'] = customer_data.age.apply(age_buckets)
remove_quotes(df = customer_data, list1 = ['customer','age','gender','zipcodeOri','merchant','zipMerchant','category'])
customer_data
customer_data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 594643 entries, 0 to 594642
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 step 594643 non-null int64
1 customer 594643 non-null object
2 age 594643 non-null object
3 gender 594643 non-null object
4 zipcodeOri 594643 non-null object
5 merchant 594643 non-null object
6 zipMerchant 594643 non-null object
7 category 594643 non-null object
8 amount 594643 non-null float64
9 fraud 594643 non-null int64
10 Agegroup 594643 non-null object
dtypes: float64(1), int64(2), object(8)
memory usage: 49.9+ MB
###Markdown
III- Exploratory Data Analysis 1- Show a summary of the variable you are studying (target variable). Plot the most appropriate graph to represent this data.
###Code
fraud = customer_data['fraud'].value_counts().to_frame()
fraud['Percent'] = customer_data['fraud'].value_counts(normalize = True).to_frame()
fraud
def generate_barchart(data, title ="",abs_value ="Total",rel_value="Percent",figsize =(15,6)):
'''
This function will help us generate bar chart with data label
'''
plt.figure(figsize=figsize)
axes = sns.barplot(data=data,x=data.index,y=abs_value)
i=0
for tot, perc in zip(data[abs_value],data[rel_value]):
axes.text(i ,
tot/2,
str(np.round(perc*100,2))+ "%",
fontdict=dict(color='White',fontsize=12,horizontalalignment="center")
)
axes.text(i,
tot+ 3,
str(tot),
fontdict=dict(color='blue',fontsize=12,horizontalalignment="center")
)
i+=1
plt.title(title)
sns.despine(left=True, bottom=True)
change_width(axes,0.35)
plt.show()
def change_width(ax, new_value) :
'''
This function is created to improve the looks of our barchart
'''
for patch in ax.patches :
current_width = patch.get_width()
diff = current_width - new_value
# we change the bar width
patch.set_width(new_value)
# we recenter the bar
patch.set_x(patch.get_x() + diff * .5)
generate_barchart(fraud,title = 'Transactions of the customers', abs_value="fraud", rel_value="Percent")
###Output
_____no_output_____
###Markdown
This bar chart shows the percentage of customer transactions that are fraudulent versus legitimate.
###Code
customer_data[['amount','fraud']].describe()
###Output
_____no_output_____
###Markdown
Correlation table between Amount and Fraud
###Code
display(customer_data[['amount','fraud']].corr())
sns.heatmap(customer_data[['amount','fraud']].corr())
###Output
_____no_output_____
###Markdown
2 - Descriptive summary of our data.
###Code
#Creating a Graph for the non fraudulent transactions
m = customer_data[ customer_data['fraud'] == 0]
display(m['amount'].describe())
sns.histplot(m['amount'], kde= True)
plt.xlim(0,500)
#Creating a graph for the fraudulent transactions
v = customer_data[ customer_data['fraud'] == 1]
display(v['amount'].describe())
sns.histplot(v['amount'], kde= True)
plt.xlim(0,2000)
# creating a pivot table to shows the transaction by category
result = customer_data.pivot_table(values = 'amount', index = 'step', columns = 'category', aggfunc= 'sum').fillna(0)
print('Summary of the amount of transactions money by category')
display(result.describe())
def prob_category(data=customer_data,col="category", abs_value ="Total",rel_value ="Percent",show_plot=False, title=""):
# absolute value
res1 = data[col].value_counts().to_frame()
res1.columns = [abs_value]
res2 = data[col].value_counts(normalize=True).to_frame()  # use the data parameter rather than the global customer_data
res2.columns = [rel_value]
if not show_plot:
return pd.concat([res1,res2],axis=1)
else:
result = pd.concat([res1,res2],axis=1)
generate_barchart(data=result, title =title,abs_value =abs_value,rel_value=rel_value,figsize =(15,6))
return result
prob_category(customer_data, col= "category",show_plot=True)
###Output
_____no_output_____
###Markdown
Victims of fraudulent transactions ?
###Code
age_table = pd.crosstab(customer_data.Agegroup, customer_data.fraud)
age_table['Percent'] = age_table[1]/7200
age_table
generate_barchart(age_table,title = 'Victims of Fraudulent Transactions by age Group', abs_value=1, rel_value="Percent")
###Output
_____no_output_____
###Markdown
3- Calculations of daily trends of transactions for different categories of variables
###Code
fraud_gender_f = customer_data[(customer_data['fraud'] == 1) & (customer_data['gender'] == 'F') ][['step','gender','amount']]
fraud_gender_m = customer_data[(customer_data['fraud'] == 1) & (customer_data['gender'] == 'M') ][['step','gender','amount']]
fraud_gender_f = fraud_gender_f.pivot_table(index = 'step', columns = 'gender', aggfunc = 'sum')
fraud_gender_m = fraud_gender_m.pivot_table(index = 'step', columns = 'gender', aggfunc = 'sum')
gender_transactions = customer_data.pivot_table(values = 'amount', index = 'step', columns = 'gender', aggfunc= 'sum')
gender_transactions = gender_transactions.iloc[:,[1,2]]
plt.figure(figsize=(20, 10))
plt.plot(gender_transactions)
plt.plot(fraud_gender_f,linestyle = '--')
plt.plot(fraud_gender_m, linestyle = '--')
plt.title('Daily trends of transactions by Gender')
plt.xlabel('Day')
plt.ylabel('Amount')
plt.legend(['F','M','Female Fraud','Male Fraud'])
plt.show()
###Output
_____no_output_____
###Markdown
4 - Is there a specific ‘gender’ with a higher probability to be the victim of a fraudulent act ?
###Code
gender_table = pd.crosstab(customer_data.fraud, customer_data.gender, margins=True).T
display(gender_table)
gender_table.iloc[:-1,[1]].plot(kind='barh')
plt.title('Number of Fraud by gender')
###Output
_____no_output_____
###Markdown
Looking at the graph, women appear to be victims of fraud more often, but we need a hypothesis test to confirm it.
###Code
display(gender_table)
print (" H0 : The variables are independent \n")
print (" H1 : The variables are dependent \n")
stat, p, dof, expected = chi2_contingency(gender_table)
alpha = 0.05
print("p value is " + str(p))
if p <= alpha:
print('Dependent (reject H0)')
else:
print('Independent (H0 holds true)')
###Output
_____no_output_____
###Markdown
To confirm that women are the most likely victims of fraud, we apply Bayes' formula to our hypothesis.
###Code
gender_table['Percent'] = gender_table[1]/7200
display(gender_table)
#
#Probabilty to be a woman and be victim of fraud
probability_female = customer_data[customer_data['gender'] == 'F'].shape[0]/ customer_data.shape[0]
probability_to_be_victim = 4758/7200
probability_to_be_victim_and_female = probability_female * probability_to_be_victim /(probability_female * probability_to_be_victim + (1-probability_female)*(1-probability_to_be_victim))
print("There's a", np.round(probability_to_be_victim_and_female*100,2), "% probability that a customer is a victim of fraud given that she is a woman")
###Output
_____no_output_____
###Markdown
Category of transactions with higher chance to be fraudulent ?
###Code
#Creating a pivot table for fraud and category
fraudulent_table = customer_data.pivot_table(index=["category"], columns = "fraud", values = 'customer', aggfunc = 'count',margins = True)
fraudulent_table= fraudulent_table.fillna(0).sort_values(by= 1)
display(fraudulent_table.iloc[:-1,1].sort_values(ascending = False))
fraudulent_table.iloc[:-1,1].T.plot(kind='barh')
print (" H0 : The variables are independent \n")
print (" H1 : The variables are dependent \n")
stat, p, dof, expected = chi2_contingency(fraudulent_table)
alpha = 0.05
print("p value is " + str(p))
if p <= alpha:
print('Dependent (reject H0)')
else:
print('Independent (H0 holds true)')
probability_sport = customer_data[customer_data['category'] == 'es_sportsandtoys'].shape[0]/ customer_data.shape[0]
probability_to_be_victim = 1982/7200
probability_to_be_victim_and_sport = probability_sport * probability_to_be_victim /(probability_sport * probability_to_be_victim + (1-probability_sport)*(1-probability_to_be_victim))
print("There's a", np.round(probability_to_be_victim_and_sport*100,2), "% probability that a person is a victim of fraud given that the transaction was made in the sports and toys category")
e = pd.crosstab(customer_data.step,customer_data.fraud, margins = True)
display(e)
e.iloc[:,1].value_counts()
###Output
_____no_output_____
###Markdown
PROJECT 2: FRAUDULENT CREDIT CARD TRANSACTIONS

Context

A bank is interested in providing higher quality customer service to protect customers' financial assets. The bank has been receiving several complaints about credit card fraud from their customers, and the news media is regularly reporting about how the bank's customers are losing large amounts of money and the bank is doing nothing to stop it. This is impacting both the customer experience and their market share. Senior Management is asking for a deep dive into this issue. You just got hired as the Business Analyst for the bank, and they provided you with 6 months of available data (steps 0 to 179 refer to the dates). They want you to share some insights using the features in the file to determine if you can see a pattern for the fraudulent transactions. They are expecting you to provide some suggestions on how to tackle the problem.

Questions
1. Show a summary of the variable you are studying (target variable). Plot the most appropriate graph to represent this data.
2. Calculate summary statistics from the data.
3. Calculation of daily trends of transactions for different categories of variables.
4. What are your thoughts on the fraudulent transactions? Is there a threshold of the spend? Is there a specific 'gender' with a higher probability to be the victim of a fraudulent act, or a 'category' of transactions with a higher chance to be fraudulent?
5. What are your recommendations to the bank's management, and describe how your solution will help regain trust from customers?
6. Any other data that you would ask the team to provide? Why?

I- Data Cleaning
###Code
# Importing our librairies
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from scipy import stats
from scipy.stats import chi2_contingency
sns.set()
plt.style.use('ggplot')
###Output
_____no_output_____
###Markdown
Data understanding
###Code
# Loading our dataset
customer_data = pd.read_csv('../BI Project 2/dataset/bs140513_032310.csv')
customer_data1 = pd.read_csv('../BI Project 2/dataset/bsNET140513_032310.csv')
customer_data.head()
###Output
_____no_output_____
###Markdown
We can see that a lot of the values in our dataset are wrapped in quotation marks. We will remove them by creating a function to automate the process.
###Code
customer_data.info()
def remove_quotes (df,list1) :
'''
This function allows us to remove the quotations marks in the data.
@df The dataset
@list1 The list of the columns we will remove the variable
'''
try :
for i in list1 :
y = pd.DataFrame(df[i].str.split("'",2).tolist(),columns= ['l','age','l'])
y = y.iloc[:,[1]]
df[i] = y
y= 0
except (KeyError , AssertionError , ValueError):
customer_data = pd.read_csv('../BI Project 2/dataset/bs140513_032310.csv')
###Output
_____no_output_____
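###Markdown
A lighter-weight alternative worth noting (only a sketch, not the approach used in the rest of this notebook): pandas' vectorised string methods can strip the surrounding quotes directly. The column list below simply reuses the one later passed to remove_quotes.
###Code
# Sketch: strip the surrounding single quotes from the quoted object columns in place
quoted_cols = ['customer', 'age', 'gender', 'zipcodeOri', 'merchant', 'zipMerchant', 'category']
for col in quoted_cols:
    customer_data[col] = customer_data[col].str.strip("'")
###Output
_____no_output_____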
###Markdown
II- Data Transformation for analysis
###Code
def age_buckets(x):
''' Function created to categorize Age
'''
if x == '0':
return '18(-)'
elif x == '1' :
return '19-25'
elif x == '2':
return '26-35'
elif x =='3':
return '36-45'
elif x == '4':
return '46-55'
elif x == '5':
return '56-65'
elif x == '6':
return '65(+)'
else :
return 'Unknown'
# Strip the quotation marks first, otherwise the quoted age values never match the age_buckets cases
remove_quotes(df = customer_data, list1 = ['customer','age','gender','zipcodeOri','merchant','zipMerchant','category'])
customer_data['Agegroup'] = customer_data.age.apply(age_buckets)
customer_data
customer_data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 594643 entries, 0 to 594642
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 step 594643 non-null int64
1 customer 594643 non-null object
2 age 594643 non-null object
3 gender 594643 non-null object
4 zipcodeOri 594643 non-null object
5 merchant 594643 non-null object
6 zipMerchant 594643 non-null object
7 category 594643 non-null object
8 amount 594643 non-null float64
9 fraud 594643 non-null int64
10 Agegroup 594643 non-null object
dtypes: float64(1), int64(2), object(8)
memory usage: 49.9+ MB
###Markdown
III- Exploratory Data Analysis 1- Show a summary of the variable you are studying (target variable).Plot the most appropriate graph to represent this data.
###Code
fraud = customer_data['fraud'].value_counts().to_frame()
fraud['Percent'] = customer_data['fraud'].value_counts(normalize = True).to_frame()
fraud
def generate_barchart(data, title ="",abs_value ="Total",rel_value="Percent",figsize =(15,6)):
'''
This function will help us generate bar chart with data label
'''
plt.figure(figsize=figsize)
axes = sns.barplot(data=data,x=data.index,y=abs_value)
i=0
for tot, perc in zip(data[abs_value],data[rel_value]):
axes.text(i ,
tot/2,
str(np.round(perc*100,2))+ "%",
fontdict=dict(color='White',fontsize=12,horizontalalignment="center")
)
axes.text(i,
tot+ 3,
str(tot),
fontdict=dict(color='blue',fontsize=12,horizontalalignment="center")
)
i+=1
plt.title(title)
sns.despine(left=True, bottom=True)
change_width(axes,0.35)
plt.show()
def change_width(ax, new_value) :
'''
This function is created to improve the looks of our barchart
'''
for patch in ax.patches :
current_width = patch.get_width()
diff = current_width - new_value
# we change the bar width
patch.set_width(new_value)
# we recenter the bar
patch.set_x(patch.get_x() + diff * .5)
generate_barchart(fraud,title = 'Transactions of the customers', abs_value="fraud", rel_value="Percent")
###Output
_____no_output_____
###Markdown
This bar chart shows the share of customer transactions that are fraudulent versus legitimate.
###Code
customer_data[['amount','fraud']].describe()
###Output
_____no_output_____
###Markdown
Correlation table between Amount and Fraud
###Code
display(customer_data[['amount','fraud']].corr())
sns.heatmap(customer_data[['amount','fraud']].corr())
###Output
_____no_output_____
###Markdown
2 - Summary Descriptive of our data.
###Code
#Creating a Graph for the non fraudulent transactions
m = customer_data[ customer_data['fraud'] == 0]
display(m['amount'].describe())
sns.histplot(m['amount'], kde= True)
plt.xlim(0,500)
#Creating a graph for the fraudulent transactions
v = customer_data[ customer_data['fraud'] == 1]
display(v['amount'].describe())
sns.histplot(v['amount'], kde= True)
plt.xlim(0,2000)
# creating a pivot table to shows the transaction by category
result = customer_data.pivot_table(values = 'amount', index = 'step', columns = 'category', aggfunc= 'sum').fillna(0)
print('Summary of the amount of transactions money by category')
display(result.describe())
def prob_category(data=customer_data,col="category", abs_value ="Total",rel_value ="Percent",show_plot=False, title=""):
# absolute value
res1 = data[col].value_counts().to_frame()
res1.columns = [abs_value]
res2 = data[col].value_counts(normalize=True).to_frame()  # use the data parameter rather than the global customer_data
res2.columns = [rel_value]
if not show_plot:
return pd.concat([res1,res2],axis=1)
else:
result = pd.concat([res1,res2],axis=1)
generate_barchart(data=result, title =title,abs_value =abs_value,rel_value=rel_value,figsize =(15,6))
return result
prob_category(customer_data, col= "category",show_plot=True)
###Output
_____no_output_____
###Markdown
Victims of fraudulent transactions ?
###Code
age_table = pd.crosstab(customer_data.Agegroup, customer_data.fraud)
# 7200 is the total number of fraudulent transactions (fraud == 1), so this gives each age group's share of all frauds
age_table['Percent'] = age_table[1]/7200
age_table
generate_barchart(age_table,title = 'Victims of Fraudulent Transactions by age Group', abs_value=1, rel_value="Percent")
###Output
_____no_output_____
###Markdown
3- Calculations of daily trends of transactions for different categories of variables
###Code
fraud_gender_f = customer_data[(customer_data['fraud'] == 1) & (customer_data['gender'] == 'F') ][['step','gender','amount']]
fraud_gender_m = customer_data[(customer_data['fraud'] == 1) & (customer_data['gender'] == 'M') ][['step','gender','amount']]
fraud_gender_f = fraud_gender_f.pivot_table(index = 'step', columns = 'gender', aggfunc = 'sum')
fraud_gender_m = fraud_gender_m.pivot_table(index = 'step', columns = 'gender', aggfunc = 'sum')
gender_transactions = customer_data.pivot_table(values = 'amount', index = 'step', columns = 'gender', aggfunc= 'sum')
gender_transactions = gender_transactions.iloc[:,[1,2]]
plt.figure(figsize=(20, 10))
plt.plot(gender_transactions)
plt.plot(fraud_gender_f,linestyle = '--')
plt.plot(fraud_gender_m, linestyle = '--')
plt.title('Daily trends of transactions by Gender')
plt.xlabel('Day')
plt.ylabel('Amount')
plt.legend(['F','M','Female Fraud','Male Fraud'])
plt.show()
###Output
_____no_output_____
###Markdown
4 - Is there a specific ‘gender’ with a higher probability to be the victim of a fraudulent act ?
###Code
gender_table = pd.crosstab(customer_data.fraud, customer_data.gender, margins=True).T
display(gender_table)
gender_table.iloc[:-1,[1]].plot(kind='barh')
plt.title('Number of Fraud by gender')
###Output
_____no_output_____
###Markdown
Looking at the graph, women appear to be victims of fraud more often, but we need a hypothesis test to confirm it.
###Code
display(gender_table)
print (" H0 : The variables are independent \n")
print (" H1 : The variables are dependent \n")
stat, p, dof, expected = chi2_contingency(gender_table)
alpha = 0.05
print("p value is " + str(p))
if p <= alpha:
print('Dependent (reject H0)')
else:
print('Independent (H0 holds true)')
###Output
_____no_output_____
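###Markdown
One caveat worth noting: gender_table was built with margins=True, so it still contains the 'All' totals row and column, and those totals enter the chi-squared test above as if they were extra categories. A minimal sketch of the same test restricted to the real gender-by-fraud counts is shown below; the exact p-value will differ from the one above.
###Code
# Sketch: rebuild the contingency table without margins before testing independence
gender_vs_fraud = pd.crosstab(customer_data.gender, customer_data.fraud)
stat2, p2, dof2, expected2 = chi2_contingency(gender_vs_fraud)
print("p value without margins:", p2)
###Output
_____no_output_____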
###Markdown
To confirm that women are the most likely victims of fraud, we apply Bayes' formula to our hypothesis.
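For reference, the same conditional probability can also be read directly from the table above: 4758 of the 7200 fraudulent transactions fall in the 'F' row, so P(gender = F | fraud = 1) = 4758/7200 ≈ 0.66.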
###Code
gender_table['Percent'] = gender_table[1]/7200
display(gender_table)
#
#Probabilty to be a woman and be victim of fraud
probability_female = customer_data[customer_data['gender'] == 'F'].shape[0]/ customer_data.shape[0]
probability_to_be_victim = 4758/7200
probability_to_be_victim_and_female = probability_female * probability_to_be_victim /(probability_female * probability_to_be_victim + (1-probability_female)*(1-probability_to_be_victim))
print("There's a", np.round(probability_to_be_victim_and_female*100,2), "% probability that a customer is a victim of fraud given that she is a woman")
###Output
_____no_output_____
###Markdown
Category of transactions with higher chance to be fraudulent ?
###Code
#Creating a pivot table for fraud and category
fraudulent_table = customer_data.pivot_table(index=["category"], columns = "fraud", values = 'customer', aggfunc = 'count',margins = True)
fraudulent_table= fraudulent_table.fillna(0).sort_values(by= 1)
display(fraudulent_table.iloc[:,1])
fraudulent_table.iloc[:-1,1].T.plot(kind='barh')
print (" H0 : The variables are independent \n")
print (" H1 : The variables are dependent \n")
stat, p, dof, expected = chi2_contingency(fraudulent_table)
alpha = 0.05
print("p value is " + str(p))
if p <= alpha:
print('Dependent (reject H0)')
else:
print('Independent (H0 holds true)')
probability_sport = customer_data[customer_data['category'] == 'es_sportsandtoys'].shape[0]/ customer_data.shape[0]
probability_to_be_victim = 1982/7200
probability_to_be_victim_and_sport = probability_sport * probability_to_be_victim /(probability_sport * probability_to_be_victim + (1-probability_sport)*(1-probability_to_be_victim))
print("There's a", np.round(probability_to_be_victim_and_sport*100,2), "% probability that a person is a victim of fraud given that the transaction was made in the sports and toys category")
e = pd.crosstab(customer_data.step,customer_data.fraud, margins = True)
display(e)
e.iloc[:,1].value_counts()
###Output
_____no_output_____ |
ProyectoFinal_AME.ipynb | ###Markdown
Information source. The information source used in this project is an image. The image must be located in the same folder as the source code. It is read with the PIL library and converted into a NumPy array that holds the photo's RGB values.
###Code
import numpy as np
from PIL import Image

def fuente_informacion(foto):
'''
La función tiene como objetivo ser la fuente de información.
Se lee una imagen de la computadora para almacenarla en un
array de NumPy donde se tienen las dimensiones de la imagen
y sus canales RGB.
@param foto: Archivo de imagen
return fuente: Vector de pixeles
'''
# Se lee el archivo
fuente = Image.open(foto)
return np.array(fuente) # Se crea un array de pixeles
###Output
_____no_output_____
###Markdown
Source encoder. This block receives the image information (RGB channels) and converts it to bits. It returns a single bit string with the information of the image's 3 channels.
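For example, a single channel value of 200 becomes the 8-bit string '11001000' via format(200, '08b'), and the strings for all values are joined into one long bit sequence.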
###Code
def pixel_a_bits(array_foto):
'''
Función para convertir los valores de los pixeles a binario
como toman valores de 0 a 255 por el formato RGB, se utilizan
8 bits para la codificación.
@param array_foto: Array de una imagen
return bits_Rx: Vector de bits
'''
# Dimensiones de la imagen
alto, largo, canales = array_foto.shape
# Cantidad de elementos en la imagen (pixeles x canales)
cantidad_elementos = alto * largo * canales
# Se pasa el array a un vector del tamaño (1 x cantidad_elementos)
vector_elementos = np.reshape(array_foto, cantidad_elementos)
# Se pasan los valores a binario, es una lista de strings donde cada elemento consiste en 1 byte.
bits = [format(elemnto, '08b') for elemnto in vector_elementos]
# Se dividen los bytes en bits, para obtener una lista entera de 1s y 0s
bits_Rx = np.array(list(''.join(bits)))
return bits_Rx.astype(int) # La lista se da en formato de int
###Output
_____no_output_____
###Markdown
Channel coding. This block splits the bit string received from the source encoder into vectors of size 1 x k. These vectors are multiplied by the matrix G to obtain the vector $\overrightarrow{u}$. The purpose of this step is to add extra bits that allow the vector to be corrected after passing through a noisy channel. The matrix G used is the following:$$\begin{bmatrix} 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 \\ 1 & 1 & 1 & 1 & 0 & 0 & 0 & 1 \end{bmatrix}$$
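As a worked example (computed by hand from the matrix above), the 4-bit message block $m = (1, 0, 1, 1)$ is encoded as $$u = mG \bmod 2 = (1, 1, 0, 0, 1, 0, 1, 1),$$ where the last four bits reproduce $m$ itself because $G = [P \;|\; I_4]$.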
###Code
def codificacion_canal(bits_Tx, matriz_G):
'''
Función de codificación de canal. Esta se encarga de recibir los bits de la codificación
de canal y agregar más con el objetivo de corregir después del error agregado. Para ello,
se hace uso de la matriz Generadora.
@param bits_Tx: Cadena de bits enviada por el codificador de canal
@param matriz_G: Matriz compuesta de la matriz identidad y paridad
return u: Cadena de bits u
'''
# Cantidad de bits en la secuencia
N = len(bits_Tx)
# Se separa en los vectores m (1 x k)
m = np.split(bits_Tx, N/4)
# Se declara la lista de salida (vectores u)
u = []
# Se realiza la multiplicación: u = m * G
for i in range(0, len(m)):
u.append(m[i].dot(matriz_G) % 2)
u = np.concatenate(u, axis=None)
return u.astype(int)
###Output
_____no_output_____
###Markdown
PAM modulation. The PAM modulator can be analysed through its block diagram. It is made of two blocks: one that maps bits to symbols and another that receives the symbols and gives a rectangular pulse the corresponding amplitude. A 2-bit PAM modulator is used, with four symbols separated by 2 units, so the mapping table is the following:

| Bits | Symbol |
| :-: | :-: |
| 00 | 6 |
| 01 | 4 |
| 11 | 2 |
| 10 | 0 |

1. Group b_c(l) into blocks of 2 bits, i.e. b = 2
2. Assign symbols to the bit pairs in a(n)
3. Define the rectangular pulse and store it in the variable p, with Ns = 24 (a list of 24 ones)
4. Modulate a(n), multiplying each symbol by the rectangular pulse
5. Multiply each symbol by the pulse and store it in the variable xn(k)
6. x(k) is the concatenation of all xn(k)
7. Measure the length of the sequence x(k) and compare it with b_c(l)
8. x(k) must be Ns/b times longer than b_c(l)
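For instance, the bit stream 00 01 11 10 maps to the symbol sequence a(n) = (6, 4, 2, 0); with Ns = 24 samples per pulse, every 2 input bits produce 24 output samples, which is the factor of Ns/b = 12 checked later in the simulation.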
###Code
def modulador_PAM (bc_l):
'''
Función de modulador PAM de 2 bits, recibe bits a transmitir y los modula
con la asignación:
00 - 6
01 - 4
11 - 2
10 - 0
@param bc_l: Secuencia de bits proveniente del codificador de canal.
return x_k: Señal modulada con la información que será enviada por el canal ruidoso.
'''
# Paso 1
agrupacion_bits = np.split(bc_l, len(bc_l)/2)  # use the bc_l parameter rather than the global secuencia_codificada
# Paso 2
a_n = []
for bits in agrupacion_bits:
if bits[0] == 0:
if bits[1] == 0:
simbolo = 6
else:
simbolo = 4
else:
if bits[1] == 1:
simbolo = 2
else:
simbolo = 0
a_n.append(simbolo)
# Paso 3
p = np.ones(24)
# Paso 4 y 5
arrayOfxn_k = []
for simbolo in a_n:
xn_k = simbolo * p
arrayOfxn_k.append(xn_k)
# Paso 6
x_k = np.concatenate(arrayOfxn_k, axis=None)
return x_k.astype(int)
###Output
_____no_output_____
###Markdown
ASK modulation. In this case the ASK modulation type is used. This is a digital band-pass modulation that uses a carrier signal, such as a sine or cosine, to carry the information over a channel. The function takes as parameters the bit sequence coming from the channel encoder, the carrier frequency and the symbol time; the latter refers to how long each symbol, i.e. the transmitted information, occupies in the signal. The same mapping as in the PAM modulator is used, with 4 symbols. A sine carrier is then created and multiplied by each symbol, giving a different amplitude for each one. The transmitted signal is therefore a sine of constant frequency whose amplitude changes over time.
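For example, the symbol 4 is sent during its symbol interval as the carrier sine scaled by 4, $4\sin(2\pi f_s t)$, so the receiver can recover the symbol from the carrier's peak amplitude.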
###Code
def modulador_ASK (bc_l, fs, tiempo_simbolo):
'''
Modulador ASK: utiliza un modulador PAM y multiplica por una señal portadora de frecuencia fc.
@param bc_l: Bits del codificador de canal
@param fs: Frecuencia de la portadora
@param tiempo_simbolo: El tiempo de duración de cada símbolo
return x_t: Señal transmitida al canal
return tiempo: Vector de tiempo de la señal
'''
# Modulación de la amplitud
bcT = modulador_PAM(bc_l)
# Muestreo de amplitud
i = 3 # Para muestrear el valor de cada símbolo en el de la posición 4, son 24 valores de amplitud por símbolo
contador_simbolo = 0
simbolos = [0] * (int(np.size(bcT, axis=0)/24)) # 24 porque son 24 valores de amplitud por símbolo
while i < np.size(bcT, axis=0):
simbolos[contador_simbolo] = bcT[i]
contador_simbolo += 1
i += 24
# Definición de señal portadora
tiempo = np.arange(0, tiempo_simbolo, (tiempo_simbolo*2/fs))  # Nyquist-based sampling step adjusted to the symbol time (use the tiempo_simbolo parameter, not the global t_simb)
c_t = np.sin(2 * np.pi * fs * tiempo)
# Modulación en el tiempo
i = 0
arrayOfx_t = [0] * (len(simbolos))
for simbolo in simbolos:
arrayOfx_t[i] = c_t * simbolo
i += 1
x_t = np.concatenate(arrayOfx_t, axis=None)
tiempo = len(x_t)
print(x_t)
print(type(x_t))
return x_t, tiempo
###Output
_____no_output_____
###Markdown
Noisy channel. This function models a channel with noise, as exists in every communication medium. The noise added by this channel follows a normal distribution and is known as Additive White Gaussian Noise (AWGN). This noise is added to the signal generated by the PAM modulator.
###Code
import numpy as np
def canal_ruidoso(senal_Tx, snr=0.2):
'''Un bloque que simula un medio de trans-
misión no ideal (ruidoso).
@param senal_Tx: El vector del modulador
return sneal_Rx: La señal modulada al dejar el canal
'''
# Generando ruido auditivo blanco gaussiano
ruido = np.random.normal(0, 1/snr, senal_Tx.shape)
# Señal distorsionada por el canal ruidoso
senal_Rx = senal_Tx + ruido
return senal_Rx
###Output
_____no_output_____
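###Markdown
A quick sanity check of the noise level (a sketch with an illustrative constant input, not part of the original design): with the default snr=0.2 the standard deviation passed to np.random.normal is 1/0.2 = 5, so the added noise has a variance of roughly 25 around every transmitted sample.
###Code
# Sketch: measure the empirical variance of the AWGN produced by canal_ruidoso for a constant input
test_signal = np.full(100000, 6.0)   # a constant PAM-like level, chosen only for illustration
noisy = canal_ruidoso(test_signal, snr=0.2)
print(np.var(noisy - test_signal))   # should be close to (1/0.2)**2 = 25
###Output
_____no_output_____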
###Markdown
PAM demodulation. The digital PAM demodulator can be seen as a series of 3 blocks. The first is the sampling block, whose input is the signal coming from the (noisy) channel; every $T_{sim}$ it takes a sample of the signal value $y_n^*$. Next, the decision block checks the boundaries of each symbol and, according to the range the sample falls into, assigns it one of the symbols shown in the table above. Finally, the last block receives a chain of symbols which are converted to bits according to the same table.

1. The pulse values of the signal are collected over its duration.
2. The values are averaged.
3. The average is compared with the symbols $a_i$ to decide which one it corresponds to.
4. The symbol sequence is mapped back to its corresponding bits.
5. A bit sequence is produced, which will be the input of the channel decoder.
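For example, a noisy symbol average of 3.7 falls in the decision range $[3, 5)$ used below and is therefore mapped to the symbol 4, i.e. the bits 01.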
###Code
def demodulador_PAM(senal_xR):
'''
Función del bloque demodulador. Este bloque se encarga de recibir la señal
proveniente del canal, muestrear y promediar sus valores para finalmente
asociar los símbolos correspondientes y pasarlos a bits.
@param senal_xR: Señal proveniente del canal, con ruido.
return bits_recuperados: Secuencia de bits correspondientes a la información obtenida del canal.
'''
# 1. Se crea una lista de listas con los valores de muestras para promediarlos
muestras_separadas = np.split(senal_xR, len(senal_xR)/24)
# 2. Se promedian las muestras en cada lista
promedios = []
suma = 0
for list in muestras_separadas:
for muestra in list:
suma = suma + muestra
promedio = suma/24
promedios.append(promedio)
suma = 0
# 3. Con las muestras promediadas, se decida a cuál símbolo se asocia
nuevos_simbolos = []
for prom_simb in promedios:
if prom_simb < 1:
nuevos_simbolos.append(0)
elif prom_simb >= 1 and prom_simb < 3:
nuevos_simbolos.append(2)
elif prom_simb >= 3 and prom_simb < 5:
nuevos_simbolos.append(4)
else:
nuevos_simbolos.append(6)
# 4. Pasamos los símbolos a bits
bits_recuperados = []
for simbolo in nuevos_simbolos:
if simbolo == 0:
bits_recuperados.append([1, 0])
elif simbolo == 2:
bits_recuperados.append([1, 1])
elif simbolo == 4:
bits_recuperados.append([0, 1])
else:
bits_recuperados.append([0, 0])
bits_recuperados = np.concatenate(bits_recuperados, axis=None)
return bits_recuperados.astype(int)
###Output
_____no_output_____
###Markdown
ASK demodulation. This ASK demodulation is very similar to the PAM one; the difference lies in how the received signal is sampled. Since a sine carrier is used, the sampling is done at the maxima and minima, because those points carry the value of the transmitted symbol. The minima (negative values) are made positive with abs() and the sampled values are then averaged. This list of averaged values is used to decide which symbol the received information corresponds to. The remaining steps are the same as in the PAM demodulation already described.
###Code
def demod_ASK(senal_xR, cant_simb, t_simb, fs):
'''
Función de demodulación ASK. Realiza un muestreo de la señal, decide que símbolo le asigna y finalmente pasa el símbolo a bits.
@param senal_xR: Señal proveniente del canal ruidoso.
@param cant_simb = Cantidad de símbolos enviados (información)
@param t_simb: Tiempo de duración de cada símbolo.
@param fs: Frecuencia de la portadora.
return bits_recuperados: Los bits obtenidos del canal, después de la adición de ruido. Idealmente se corrigen en el decodificador de canal.
'''
# 1. Se divide la señal recibida en sublistas, donde cada una contiene todos los valores de un símbolo
arrayOfSymbolsValue = np.split(senal_xR, cant_simb)
# 2. Creo el vector tiempo, que es la duración de cada símbolo
tiempo_demod = np.arange(0, t_simb, (t_simb*2/fs))
# 3. Vector de símbolos muestreados
array_simb = []
# 4. Loop que obtiene los valores de los picos de la señal para cada símbolo. Con picos nos referimos
# a los valores máximos y mínimos de la señal recibida. Como la portadora es un seno, los picos se dan
# en pi/2 y 3pi/2.
for senal_simb in arrayOfSymbolsValue:
# Los picos se dan en valores impares de n
n = 1
# Array con los valores obtenidos
muestreo = []
for pos, val in enumerate(tiempo_demod):
# Se revisa cuando el tiempo está en el rango donde se produce un pico
if val > (n/(4*fs))*0.99 and val < (n/(4*fs))*1.01:
# Se guarda en el array el valor de señal correspondiente a ese tiempo
muestreo.append(senal_simb[pos])
n += 2
# Se pasan los valores a positivo y se promedia
muestreo_abs = np.absolute(muestreo)
muestreo_prom = np.mean(muestreo_abs)
array_simb.append(muestreo_prom)
# 5. Con las muestras promediadas, se decida a cuál símbolo se asocia
nuevos_simbolos = []
for prom_simb in array_simb:
if prom_simb < 1:
nuevos_simbolos.append(0)
elif prom_simb >= 1 and prom_simb < 3:
nuevos_simbolos.append(2)
elif prom_simb >= 3 and prom_simb < 5:
nuevos_simbolos.append(4)
else:
nuevos_simbolos.append(6)
# 4. Pasamos los símbolos a bits
bits_recuperados = []
for simbolo in nuevos_simbolos:
if simbolo == 0:
bits_recuperados.append([1, 0])
elif simbolo == 2:
bits_recuperados.append([1, 1])
elif simbolo == 4:
bits_recuperados.append([0, 1])
else:
bits_recuperados.append([0, 0])
bits_recuperados = np.concatenate(bits_recuperados, axis=None)
return bits_recuperados.astype(int)
###Output
_____no_output_____
###Markdown
Channel decoding. This block receives the bits after demodulation. These bits come from a noisy channel, so errors are expected and the goal is to correct them. For that purpose another matrix, known as H, is used. It is derived from the same matrix G seen above: the received bit string is split into vectors of size 1 x 8 and multiplied by the matrix, which reveals which bit contains an error so it can be corrected. The matrix H is the following:$$\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 1 & 1 & 1 \end{bmatrix}$$
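As a worked example (computed by hand from H above), if the codeword $u = (1, 1, 0, 0, 1, 0, 1, 1)$ from the encoding step arrives with its first bit flipped, $v = (0, 1, 0, 0, 1, 0, 1, 1)$, the syndrome is $$S = vH \bmod 2 = (1, 0, 0, 0),$$ which matches the first row of $H$, so the decoder flips bit 1 and recovers $u$; an error-free codeword gives $S = (0, 0, 0, 0)$.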
###Code
def decodificacion_canal(bits_recibidos, matriz_H):
'''
Esta función tiene como objetivo corregir errores creados por el canal ruidoso. Por lo tanto,
se utiliza una matriz H para hallar en que posición se halla el error y poder corregirlo. Recibe
la cadena de bits del demodulador y la matriz H.
@param bits_recibidos: Cadena de bits enviada por el demodulador.
@param matriz_H: Matriz H para la corrección de errores.
return M_secuencia: Devuelve una cadena menor de bits con la información recuperada y corregida.
'''
# Cantidad de bits en la secuencia
N = len(bits_recibidos)
# Se separa en los vectores v (1 x n)
v = np.split(bits_recibidos, N/8)
# Se declara la lista de síndromes
S = []
# Se realiza la multiplicación v * H = S
for i in range(0, len(v)):
S.append(v[i].dot(matriz_H) % 2)
# Se transforma S en un array de numpy
S = np.array(S)
# Se declara la lista que contiene los vectores Ux
Ux = []
# Se utiliza un loop que revisa el resultado del síndrome y corrige el error según la posición en la matriz.
for sindrome in range(0, np.size(S, 0)):
if (np.all(S[sindrome] == matriz_H[0, :])):
Ux.append((v[sindrome] + [1, 0, 0, 0, 0, 0, 0, 0]) % 2)
elif (np.all(S[sindrome] == matriz_H[1, :])):
Ux.append((v[sindrome] + [0, 1, 0, 0, 0, 0, 0, 0]) % 2)
elif (np.all(S[sindrome] == matriz_H[2, :])):
Ux.append((v[sindrome] + [0, 0, 1, 0, 0, 0, 0, 0]) % 2)
elif (np.all(S[sindrome] == matriz_H[3, :])):
Ux.append((v[sindrome] + [0, 0, 0, 1, 0, 0, 0, 0]) % 2)
elif (np.all(S[sindrome] == matriz_H[4, :])):
Ux.append((v[sindrome] + [0, 0, 0, 0, 1, 0, 0, 0]) % 2)
elif (np.all(S[sindrome] == matriz_H[5, :])):
Ux.append((v[sindrome] + [0, 0, 0, 0, 0, 1, 0, 0]) % 2)
elif (np.all(S[sindrome] == matriz_H[6, :])):
Ux.append((v[sindrome] + [0, 0, 0, 0, 0, 0, 1, 0]) % 2)
elif (np.all(S[sindrome] == matriz_H[7, :])):
Ux.append((v[sindrome] + [0, 0, 0, 0, 0, 0, 0, 1]) % 2)
else:
Ux.append(v[sindrome])
'''
Luego tratar con:
error = zeros(8)
for i in range(0, np.size(matriz_H, 0))
if (np.all(S[sindrome] == matriz_H[i, :])):
error[i] = 1
Ux.append((v[sindrome] + error) % 2)
error = zeros(8)
else:
Ux.append(v[sindrome])
'''
# Se declara la lista de vectores Mx
Mx = []
# Se obtienen los bits provenientes de la matriz identidad
for j in range(0, np.size(Ux, 0)):
Mx.append(np.delete(Ux[j], [0, 1, 2, 3]))
# Se crea la secuencia de salida
M_secuencia = np.concatenate(Mx, axis=None)
return M_secuencia
###Output
_____no_output_____
###Markdown
Source decoder. Once the bit string has been corrected by the channel decoder, it is passed to the source decoder. This block reconstructs the image from the recovered information. The dimensions of the original image must be provided so the function can convert the bits back to their RGB values and rebuild the original image.
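For example, the 8-bit group '11001000' is turned back into the value 200 with int('11001000', 2), undoing the format(..., '08b') step of the source encoder.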
###Code
def bits_a_pixel(bits_Rx, dimensiones):
'''
Se tienen los bits de la codificación para reconstruir los valores de
los elementos y los canales de la imagen.
@param bits_Rx: Vector de bits (1 x cantidad_elementos)
@param dimesniones: Dimensiones originales de la imagen
return pixel: Array con los elementos reconstruidos
'''
# Cantidad de bits
N = len(bits_Rx)
# Se separan los elementos en 8 bits (canales)
byte = np.split(bits_Rx, N/8)
# Se unen los 8 bits y se pasan a decimal
elemento_reconstruido = [int(''.join(map(str, muestra_bit)), 2) for muestra_bit in byte]
# Se crea un array con el tamaño original de la imagen
pixel = np.reshape(elemento_reconstruido, dimensiones)
return pixel.astype(np.uint8)
###Output
_____no_output_____
###Markdown
Simulation of the system with PAM modulation. The whole communication system is simulated: source encoder, channel encoder, modulator, noisy channel, demodulator, channel decoder and source decoder. In this case PAM modulation is used.
###Code
import numpy as np
# 1. Se obtiene la imagen, que es la información a transimitr.
foto_Tx = fuente_informacion('prueba.jfif')
dimensiones = foto_Tx.shape
# 2. Codificación de fuente (pixels -> bits)
bits_Tx = pixel_a_bits(foto_Tx)
# 3. Codificación de canal (Se preparan las secuencias de bits para futura corrección)
# Matriz paridad (4 x 4)
P = np.array([[0, 1, 1, 0], [1, 1, 0, 0], [0, 1, 0, 1], [1, 1, 1, 1]])
# Matriz identidad (4 x 4)
I = np.identity(4)
# Matriz G (Generadora)
G = np.append(P, I, axis=1)
# Se obtienene la cadena de bits de salida de la codificación de canal
secuencia_codificada = codificacion_canal(bits_Tx, G)
# 4. Modulación (Información enviada como una señal)
# PAM
x_k = modulador_PAM(secuencia_codificada)
# Comprobación de la relación de tamaño teórica.
print("Longitud de x(k): ", len(x_k))
print("Longitud de bc(l): ", len(secuencia_codificada))
Ns = 24
b = 2
razon = Ns/b
print("Según instrucciones, x(k) debe ser {} veces mayor que bc(l)".format(razon))
# Comprobación de la relación de tamaño experimental.
razon_arrays = len(x_k)/len(secuencia_codificada)
print("La razón entre la longitud de las cadenas de bits de salida y entrada es: ", razon_arrays)
# 5. Canal ruidoso. Se agrega ruido blanco a la señal.
senal_xR = canal_ruidoso(x_k)
# 6. Demodulación. Se lee la información recibida del canal y se pasan a los bits de información.
# PAM
bits_demodulados = demodulador_PAM(senal_xR)
# Comprobación de la relación de tamaño teórica.
print('Según instrucciones, $b_c^*(l)$ debe ser {} veces menor que $x^*(k)$'.format(razon))
# Comprobación de la relación de tamaño experimental.
razon_demodulacion = len(senal_xR)/len(bits_demodulados)
print("La razón entre la longitud de la salida del canal y los bits recuperados es: ", razon_demodulacion)
# 7. Decodificación de canal. Se corrigen errores dados por el canal ruidoso.
# Matriz de Hamming
H = np.append(I, P, axis=0)
Mx_secuencia = decodificacion_canal(bits_demodulados, H)
# 8. Decodificación de fuente. (Bits -> pixeles)
foto_Rx = bits_a_pixel(Mx_secuencia, dimensiones)
# 9. Imagen recibida. (Sumidero de información)
nueva_foto = Image.fromarray(foto_Rx)
nueva_foto.save('Salida.jpg')
###Output
Longitud de x(k): 28941696
Longitud de bc(l): 2411808
Según instrucciones, x(k) debe ser 12.0 veces mayor que bc(l)
La razón entre la longitud de las cadenas de bits de salida y entrada es: 12.0
Según instrucciones, $b_c^*(l)$ debe ser 12.0 veces menor que $x^*(k)$
La razón entre la longitud de la salida del canal y los bits recuperados es: 12.0
###Markdown
Visualization of the signal before and after the channel (PAM). The goal here is to visualize a portion of the information sent by the modulator into the channel and the portion received by the demodulator once noise has been added to the signal.
###Code
import matplotlib.pyplot as plt
# Visualizar las señales PAM
fig, (ax1, ax2) = plt.subplots(nrows=2, sharex=True, figsize=(14, 7))
# Señal modulada
ax1.plot(x_k[0:600], color='g', lw=2)
ax1.set_ylabel('$s(t)$')
ax1.title.set_text('Señal modulada, entrada al canal')
# Señal modulada después del canal
ax2.plot(senal_xR[0:600], color='b', lw=2)
ax2.set_ylabel('$s(t) + n(t)$')
ax2.title.set_text('Señal de salida del canal. Ruido AWGN.')
ax2.set_xlabel('$t [ms]$')
fig.suptitle('Modulación PAM')
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
ASK simulation. The whole communication system is simulated again: modulator, noisy channel, demodulator, channel decoder and source decoder. In this case order-4 ASK modulation is used. For the source and channel encoders the same information obtained in the previous simulation is reused, since the bits would be identical.
###Code
# 1. Se definen las variables de la modulación
t_simb = 0.04
fs = 1000
# 2. El proceso hasta la codificación de canal es igual. Se empieza con la modulación ASK
modulated_signal_ASK, tiempo = modulador_ASK(secuencia_codificada, fs, t_simb)
# 3. Se pasa la señal modulada por el canal ruidoso
AWGN_signal_ASK = canal_ruidoso(modulated_signal_ASK, snr=1)
# 4. Se procede a la demodulación, donde se tiene que enviar la cantidad de símbolos.
ordenModulador = 2
cant_simb = len(secuencia_codificada)/ordenModulador
demodulated_signal_ASK = demod_ASK(AWGN_signal_ASK, cant_simb, t_simb, fs)
# 5. Se sigue con la decodificación de canal para este nuevo método de modulación.
decoded_signal_ASK = decodificacion_canal(demodulated_signal_ASK, H)
# 6. Se procede con la decodificación de fuente. (bits -> pixeles)
photo_ASK = bits_a_pixel(decoded_signal_ASK, dimensiones)
# 7. Imagen recibida. (Sumidero de información)
new_photo = Image.fromarray(photo_ASK)
new_photo.save('ASK_output.jpg')  # use a distinct file name so the Komm-based simulation below does not overwrite this result
###Output
_____no_output_____
###Markdown
Simulation of the system with ASK using the Komm library. Here the same ASK modulation is performed, but using the Komm library, which simplifies the modulation and demodulation of the signals. For modulation only the bit array from the channel encoder is needed; inside the function the bits are mapped to symbols and a signal whose amplitude is scaled by each symbol is produced. The demodulation likewise performs the required sampling, assigns symbols according to the measured value (decision) and finally converts the symbols back to bits. The documentation of this library is available at: https://komm.readthedocs.io/en/latest/komm.ASKModulation/
###Code
import komm  # the komm library is used below but was not imported earlier in the notebook
# 1. Se llama a la clase de modulación ASK de la librería Komm
signal_ASK = komm.ASKModulation(4, base_amplitude=2)
# 2. Se utilza la función de modulación de la clase ASK. El parámetro de entrada es la secuencia de bits de salida del codificador de canal.
modulated_signal_ASK = signal_ASK.modulate(secuencia_codificada)
# 3. Se agrega ruido a la señal. El parámetro de entrada es la señal modulada.
AWGN_signal_ASK = canal_ruidoso(modulated_signal_ASK, snr=1)
# 4. Se utiliza la función de demodulación de la clase ASK. El parámetro de entrada es la señal con ruido proveniente del canal.
demodulated_signal_ASK = signal_ASK.demodulate(AWGN_signal_ASK)
# 5. Se sigue con la decodificación de canal para este nuevo método de modulación.
decoded_signal_ASK = decodificacion_canal(demodulated_signal_ASK, H)
# 6. Se procede con la decodificación de fuente. (bits -> pixeles)
photo_ASK = bits_a_pixel(decoded_signal_ASK, dimensiones)
# 7. Imagen recibida. (Sumidero de información)
new_photo = Image.fromarray(photo_ASK)
new_photo.save('ASK_output_komm.jpg')
###Output
_____no_output_____
###Markdown
Visualization of the signals with the Komm library.
###Code
# Visualización de señales. Modulación ASK con Komm.
import matplotlib.pyplot as plt
fig, (ax1, ax2) = plt.subplots(nrows=2, sharex=True, figsize=(14, 7))
# Señal modulada
ax1.plot(modulated_signal_ASK[0:600], color='g', lw=2)
ax1.set_ylabel('$s(t)$')
ax1.title.set_text('Señal modulada, entrada al canal')
# Señal con ruido
ax2.plot(AWGN_signal_ASK[0:600], color='b', lw=2)
ax2.set_ylabel('$s(t) + n(t)$')
ax2.set_xlabel('$t [ms]$')
ax2.title.set_text('Señal de salida del canal. Ruido AWGN.')
fig.suptitle('Modulación ASK')
fig.tight_layout()
plt.show()
###Output
C:\Users\Usuario\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\numpy\core\_asarray.py:102: ComplexWarning: Casting complex values to real discards the imaginary part
return array(a, dtype, copy=False, order=order)
C:\Users\Usuario\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\numpy\core\_asarray.py:102: ComplexWarning: Casting complex values to real discards the imaginary part
return array(a, dtype, copy=False, order=order)
|
01-analysing-data.ipynb | ###Markdown
print ('New weight: ', weight_kg * 2.2)
###Code
print ('New weight: ', weight_kg * 2.2)
%whos
data = numpy.loadtxt(fname='data/data/weather-01.csv',delimiter = ',')
print (data)
print (type(data))
%whos
# Finding out the data type
print (data.dtype)
# Find out the shape
print (data.shape)
# This is 60 rows * 40 columns
# Getting a single number out of the array
print ("First value in data: ", data [0,0])
print('A middle value: ', data[30,20])
# Lets get the first 10 columns for the first 4 rows
print(data[0:4, 0:10])
# Start at index 0 and go up to BUT NOT INCLUDING index 4
# We don't need to start slicing at 0
print (data[5:10, 7:15])
# We don't even need to include the UPPER and LOWER bounds
smallchunk = data [:3, 36:]
print(smallchunk)
#Arithmetic on arrays
doublesmallchunk = smallchunk * 2.0
print(doublesmallchunk)
triplesmallchunk = smallchunk + doublesmallchunk
print (triplesmallchunk)
print (numpy.mean(data))
print (numpy.min(data))
# Get a set of data for the first station
station_0 = data [0,:]
print(numpy.max(station_0))
# We don't need to create 'temporary' array slices
# We can refer to what we call array axes
# axis = 0 gets the mean DOWN each column, so the mean temperature
# for each recording period
print (numpy.mean(data,axis = 0))
# axis = 1 gets the mean ACROSS each row, so the mean temperature
# for each station for all the periods
print (numpy.mean(data,axis = 1))
# Do some simple visualisations
import matplotlib.pyplot
%matplotlib inline
image = matplotlib.pyplot.imshow(data)
# Let's look at the average temperature over time
avg_temperature = numpy.mean(data,axis = 0)
avg_plot = matplotlib.pyplot.plot(avg_temperature)
# Task:
# * Produce maximum and minimum plots of this data
# * What do you think?
max_temperature = numpy.max(data,axis = 0)
min_temperature = numpy.min(data,axis = 0)
avg_plot = matplotlib.pyplot.plot(avg_temperature)
max_plot = matplotlib.pyplot.plot(max_temperature)
min_plot = matplotlib.pyplot.plot(min_temperature)
###Output
_____no_output_____
###Markdown
Analysing tabular data We are going to use a LIBRARY called numpy
###Code
import numpy
numpy.loadtxt (fname= 'data/weather-01.csv', delimiter = ',')
###Output
_____no_output_____
###Markdown
Variables
###Code
weight_kg = 55
print (weight_kg)
print ('weight in pounds: ', weight_kg * 2.2)
weight_kg = 57.5
print ('new weight:', weight_kg * 2.2)
%whos
data = numpy.loadtxt (fname= 'data/weather-01.csv', delimiter = ',')
print (data)
print (type(data))
%whos
# Finding out the data type
print (data.dtype)
# Find out the shape
print (data.shape)
# This is 60 rows * 40 columns
# Getting a single number out of the array
print ("First value in data: ", data [0,0])
print ('A middle value: ', data[30, 20])
# Lets get the first 10 columns for the first 4 rows
print(data[0:4, 0:10])
# start at index 0 and go up to but not including index 4
# We don't need to start slicing at 0
print (data [5:10, 7:15])
# We don't even need to include the upper and lower bounds - automatically assumes the start or end.
smallchunk = data [:3, 36:]
print (smallchunk)
# Arithmetic on arrays
doublesmallchunk = smallchunk * 2.0
print (doublesmallchunk)
triplesmallchunk = smallchunk + doublesmallchunk
print (triplesmallchunk)
print (numpy.mean(data))
print (numpy.max(data))
print (numpy.min(data))
# Get a set of data for the first station, so first row and all columns
station_0 = data [0, :]
print (numpy.max(station_0))
# we don't need to create 'temporary' array slices
# we can refer to what we call array axes
print (numpy.mean(data, axis = 0))
# axis = 0 gets the mean down each column, so the mean temperature for each recording period.
print (numpy.mean(data, axis = 1))
# axis = 1 gets the mean across each row, so the mean for each station for all periods.
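# A quick check of the two shapes (sketch): the data is 60 stations by 40 recording periods,
# so axis = 0 leaves one value per period and axis = 1 leaves one value per station.
print (numpy.mean(data, axis = 0).shape)
print (numpy.mean(data, axis = 1).shape)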
# Do some simple visualisations
import matplotlib.pyplot
%matplotlib inline
image = matplotlib.pyplot.imshow (data)
# Let's look at the average temperature over time
avg_temperature = numpy.mean(data, axis = 0)
avg_plot = matplotlib.pyplot.plot(avg_temperature)
# Try to make the minimum plot.
min_temperature = numpy.min(data, axis = 0)
min_plot = matplotlib.pyplot.plot(min_temperature)
# Try to do maximum plot.
max_temperature = numpy.max(data, axis = 0)
max_plot = matplotlib.pyplot.plot(max_temperature)
###Output
_____no_output_____
###Markdown
Analysing tabular data We are going to use a LIBRARY called numpy
###Code
import numpy
numpy.loadtxt(fname='data/weather-01.csv', delimiter = ',')
###Output
_____no_output_____
###Markdown
Variables
###Code
weight_kg = 55
print (weight_kg)
print ('Weight in pounds: ', weight_kg * 2.2)
weight_kg = 57.5
print ('New weight: ', weight_kg * 2.2)
%whos
data = numpy.loadtxt(fname='data/weather-01.csv', delimiter = ',')
print (data)
print (type(data))
%whos
# Finding out the data type
print(data.dtype)
# Find out the shape
print (data.shape)
# This is 60 rows * 40 columns
# Getting a single number out of the array
print("First value in data: " , data[0,0])
print('A middle value: ' , data[30,20])
# Lets get the first 10 columns for the first 4 rows
print (data[0:4, 0:10])
#Start at index 0 and go up to BUT NOT INCLUDING index 4
# We don't need to start slicing at 0
print(data [5:10, 7:15])
# We don't even need to include the UPPER and LOWER bounds
smallchunk = data [:3, 36:]
print (smallchunk)
# Arithmetic on arrays
doublesmallchunk = smallchunk * 2.0
print (doublesmallchunk)
triplesmallchunk = smallchunk + doublesmallchunk
###Output
_____no_output_____
###Markdown
print (triplesmallchunk)
###Code
print (triplesmallchunk)
print (numpy.mean(data))
print (numpy.max(data))
print (numpy.min(data))
# Get a set of data for the first station
station_0 = data [0, :]
print (numpy.max(station_0))
# We don't need to create 'temporary' array slices
# We can refer to what we call array axes
# axis = 0 gets the mean DOWN each column , so the mean temperature for each recording period
print (numpy.mean(data, axis = 0))
# axis = 1 gets the mean ACROSS each row, so the mean temperature for each station for all the periods
print (numpy.mean (data, axis = 1))
# Do some simple visualisation
import matplotlib.pyplot
%matplotlib inline
image = matplotlib.pyplot.imshow(data)
# Let's look at the average temperature over time
avg_temperature = numpy.mean (data, axis = 0)
avg_plot = matplotlib.pyplot.plot(avg_temperature)
###Output
_____no_output_____
###Markdown
Task:* Produce maximum and minimum plots of this data* What do you think?
###Code
max_temperature = numpy.max (data, axis = 0)
min_temperature = numpy.min (data, axis = 0)
max_plot = matplotlib.pyplot.plot(max_temperature)
min_plot = matplotlib.pyplot.plot(min_temperature)
###Output
_____no_output_____
###Markdown
Analysing tabular data We are going to use a LIBRARY called numpy
###Code
import numpy
numpy.loadtxt(fname='data/weather-01.csv', delimiter = ',')
###Output
_____no_output_____
###Markdown
Variables
###Code
weight_kg=55
print (weight_kg)
print ('weight in pounds: ', weight_kg * 2.2 )
weight_kg=57.5
print ('New weight: ', weight_kg * 2.2)
%whos
data=numpy.loadtxt(fname='data/weather-01.csv', delimiter = ',')
print (data)
print(type(data))
#Find out the data type
print (data.dtype)
#find out the shape
print (data.shape)
#This is 60 rows * 40 columns
#Getting a single number out of the array
print ("First value in data: ", data [0, 0])
print ('A middle value: ', data[30, 20])
# Lets get the 1st 10 columns for the first 4 rows
print(data[0:4, 0:10])
# Start at index 0 and go up to BUT NOT INCLUDING index 4
#We don't need to start slicing at 0
print (data[5:10, 7:15])
#We don't even need to include the UPPER or LOWER bounds
smallchunck = data [:3, 36:]
print (smallchunck)
#Arithmetic on arrays
doublesmallchunck = smallchunck * 2.0
print (doublesmallchunck)
triplesmallchunck = smallchunck + doublesmallchunck
print(triplesmallchunck)
print (numpy.mean(data))
print (numpy.max(data))
print (numpy.min(data))
#Get a set of data for the first station 0 means everyting from the first row and : means all the columns
station_0 = data [0, :]
print (station_0)
print (numpy.max(station_0))
#We don't need to create 'temporary' array slices
#We can refer to what we call array axes
# axis = 0 gets the mean down each column, so the mean temperature for each recording period
print (numpy.mean(data, axis = 0))
# axis = 1 gets the mean ACROSS each row, so the mean temperature for each station for all the periods
print (numpy.mean(data, axis = 1))
# Do some simple visualisations
import matplotlib.pyplot
%matplotlib inline
image = matplotlib.pyplot.imshow(data)
#Let's look at the average temperature over time
avg_temperature = numpy.mean(data, axis = 0)
avg_plot = matplotlib.pyplot.plot(avg_temperature)
###Output
_____no_output_____
###Markdown
Task Produce maximum and minimum plots of this dataWhat do you think?
###Code
max_temperature = numpy.max (data, axis=0)
min_temperature = numpy.min (data, axis=0)
max_plot = matplotlib.pyplot.plot(max_temperature)
min_plot = matplotlib.pyplot.plot(min_temperature)
###Output
_____no_output_____
###Markdown
Analysing tabular data We are going to use a LIBRARY called numpy
###Code
import numpy
numpy.loadtxt(fname = 'data/weather-01.csv', delimiter = ',')
###Output
_____no_output_____
###Markdown
Variables
###Code
weight_kg = 55
print(weight_kg)
print('Weight in pounds: ', weight_kg * 2.2)
weight_kg = 57
weight_kg = 57.5
print('New weight: ', weight_kg * 2.2)
%whos
data = numpy.loadtxt(fname = 'data/weather-01.csv', delimiter = ',')
data
print(type(data))
type(data)
%whos
# Finding out the data type
print(data.dtype)
# Find out the shape
print(data.shape)
# This is 60 rows by(*) 40 columns
# Getting a single number out of the array
print ("First value in data:", data [0, 0]) # numbers start at 0, unlike 1 for R - so first row is 0
print ('A middle value: ', data[30, 20])
# Lets get the first 10 columns for the first 4 rows
print(data[0:4, 0:10]) # start at 0 and go up to, but dont include 4 (so 0:3, in R would be 1:4)
# dont have to start slicing at 0
print(data[5:10, 7:15])
# Dont even need to include the UPPER and LOWER bounds
smallchunk = data[:3, 36:]
print(smallchunk)
# Arithmetic on arrays
doublesmallchunk = smallchunk * 2.0
print(doublesmallchunk)
triplesmallchunk = smallchunk + doublesmallchunk
print(triplesmallchunk)
print (numpy.mean(data))
numpy.mean(data)
print (numpy.max(data))
print(numpy.min(data))
# Get a set of data for the first station (data set is columns (time intervals) and rows (weather stations))
station_0 = data[0, :] # can put just : for all columns
print(numpy.max(station_0))
# We dont need to create 'temporary' array slices
# We can refer to what we call array axes
print(numpy.mean(data, axis = 1))
# do some simple visualisations
import matplotlib.pyplot
%matplotlib inline
image = matplotlib.pyplot.imshow(data)
# Lets look at the average temperature over time
avg_temperature = numpy.mean(data, axis = 0)
avg_plot = matplotlib.pyplot.plot(avg_temperature)
#Task: produce max and minimum plots
max_temp = numpy.max(data, axis = 0)
min_temp = numpy.min(data, axis = 0)
min_plot = matplotlib.pyplot.plot(min_temp)
max_plot = matplotlib.pyplot.plot(max_temp)
###Output
_____no_output_____
###Markdown
Analysing tabular data We are going to use a LIBRARY called numpy
###Code
import numpy
numpy.loadtxt(fname = 'data/weather-01.csv',delimiter = ',')
###Output
_____no_output_____
###Markdown
Variables
###Code
weight_kg = 55
print (weight_kg)
print ('Weight in pounds:', weight_kg * 2.2)
weight_kg = 57.5
print ('New Weight: ',weight_kg*2.2)
%whos
data = numpy.loadtxt(fname = 'data/weather-01.csv',delimiter = ',')
print (data)
print(type(data))
%whos
# Finding out the data type
print (data.dtype)
# Find out the shape
print(data.shape)
# This is 60 rows by 40 columns
# getting a single number out of the array
print ("First value in data: ", data [0, 0])
print ('a middle value: ', data[30, 20])
# Lets get the first 10 columns for the first 4 rows
print (data[0:4,0:10])
# index says start at 0 and go up to but dont include 4
# we dont need to start slicing at zero
print (data [5:10,7:15])
# we dont even need to include the UPPER and LOWER bounds
smallchunk = data [:3,36:]
print(smallchunk)
# aritmetic on arrays
doublesmallchunk = smallchunk * 2.0
print(doublesmallchunk)
triplesmallchunk = smallchunk + doublesmallchunk
print(triplesmallchunk)
print(numpy.mean(data))
print(numpy.mean(triplesmallchunk))
print(numpy.max(data))
print(numpy.min(data))
# get a set of data from the first station
station_0 = data [0, :]
print (numpy.max(station_0))
# We dont need to create 'temporary' array slices
# We can refer to what we call array axes
# axis = 0 gets the mean down each column , so the mean
# temperature for each recording period
print(numpy.mean(data, axis = 0))
# axis = 1 gets the mean across each row , so the mean
# temperature for each station for all periods
print(numpy.mean(data, axis =1))
# Do some simple visulisations
import matplotlib.pyplot
%matplotlib inline
image = matplotlib.pyplot.imshow(data)
# let's look at the average temperature over time
avg_temperature = numpy.mean(data,axis =0)
avg_plot = matplotlib.pyplot.plot(avg_temperature)
min_temperature = numpy.min(data,axis = 0)
max_temperature = numpy.max(data,axis = 0)
min_plot = matplotlib.pyplot.plot(min_temperature)
max_plot = matplotlib.pyplot.plot(max_temperature)
avg_plot = matplotlib.pyplot.plot(avg_temperature)
max_plot = matplotlib.pyplot.plot(max_temperature)
min_plot = matplotlib.pyplot.plot(min_temperature)
###Output
_____no_output_____
###Markdown
Analysing tabular data we are going to use a LIBRARY called numpy
###Code
import numpy
numpy.loadtxt(fname='data/weather-01.csv', delimiter = ',')
###Output
_____no_output_____
###Markdown
Variables
###Code
weight_kg = 55
print (weight_kg)
print ('Weight in pounds:', weight_kg * 2.2)
weight_kg = 57.5
print ('Weight in pounds:', weight_kg * 2.2)
%whos
data = numpy.loadtxt(fname='data/weather-01.csv', delimiter = ',')
print (data)
print (type(data))
%whos
# Finding out the data type
print (data.dtype)
# Find out the shape
print (data.shape)
# This is 60 rows *40 columns
# Getting a single number out of the array
print ("First value in data:", data [0,0])
print ('a middle value:', data [30,20])
# Lets get the first 10 columns for the first 4 rows
print (data [0:4, 0:10])
# start at index 0 and go up to But not including index 4
# We don't need to start slicing at 0
print (data[5:10, 7:15])
# We dont even need to include the UPPER and LOWER bounds
smallchunk = data [:3, 36:]
print (smallchunk)
# Arithmetic on arrays
doublesmallchunk = smallchunk * 2.0
print (doublesmallchunk)
triplesmallchunk = smallchunk + doublesmallchunk
print (triplesmallchunk)
print (numpy.mean (data))
print (numpy.max(data))
print (numpy.min(data))
# Get a set of data for the first station
station_0 = data [0, :]
print (numpy.max (station_0))
# We don't need to create 'temporary' array slices
# We can refer to what we call array axes
# Axis = 0 gets the mean Down each column, so the mean temperature for each recording period
print (numpy.mean (data, axis = 0))
# Axis = 1 gets the mean Across each row, so the mean temperature for each station across all periods
print (numpy.mean (data, axis = 1))
# Do some simple visualisations
import matplotlib.pyplot
%matplotlib inline
image = matplotlib.pyplot.imshow(data)
# Let's look at the average temperature over time
avg_temperature = numpy.mean (data, axis = 0)
avg_plot = matplotlib.pyplot.plot(avg_temperature)
###Output
_____no_output_____
###Markdown
Task: - Produce maximum and minimum plots of this data - What do you think?
###Code
Max_plot = matplotlib.pyplot.plot (numpy.max(data, axis =0))
Min_plot = matplotlib.pyplot.plot (numpy.min(data, axis =0))
max_temp = numpy.max (data, axis=0)
min_temp = numpy.min (data, axis=0)
max_plot = matplotlib.pyplot.plot (max_temp)
min_plot = matplotlib.pyplot.plot (min_temp)
###Output
_____no_output_____
###Markdown
Analysing tabular data We are going to use a LIBRARY called numpy
###Code
import numpy
numpy.loadtxt(fname='data/data/weather-01.csv', delimiter = ',')
###Output
_____no_output_____
###Markdown
Variables
###Code
weight_kg = 55
print (weight_kg)
print ('Weight in pounds: ', weight_kg*2.2)
weight_kg = 57.5
print ('New weight: ', weight_kg*2.2)
%whos
data = numpy.loadtxt(fname='data/data/weather-01.csv', delimiter = ',')
print (data)
print (type(data))
%whos
#Finding out the data type
print (data.dtype)
#Find out the shape
print (data.shape)
#This is 60 rows * 40 columns
#Getting a single number out of the array
print ("First value in data: ", data[0,0])
print ("A middle value: ", data[30,20])
#slicing - lets get the first 10 columns for the first four rows
print (data[0:4,0:10])
#start at index 0 and go up to but not including index 4 (/10)
#we don't need to start at 0:
print(data[5:10,7:15])
#rows 6 to 10 and columns 6 to 15
#we don't even need the UPPER or LOWER bounds, will assume start or end
print (data[:4,:10])
smallchunk = data [:3,36:]
print (smallchunk)
#arithmetic on arrays
doublesmallchunk = smallchunk * 2.0
print (doublesmallchunk)
triplesmallchunk = smallchunk + doublesmallchunk
print (triplesmallchunk)
print (numpy.mean(data))
print (numpy.min(data))
print (numpy.max(data))
#Get a set of data for the first station
station_0 = data[0,:]
print(numpy.max(station_0))
#we don't need to create 'temporary' array slices
print(numpy.max(data[0,:]))
#we could refer to 'array axes'
print (numpy.mean(data, axis = 0))
#average for columns axis = 0. for rows, axis = 1
#in this case columns are time periods, rows are weather stations. Here we have averages for each time period.
print (numpy.mean(data, axis = 1))
#Here we have averages for each station
#display of data - simple visualisation
import matplotlib.pyplot
%matplotlib inline
image = matplotlib.pyplot.imshow(data)
#Lets look at averge temperature over time
avg_temp = numpy.mean(data, axis = 0)
avg_plot = matplotlib.pyplot.plot(avg_temp)
max_temp = numpy.max(data, axis = 0)
max_plot = matplotlib.pyplot.plot(max_temp)
min_temp = numpy.min(data, axis = 0)
min_plot = matplotlib.pyplot.plot(min_temp)
###Output
_____no_output_____
###Markdown
Analysing tabular data We are going to use a LIBRARY called numpy
###Code
import numpy
numpy.loadtxt(fname='data/data/weather-01.csv', delimiter = ',')
!ls
!ls data
!ls data/data
###Output
small-01.csv
small-02.csv
small-03.csv
weather-01.csv
weather-02.csv
weather-03.csv
weather-04.csv
weather-05.csv
weather-06.csv
weather-07.csv
weather-08.csv
weather-09.csv
weather-10.csv
weather-11.csv
weather-12.csv
###Markdown
Variables
###Code
weight_kg = 55
print (weight_kg)
print('Weight in pounds:', weight_kg*2.2)
weight_kg = 57.5
print ('New weight:', weight_kg * 2.2)
%whos
data = numpy.loadtxt(fname='data/data/weather-01.csv', delimiter = ',')
print (data)
print (type(data))
%whos
# Finding out the data type
print(data.dtype)
# Find out the shape
print(data.shape)
# This is 60 rows * 40 columns
# Getting a single number out of the array
print ("First value in data:",data[0,0])
print ('A middle value:', data[30,20])
# Lets get the first 10 columns for the first 4 rows
print(data[0:4, 0:10])
# Start at index 0 and go up to BUT NOT INCLUDING index 4
# We don't need to start slicing at 0
print (data[5:10, 7:15])
# We don't even need to include the UPPER AND LOWER bounds
smallchunk= data[:3, 36:]
print (smallchunk)
# Arithmetic on array
doublesmallchunk = smallchunk * 2.0
print(doublesmallchunk)
triplesmallchunk = smallchunk + doublesmallchunk
print (triplesmallchunk)
print (numpy.mean(data))
print (numpy.max(data))
print (numpy.min(data))
# Get a set of data for the first station
station_0 = data [0, :]
print (numpy.max(station_0))
# We don't need to create 'temporary' array slices
# We can refer to what we call array axes
# axis = 0 gets the mean DOWN each column, so the mean temperatures for each recording time period
print (numpy.mean(data, axis = 0))
# axis = 1 gets the mean ACROSS each row, so the mean temperature for each station across all time periods
print (numpy.mean(data, axis =1))
# Do some simple visualisations
import matplotlib.pyplot
%matplotlib inline
image = matplotlib.pyplot.imshow(data)
# let's take a look at the average temperature over time
avg_temperature = numpy.mean(data, axis = 0)
avg_plot= matplotlib.pyplot.plot(avg_temperature)
###Output
_____no_output_____
###Markdown
Task: produce max and min plots of this data - conclusions
###Code
max_temperature = numpy.max(data, axis =0)
print (max_temperature)
min_temperature = numpy.min(data, axis = 0)
max_plot = matplotlib.pyplot.plot(max_temperature)
min_plot = matplotlib.pyplot.plot(min_temperature)
max_plot = matplotlib.pyplot.plot(max_temperature)
min_plot = matplotlib.pyplot.plot(min_temperature)
avg_plot= matplotlib.pyplot.plot(avg_temperature)
###Output
_____no_output_____
###Markdown
Analysing tabular data We are going to use a LIBRARY called numpy
###Code
import numpy
numpy.loadtxt(fname='data/weather-01.csv', delimiter = ',')
###Output
_____no_output_____
###Markdown
Variables
###Code
weight_kg = 55
print (weight_kg)
print ('Weight in pounds: ', weight_kg *2.2)
weight_kg = 57.5
print ('Weight in pounds: ', weight_kg *2.2)
%whos
data = numpy.loadtxt(fname='data/weather-01.csv', delimiter = ',')
print (data)
print(type(data))
%whos
# Finding out the data type
print (data.dtype)
# Finding out the shape
print (data.shape)
# This is 60 rows * 40 columns
# Getting a number out of the array
print ("First value in data: ", data [0,0])
print ("A value from a selected row and column position: ", data[30,20])
#Lets get the first 10 columns for the first 4 rows
# notation means start at X and go up to but not including Y [X:Y]
print (data[0:4, 0:10])
# can start slicing anywhere
print (data[3:8, 4:7])
#Don't need to include the upper and lower bounds; Python assumes the start or the end instead
smallchunk= data[:3,36:]
print(smallchunk)
# Arithmetic with arrays
doublesmallchunk = smallchunk *2.0
print(doublesmallchunk)
triplesmallchunk = smallchunk+doublesmallchunk
print(triplesmallchunk)
print(numpy.mean(data))
print(numpy.max(data))
print(numpy.min(data))
# Get a set of data for the first station
station_0 = data[0, :]
print (numpy.max(station_0))
# We don't need to create these 'temporary' array slices
# We can refer to what we call array axes
print(numpy.mean(data, axis = 0))
print(numpy.mean(data, axis = 1))
# axis = 0 means calculate down each column (i.e. mean of the values in a column)
# axis = 1 means calculate mean across the rows (i.e. mean of the values in a row)
import matplotlib.pyplot
%matplotlib inline
image = matplotlib.pyplot.imshow(data)
# Let's look at the average temperature over time
avg_temperature = numpy.mean(data, axis = 0)
avg_plot = matplotlib.pyplot.plot (avg_temperature)
# Plot min temperature over time
min_temperature = numpy.min(data, axis=0)
min_plot = matplotlib.pyplot.plot(min_temperature)
# plot max temperatures
max_temperature = numpy.max(data, axis =0)
max_plot = matplotlib.pyplot.plot(max_temperature)
###Output
_____no_output_____
###Markdown
analysing tabular data we are going to use a library called numpy
###Code
import numpy
numpy.loadtxt(fname='data/weather-01.csv', delimiter = ',')
###Output
_____no_output_____
###Markdown
variables
###Code
weight_kg=55
print (weight_kg)
print ('Weight in pounds: ', weight_kg*2.2)
weight_kg=57.5
print ('New weight:', weight_kg*2.2)
%whos
data = numpy.loadtxt(fname='data/weather-01.csv', delimiter = ',')
print (data)
print (type(data))
%whos
# finding out the data type
print (data.dtype)
#find out the shape
print (data.shape)
# this is 60 rows * 40 columns
#getting a single number out of the array
print ("First value in data:", data[0,0])
print ('A middle value:', data [30,20])
#get a slice out of an array
#lets get the first 10 column for the first 4 rows
print(data[0:4, 0:10])
# start at index 0 and go up to BUT NOT INCLUDING index 4
#we don't need to start slicing at 0
print (data[5:10, 7:15])
# we don't need to include the upper and lower bounds (Python automatically assumes the beginning and the end respectively)
smallchunk = data[:3, 36:]
print (smallchunk)
#arithmetic on arrays
doublesmallchunk = smallchunk*2.0
print(doublesmallchunk)
triplesmallchunk = smallchunk + doublesmallchunk
print (triplesmallchunk)
print (numpy.mean(data))
print (numpy.max(data))
print (numpy.min(data))
# do stuff down columns or across rows
#get a set of data for the first station
station_0 = data[0, :]
# everything for row 0, all the columns for row 0
print (station_0)
print (numpy.max(station_0))
#we don't need to create 'temporary' array slices
#we can refer to what we call array axes
print (numpy.mean(data, axis=0))
print (numpy.mean(data, axis=1))
# axis=0 is the mean down each column, so the mean temperature for each recording period
# axis=1 is the mean across each row, so the mean temperature for each station for all the periods
#do some simple visualisations
import matplotlib.pyplot
%matplotlib inline
image= matplotlib.pyplot.imshow(data)
#let's look at the average temp over time
avg_temperature= numpy.mean(data, axis= 0)
avg_plot = matplotlib.pyplot.plot(avg_temperature)
min_temperature= numpy.min(data, axis=0)
import numpy
min_plot=matplotlib.pyplot.plot(min_temperature)
max_temperature= numpy.max(data, axis=0)
max_plot= matplotlib.pyplot.plot (max_temperature)
###Output
_____no_output_____
###Markdown
analysing tabular data
###Code
import numpy
numpy.loadtxt
numpy.loadtxt(fname='data/weather-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
variables
###Code
weight_kg=55
print (weight_kg)
print('weight in pounds:',weight_kg*2.2)
numpy.loadtxt(fname='data/weather-01.csv',delimiter=',')
numpy.loadtxt(fname='data/weather-01.csv',delimiter=',')
numpy.loadtxt(fname='data/weather-01.csv',delimiter=',')
%whos
data=numpy.loadtxt(fname='data/weather-01.csv',delimiter=',')
%whos
%whos
print(data.dtype)
print(data.shape)
###Output
_____no_output_____
###Markdown
this is 60 by 40
###Code
print ("first value in data:",data [0,0])
print ('A middle value:',data[30,20])
###Output
_____no_output_____
###Markdown
Let's get the first 10 columns for the first 4 rows: print(data[0:4, 0:10]). Start at index 0 and go up to but not including index 4.
###Code
print (data[0:4, 0:10])
###Output
_____no_output_____
###Markdown
we don't need to start slicing at 0
###Code
print (data[5:10,7:15])
###Output
_____no_output_____
###Markdown
we don't even need to include upper and lower limits
###Code
smallchunk=data[:3,36:]
print(smallchunk)
###Output
_____no_output_____
###Markdown
arithmetic on arrays
###Code
doublesmallchunk=smallchunk*2.0
print(doublesmallchunk)
triplesmallchunk=smallchunk+doublesmallchunk
print(triplesmallchunk)
print(numpy.mean(data))
print (numpy.max(data))
print (numpy.min(data))
###Output
_____no_output_____
###Markdown
get a set of data for the first station; the ":" is shorthand for "all the columns"
###Code
station_0=data[0,:]
print(numpy.max(station_0))
###Output
_____no_output_____
###Markdown
we don't need to create 'temporary' array slices; we can refer to what we call array axes
###Code
print(numpy.mean(data, axis=0))
print(numpy.mean(data, axis=1))
###Output
_____no_output_____
###Markdown
axis=0 gets the mean down each column; axis=1 gets the mean across each row, so the mean temp for each station for all periods (see above). Now do some simple visualisations.
###Code
import matplotlib.pyplot
%matplotlib inline
image=matplotlib.pyplot.imshow(data)
###Output
_____no_output_____
###Markdown
lets look at the average tempp over time
###Code
avg_temperature=numpy.mean(data,axis=0)
avg_plot=matplotlib.pyplot.plot(avg_temperature)
import numpy
import matplotlib.pyplot
%matplotlib inline
data=numpy.loadtxt(fname='data/weather-01.csv',delimiter=',')
###Output
_____no_output_____
###Markdown
create a wide figure to hold sub plots
###Code
fig=matplotlib.pyplot.figure (figsize=(10.0,3.0))
###Output
_____no_output_____
###Markdown
create placeholders for plots
###Code
fig=matplotlib.pyplot.figure (figsize=(10.0,3.0))
subplot1=fig.add_subplot (1,3,1)
subplot2=fig.add_subplot (1,3,2)
subplot3=fig.add_subplot (1,3,3)
subplot1.set_ylabel('average')
subplot1.plot(numpy.mean(data, axis=0))
subplot2.set_ylabel('minimum')
subplot2.plot(numpy.min(data, axis=0))
subplot3.set_ylabel('maximum')
subplot3.plot(numpy.max(data, axis=0))
###Output
_____no_output_____
###Markdown
this is fine for small numbers of datasets, but what if we have hundreds or thousands? We need more automation: loops
###Code
word='notebook'
print (word[4])
###Output
_____no_output_____
###Markdown
see above; note the difference between square and normal brackets
###Code
# the colon at the end of the 'for' line and the indentation are very important
# indent is 4 spaces
for char in word:
    print (char)
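# A small aside (illustrative only): square brackets index/slice a sequence,
# round brackets call a function.
print(word[0:4])   # slicing with square brackets -> 'note'
print(len(word))   # calling a function with round brackets -> 8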
###Output
_____no_output_____
###Markdown
reading filenames get a list of all the filenames from disk
###Code
import glob
###Output
_____no_output_____
###Markdown
glob is short for "global"; it finds files whose names match a pattern
###Code
print(glob.glob('data/weather*.csv'))
###Output
_____no_output_____
###Markdown
putting it all together
###Code
filenames=sorted(glob.glob('data/weather*.csv'))
filenames=filenames[0:3]
for f in filenames:
print (f)
data=numpy.loadtxt(fname=f, delimiter=',')
#next bits need indenting
fig=matplotlib.pyplot.figure (figsize=(10.0,3.0))
subplot1=fig.add_subplot (1,3,1)
subplot2=fig.add_subplot (1,3,2)
subplot3=fig.add_subplot (1,3,3)
subplot1.set_ylabel('average')
subplot1.plot(numpy.mean(data, axis=0))
subplot2.set_ylabel('minimum')
subplot2.plot(numpy.min(data, axis=0))
subplot3.set_ylabel('maximum')
subplot3.plot(numpy.max(data, axis=0))
fig.tight_layout()
matplotlib.pyplot.show()
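# A minimal sketch (assumes fig and f from the loop above are still in scope):
# save the last figure to a PNG named after its CSV file.
outname = f.replace('.csv', '.png')
fig.savefig(outname, dpi=150)
print('saved', outname)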
num=37
if num>100:
print('greater')
else:
print('not greater')
print ('done')
num=107
if num>100:
print('greater')
else:
print('not greater')
print ('done')
###Output
_____no_output_____
###Markdown
didnt print "done" due to break in indentation sequence
###Code
num=-3
if num>0:
print (num, "is positive")
elif num ==0:
print (num, "is zero")
else:
print (num, "is negative")
###Output
_____no_output_____
###Markdown
elif equals else if; always good to finish a chain with an else
###Code
filenames=sorted(glob.glob('data/weather*.csv'))
filenames=sorted(glob.glob('data/weather*.csv'))
filenames=filenames[0:3]
for f in filenames:
print (f)
data=numpy.loadtxt(fname=f, delimiter=',')  # note: a stray '== 0' typed here originally turned data into a boolean array
if numpy.max (data, axis=0)[0] ==0 and numpy.max (data, axis=0)[20] ==20:
print ('suspicious looking maxima')
elif numpy.sum(numpy.min(data, axis=0)) ==0:
print ('minimum adds to zero')
else:
print ('data looks ok')
#next bits need indenting
fig=matplotlib.pyplot.figure (figsize=(10.0,3.0))
subplot1=fig.add_subplot (1,3,1)
subplot2=fig.add_subplot (1,3,2)
subplot3=fig.add_subplot (1,3,3)
subplot1.set_ylabel('average')
subplot1.plot(numpy.mean(data, axis=0))
subplot2.set_ylabel('minimum')
subplot2.plot(numpy.min(data, axis=0))
subplot3.set_ylabel('maximum')
subplot3.plot(numpy.max(data, axis=0))
fig.tight_layout()
matplotlib.pyplot.show()
###Output
_____no_output_____
###Markdown
something went wrong with the above: the loadtxt line originally had a stray '== 0' appended, which made data a boolean array and broke the plots
###Code
def fahr_to_kelvin(temp):
return((temp-32)*(5/9)+ 273.15)
print ('freezing point of water:', fahr_to_kelvin(32))
print ('boiling point of water:', fahr_to_kelvin(212))
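# Hedged extension (not part of the original cell): functions can be composed.
def kelvin_to_celsius(temp_k):
    return temp_k - 273.15

def fahr_to_celsius(temp_f):
    return kelvin_to_celsius(fahr_to_kelvin(temp_f))

print('freezing point of water in Celsius:', fahr_to_celsius(32))  # ~0.0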
###Output
_____no_output_____
###Markdown
using functions
###Code
def analyse (filename):
    data=numpy.loadtxt(fname=filename, delimiter=',')
    # ...unfinished here; the full version appears further down
###Output
_____no_output_____
###Markdown
unfinished
###Code
def detect_problems (filename):
data=numpy.loadtxt(fname=filename, delimiter=',')
if numpy.max (data, axis=0)[0] ==0 and numpy.max (data, axis=0)[20] ==20:
print ('suspicious looking maxima')
elif numpy.sum(numpy.min(data, axis=0)) ==0:
print ('minimum adds to zero')
else:
print ('data looks ok')
for f in filenames [0:5]:
print (f)
analyse (f)
detect_problems (f)
def analyse (filename):
data=numpy.loadtxt(fname=filename,delimiter=',')
fig=matplotlib.pyplot.figure (figsize=(10.0,3.0))
subplot1=fig.add_subplot (1,3,1)
subplot2=fig.add_subplot (1,3,2)
subplot3=fig.add_subplot (1,3,3)
subplot1.set_ylabel('average')
subplot1.plot(numpy.mean(data, axis=0))
subplot2.set_ylabel('minimum')
subplot2.plot(numpy.min(data, axis=0))
subplot3.set_ylabel('maximum')
subplot3.plot(numpy.max(data, axis=0))
fig.tight_layout()
matplotlib.pyplot.show()
for f in filenames [0:5]:
print (f)
analyse (f)
detect_problems (f)
help(numpy.loadtxt)
help(detect_problems)
"""some of our temperature files haave problems, check for these
this function reads a file and reports on odd looking maxima and minimia that add to zero
the function does not return any data
"""
def detect_problems (filename):
data=numpy.loadtxt(fname=filename, delimiter=',')
if numpy.max (data, axis=0)[0] ==0 and numpy.max (data, axis=0)[20] ==20:
print ('suspicious looking maxima')
elif numpy.sum(numpy.min(data, axis=0)) ==0:
print ('minimum adds to zero')
else:
print ('data looks ok')
def analyse (filename):
    """ this function analyses a dataset and outputs plots for max, min and average
    """
    data=numpy.loadtxt(fname=filename,delimiter=',')
fig=matplotlib.pyplot.figure (figsize=(10.0,3.0))
subplot1=fig.add_subplot (1,3,1)
subplot2=fig.add_subplot (1,3,2)
subplot3=fig.add_subplot (1,3,3)
subplot1.set_ylabel('average')
subplot1.plot(numpy.mean(data, axis=0))
subplot2.set_ylabel('minimum')
subplot2.plot(numpy.min(data, axis=0))
subplot3.set_ylabel('maximum')
subplot3.plot(numpy.max(data, axis=0))
fig.tight_layout()
matplotlib.pyplot.show()
###Output
_____no_output_____
###Markdown
Analysing tabular data We are going to use a LIBRARY called numpy
###Code
import numpy
numpy.loadtxt(fname='data/weather-01.csv', delimiter = ',')
###Output
_____no_output_____
###Markdown
Variables
###Code
weight_kg = 55
print (weight_kg)
print ('Weight in pounds: ', weight_kg * 2.2)
weight_kg = 57.5
print ('New weight: ', weight_kg * 2.2)
%whos
data = numpy.loadtxt(fname='data/weather-01.csv', delimiter = ',')
print (data)
print (type(data))
%whos
# Finding out the data type
print (data.dtype)
# Find out the shape
print (data.shape)
# This is 60 rows * 40 columns
# Getting a single number out of the array
print ("First value in data: ", data [0, 0])
print ('A middle value: ', data[30, 20])
# Lets get the first 10 columns for the first 4 rows
print (data[0:4, 0:10])
# Start at index 0 and go up to BUT NOT INCLUDING index 4
# We don't need to start slicing at 0
print (data [5:10, 7:15])
# We don't even need to include the UPPER and LOWER bounds
smallchunk = data[:3, 36:]
print(smallchunk)
# Arithmetic on arrays
doublesmallchunk = smallchunk * 2.0
print (doublesmallchunk)
triplesmallchunk = smallchunk + doublesmallchunk
print(triplesmallchunk)
print (numpy.mean(data))
print(numpy.max(data))
print(numpy.min(data))
# Get a set of data for the first station
station_0 = data [0, :]
print (numpy.max(station_0))
# We don't need to create 'temporary' array slices
# We can refer to what we call array axes
print (numpy.mean(data, axis = 0))
print (numpy.mean(data, axis = 1))
print(data)
print(data[:, 0])
print(data[:, 1])
print(data[0, :])
# axis = 0 gets the mean DOWN each column, so the mean temperature for each recording period
print (numpy.mean(data, axis = 0))
# axis = 1 gets the mean ACROSS each row, so the mean temperature for each station for all the periods
print (numpy.mean(data, axis = 1))
# Do some simple visualisations
import matplotlib.pyplot
%matplotlib inline
image = matplotlib.pyplot.imshow(data)
# Let's look at the average temperature over time
avg_temperature = numpy.mean(data, axis = 0)
avg_plot = matplotlib.pyplot.plot(avg_temperature)
###Output
_____no_output_____
###Markdown
Task: - Produce maximum and minimum plots of this data - What do you think?
###Code
min_temperature = numpy.min(data, axis = 0)
import matplotlib.pyplot
import numpy
min_temperature = numpy.min(data, axis = 0)
numpy.loadtxt(fname='data/weather-01.csv', delimiter = ',')
min_temperature = numpy.min(data, axis = 0)
data = numpy.loadtxt(fname='data/weather-01.csv', delimiter = ',')
min_temperature = numpy.min(data, axis = 0)
max_temperature = numpy.max(data, axis = 0)
min_plot = matplotlib.pyplot.plot(min_temperature)
print (min_plot)
%matplotlib inline
min_plot = matplotlib.pyplot.plot(min_temperature)
max_plot = matplotlib.pyplot.plot(max_temperature)
print(max_temperature)
avg_temperature = numpy.mean(data, axis = 0)
matplotlib.pyplot.plot(max_temperature)
matplotlib.pyplot.plot(avg_temperature)
matplotlib.pyplot.plot(min_temperature)
max_plot
avg_plot
min_plot
avg_plot = matplotlib.pyplot.plot(avg_temperature)
max_plot
avg_plot
min_plot
max_plot
max_plot = matplotlib.pyplot.plot(max_temperature)
min_plot = matplotlib.pyplot.plot(min_temperature)
###Output
_____no_output_____
###Markdown
Analysing tabular data we are going to use a LIBRARY called numpy
###Code
import numpy
numpy.loadtxt(fname='data/weather-01.csv', delimiter = ',')
###Output
_____no_output_____
###Markdown
Variables
###Code
Weight_kg = 55
print (Weight_kg)
print('Weight in pounds:', Weight_kg * 2.2)
Weight_kg = 57.5
print ('New weight: ', Weight_kg * 2.2)
%whos
data = numpy.loadtxt(fname='data/weather-01.csv', delimiter = ',')
print (data)
print (type(data))
%whos
# Finding out the data type
print (data.dtype)
# Find out the shape
print (data.shape)
# This is 60 rows * 40 columns
# Getting a single number out of the array
print ("First value in data: ", data [0, 0])
###Output
First value in data: 0.0
###Markdown
print ('A middle value: ', data[30, 30])
###Code
print ('A middle value: ', data[30, 20])
# Lets get the first 10 columns for the first 4 rows
print (data[0:4, 0:10])
# Start at index 0 and go up to BUT NOT INCLUDING index 4
# We don't need to start slicing at 0
print (data [5:10, 7:15])
# We don't even need to include the UPPER and LOWER bounds
smallchunk = data [:3, 36:]
print (smallchunk)
# Arithmetic on arrays
doublesmallchunk = smallchunk * 2.0
print (doublesmallchunk)
triplesmallchunk = smallchunk + doublesmallchunk
print (triplesmallchunk)
print (numpy.mean(data))
print (numpy.transpose(data))
print (numpy.max(data))
print (numpy.min(data))
# Get a set of data for the first station
station_0 = data [0, :]
###Output
_____no_output_____
###Markdown
###Code
print (numpy.max(station_0))
# We don't need to create 'temporary' array slices
# We can refer to what we call array axes
# axis = 0 gets the mean DOWN each column, so the mean temperature for each recording period
print (numpy.mean(data, axis = 0))
# axis = 1 gets the mean ACROSS each row, so the mean temperature for each station across all periods
print (numpy.mean(data, axis = 1))
# do some simple visualisations
import matplotlib.pyplot
%matplotlib inline
image = matplotlib.pyplot.imshow(data)
# Let's look at the average temperature over time
avg_temperature = numpy.mean(data, axis = 0)
avg_plot = matplotlib.pyplot.plot(avg_temperature)
###Output
_____no_output_____
###Markdown
Tasks * Produce maximum and minimum plots of this data * What do you think?
###Code
max_temprature = numpy.max(data, axis = 0)
min_temprature = numpy.min(data, axis = 0)
max_plot = matplotlib.pyplot.plot(max_temprature)
min_plot = matplotlib.pyplot.plot(min_temprature)
min_p = numpy.min(data, axis = 0)
min_plot = matplotlib.pyplot.plot(min_p)
###Output
_____no_output_____
###Markdown
Analysing Tabular Data. We are going to use a LIBRARY called numpy
###Code
import numpy
numpy.loadtxt(fname='data/weather-01.csv', delimiter = ',')
###Output
_____no_output_____
###Markdown
Variables
###Code
weight_kg = 55
print (weight_kg)
print ('Weight in pounds:', weight_kg * 2.2)
weight_kg = 57.5
print ('New weight:', weight_kg * 2.2)
%whos
data = numpy.loadtxt(fname='data/weather-01.csv', delimiter = ',')
print (data)
print (type (data))
%whos
# Finding out the data type
print (data.dtype)
# Find out the shape
print (data.shape)
# This is 60 rows * 40 columns
# Getting a single number out of the array
print ("First value in data:", data [0,0])
print ('A middle value:', data [30,20])
# Lets get the first 10 columns for the first 4 rows
print (data [0:4,0:10])
# Start at index 0 and go up to But Not including index 4
# We don't need to start slicing at 0
print (data [5:10, 7:15])
# We don't need to include Upper and Lower bounds
smallchunk = data [:3, 36:]
print (smallchunk)
# Arithmetic on Arrays
doublesmallchunk = smallchunk * 2.0
print (doublesmallchunk)
triplesmallchunk = smallchunk + doublesmallchunk
print (triplesmallchunk)
print (numpy.mean(data))
print (numpy.max(data))
print (numpy.min(data))
# Get a set of data for the first station
station_0 = data [0, :]
print (numpy.max(station_0))
# We dont need to create this 'temporary' array slices
# We can refer to what we call array axes
# axis = 0 gets the mean Down the column, so the mean temperature
# for each recording period
print (numpy.mean(data, axis = 0))
# axis = 1 gets the mean across the row, so the mean temperature
# for each station across all periods
print (numpy.mean(data, axis = 1))
# Do some simple Visualisations
import matplotlib.pyplot
%matplotlib inline
image = matplotlib.pyplot.imshow(data)
%whos
# Let's look at the average temperature over time
avg_temperature = numpy.mean(data, axis = 0)
avg_plot = matplotlib.pyplot.plot(avg_temperature)
# Task:
# Produce maximum and minimum plots of this data
# What do you think
avg_temperature_max = numpy.max(data, axis = 0)
avg_plot_max = matplotlib.pyplot.plot(avg_temperature_max)
avg_temperature_min = numpy.min(data, axis = 0)
avg_plot_min = matplotlib.pyplot.plot(avg_temperature_min)
avg_combine_plot = matplotlib.pyplot.plot(avg_temperature_min, avg_temperature_max)
###Output
_____no_output_____
###Markdown
Analysing tabular data. We are going to use a LIBRARY called numpy
###Code
import numpy
numpy.loadtxt(fname='data/weather-01.csv', delimiter = ',')
numpy.loadtxt(fname='data/weather-01.csv', delimiter = ',')
###Output
_____no_output_____
###Markdown
Variables
###Code
weight_kg = 55
print (weight_kg)
print ('Weight in pounds: ', weight_kg * 2.2)
weight_kg = 57.5
print ('New weight: ', weight_kg * 2.2)
%whos
data = numpy.loadtxt(fname='data/weather-01.csv', delimiter = ',')
print (data)
print (type(data))
%whos
# Finding out the data type
print (data.dtype)
# Find out the shape
print (data.shape)
# This is 60 rows * 40 columns
# Getting a single number out of the array
print ("First value in data: ", data [0, 0])
print ('A middle value: ', data[30, 20])
# Lets get the first 10 columns for the first 4 rows
print (data[0:4, 0:10])
# Start at index 0 and go up to BUT NOT INCLUDING index 4
# We don't need to start slicing at 0
print (data[5:10, 7:15])
# We don't even need to include the UPPER and LOWER bounds
smallchunk = data [:3, 36:]
print (smallchunk)
# Arithmetic on arrays
doublesmallchunk = smallchunk * 2.0
print (doublesmallchunk)
triplesmallchunk = smallchunk + doublesmallchunk
print (triplesmallchunk)
print (numpy.mean(data))
print (numpy.min(data))
# Get a set of data for the first station
station_0 = data [0, :]
print (numpy.max(station_0))
# We don't need to create 'temporary' array slices
# We can refer to what we call array axes
# axis = 0 gets the mean DOWN each column, so the mean temperature
# for each recording period
print (numpy.mean(data, axis = 0))
# axis = 1 gets the mean ACROSS each row, so the mean temperature
# for each station for all periods
print (numpy.mean(data, axis = 1))
# Do some simple visualisations
import matplotlib.pyplot
%matplotlib inline
image = matplotlib.pyplot.imshow(data)
# Let's take a look at the average temperature over time
avg_temperature = numpy.mean(data, axis = 0)
avg_plot = matplotlib.pyplot.plot(avg_temperature)
avg_plot = matplotlib.pyplot.plot(avg_temperature)
max_temperature = numpy.max(data, axis = 0)
min_temperature = numpy.min(data, axis = 0)
avg_plot = matplotlib.pyplot.plot(avg_temperature)
max_plot = matplotlib.pyplot.plot(max_temperature)
min_plot = matplotlib.pyplot.plot(min_temperature)
###Output
_____no_output_____
###Markdown
Analysing tabular data We are going to use a LIBRARY called numpy
###Code
import numpy
numpy.loadtxt(fname='Data/weather-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
Variables
###Code
weight_kg =55
print (weight_kg)
print('weight in pounds:', weight_kg * 2.2)
weight_kg = 57.5
print ('New weight:', weight_kg * 2.2)
%whos
data=numpy.loadtxt(fname='Data/weather-01.csv', delimiter=',')
print (data)
print (type(data))
%whos
# Finding out the data type
print (data.dtype)
# Find out the shape
print (data.shape)
# This is 60 rows by 40 columns
# Getting a single number out of the array
print ("First value in data:", data [0,0])
# First element is 0 as we are counting the number of positions from the start, ie. the first is 0 from the start and
# the last is n-1
print ('A middle value:', data[30,20])
#just named a new variable 'A middle value' and said that that variable is data from the 'data' array
#First 10 columns for the first 4 rows, taking a section of the array a slice
print (data[0:4,0:10])
#start at index 0 and go upto but not including 4, then do columns starting at 0 but not including 10,
#you end up with 4 rows 10 columns
# don't have to start a slice at 0
print (data[5:10, 7:15])
#Number of columns/rows = larger number minus smaller number
# we don't even need to include the upper or lower bounds, assumes first column/row or last column/row depending on which
#you miss out
smallchunk = data [:3, 36:]
print (smallchunk)
#starting at 0 going to column 3 and starting at row 36 going to the end
#arithmetic on arrays
doublessmallchunk = smallchunk * 2.0
# times everything in smallchunk by 2.0
print (doublessmallchunk)
#tab auto completes things
triplesmallchunk = smallchunk + doublessmallchunk
# adding variables, same shape but with different values, same as multiplying smallchunk by 3
print (triplesmallchunk)
print (numpy.mean(data))
#print just tells you what a thing is, it doesn't create it as a new variable
print (numpy.max(data))
print (numpy.min(data))
# get a set of data for the first weather station
station_0 = data [0, :]
# getting first row for all columns
print (station_0)
print (numpy.max(station_0))
# we don't need to create 'temporary' array slices
# we can refer to what we call array axes
# e.g.
print (numpy.mean(data, axis = 0))
#
print (numpy.mean(data, axis = 1))
# axes are dimensions so axes=0 are the columns and the mean of axes=0 gives you the mean of each column, mean t for each time
# axis=1 are the rows so mean of axis=1 is the mean of each row- mean T of each station
# Visualisations
# matplotlib gives you matlab like plotting functions
import matplotlib.pyplot
# matplotlib is massive so just import small parts
%matplotlib inline
# plots appear in same window
image = matplotlib.pyplot.imshow(data)
# heat map. Don't know what it represents tho
#look at average T over time
avg_Temp = numpy.mean(data, axis=0)
avg_plot = matplotlib.pyplot.plot(avg_Temp)
min_Temp = numpy.min(data, axis=0)
max_Temp = numpy.max(data, axis=0)
min_plot = matplotlib.pyplot.plot(min_Temp)
max_plot = matplotlib.pyplot.plot(max_Temp)
max_plot = matplotlib.pyplot.plot(max_Temp)
min_plot = matplotlib.pyplot.plot(min_Temp)
#plots on one graph
###Output
_____no_output_____ |
001-Jupyter/001-Tutorials/002-IPython-Cookbook/chapter07_stats/07_pymc.ipynb | ###Markdown
7.7. Fitting a Bayesian model by sampling from a posterior distribution with a Markov Chain Monte Carlo method
###Code
import numpy as np
import pandas as pd
import pymc3 as pm
import matplotlib.pyplot as plt
%matplotlib inline
# www.ncdc.noaa.gov/ibtracs/index.php?name=wmo-data
df = pd.read_csv('https://github.com/ipython-books/'
'cookbook-2nd-data/blob/master/'
'Allstorms.ibtracs_wmo.v03r05.csv?'
'raw=true',
delim_whitespace=False)
cnt = df[df['Basin'] == ' NA'].groupby(
'Season')['Serial_Num'].nunique()
# The years from 1851 to 2012.
years = cnt.index
y0, y1 = years[0], years[-1]
arr = cnt.values
# Plot the annual number of storms.
fig, ax = plt.subplots(1, 1, figsize=(8, 4))
ax.plot(years, arr, '-o')
ax.set_xlim(y0, y1)
ax.set_xlabel("Year")
ax.set_ylabel("Number of storms")
# We define our model.
with pm.Model() as model:
# We define our three variables.
switchpoint = pm.DiscreteUniform(
'switchpoint', lower=y0, upper=y1)
early_rate = pm.Exponential('early_rate', 1)
late_rate = pm.Exponential('late_rate', 1)
# The rate of the Poisson process is a piecewise
# constant function.
rate = pm.math.switch(switchpoint >= years,
early_rate, late_rate)
# The annual number of storms per year follows
# a Poisson distribution.
storms = pm.Poisson('storms', rate, observed=arr)
with model:
trace = pm.sample(10000)
pm.traceplot(trace)
s = trace['switchpoint'].mean()
em = trace['early_rate'].mean()
lm = trace['late_rate'].mean()
s, em, lm
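# Hedged addition (not in the original recipe): summarize the posterior of the
# switchpoint with a 95% credible interval from the raw trace samples.
sp_samples = trace['switchpoint']
lo_b, hi_b = np.percentile(sp_samples, [2.5, 97.5])
print(f"switchpoint mean={sp_samples.mean():.1f}, 95% CI=({lo_b:.0f}, {hi_b:.0f})")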
fig, ax = plt.subplots(1, 1, figsize=(8, 4))
ax.plot(years, arr, '-o')
ax.axvline(s, color='k', ls='--')
ax.plot([y0, s], [em, em], '-', lw=3)
ax.plot([s, y1], [lm, lm], '-', lw=3)
ax.set_xlim(y0, y1)
ax.set_xlabel("Year")
ax.set_ylabel("Number of storms")
###Output
_____no_output_____ |
notebooks/Convex_optimization_model(only_with_content_embeddings).ipynb | ###Markdown
Solve the estimation problem using the supervised dataset from the Jeopardy-like logs (only with content embeddings). Goals: 1. Split the data into test and train. 2. Formulate the convex optimization model. 3. Compute train and test error. Last update: 04 Dec 2019. Imports
###Code
from __future__ import division, print_function, absolute_import, unicode_literals
import cvxpy as cp
import scipy as sp
import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import seaborn as sns
from collections import defaultdict
import sys
from sklearn.model_selection import KFold
from sklearn.model_selection import train_test_split
sys.path.insert(0, '../src/')
%matplotlib inline
import utils
from mytimer import Timer
###Output
_____no_output_____
###Markdown
Parameters
###Code
data_fpath = '/home/omid/Datasets/Jeopardy/supervised_data.pk'
lambdaa = 1
test_fraction = 0.2
runs = 30
###Output
_____no_output_____
###Markdown
Helper functions
###Code
def compute_matrix_err(true_matrix: np.matrix, pred_matrix: np.matrix, type_str: str = 'frob_norm') -> float:
if type_str == 'frob_norm':
frob_norm_of_difference = np.linalg.norm(true_matrix - pred_matrix)
err = frob_norm_of_difference / np.linalg.norm(true_matrix)
return err
elif type_str == 'corr':
# (r, p) = sp.stats.spearmanr(np.array(true_matrix.flatten())[0], np.array(pred_matrix.flatten())[0])
(r, p) = sp.stats.pearsonr(np.array(true_matrix.flatten())[0], np.array(pred_matrix.flatten())[0])
if p > 0.05:
r = 0
return r
else:
raise ValueError('Wrong type_str was given.')
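# Quick self-check (illustrative values only): the relative Frobenius error of a
# matrix against itself should be 0, and against a slightly perturbed copy, small.
_true = np.matrix(np.eye(4) * 0.25)
print(compute_matrix_err(_true, _true))         # expect 0.0
print(compute_matrix_err(_true, _true + 0.01))  # small positive value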
###Output
_____no_output_____
###Markdown
Loading the data
###Code
data = utils.load_it(data_fpath)
print(len(data['X']))
mats = []
for i in range(len(data['y'])):
mats.append(data['y'][i]['influence_matrix'] / 100)
np.mean(mats, axis=0)
np.std(mats, axis=0)
###Output
_____no_output_____
###Markdown
Formulating the convex optimization problem Hyperparameter tuning
###Code
with Timer():
lambdaas = [0, 0.01, 0.05, 0.1, 0.2, 0.3, 0.5, 0.9, 1, 2, 5, 10, 100, 1000, 10000]
model_errs = defaultdict(list)
for lambdaa in lambdaas:
print('Lambda: ', lambdaa, '...')
for run in range(4):
X_train, X_test, y_train, y_test = train_test_split(
data['X'], data['y'], test_size=test_fraction)
# Solving the optimization problem.
W = cp.Variable(768, 4)
B = cp.Variable(4, 4)
constraints = []
losses = 0
for index in range(len(X_train)):
element = X_train[index]
influence_matrix = y_train[index]['influence_matrix'] / 100
C = element['content_embedding_matrix']
pred_influence_matrix = C * W + B
loss = pred_influence_matrix - influence_matrix
losses += cp.sum_squares(loss)
constraints += [pred_influence_matrix >= 0]
constraints += [cp.sum_entries(pred_influence_matrix, axis=1) == 1]
regluarization = cp.norm1(W) + cp.norm1(B)
objective = cp.Minimize(losses + lambdaa * regluarization)
prob = cp.Problem(objective, constraints)
result = prob.solve(solver=cp.MOSEK)
model_err = 0
for index in range(len(X_test)):
element = X_test[index]
influence_matrix = y_test[index]['influence_matrix'] / 100
# Optimization model prediction:
C = element['content_embedding_matrix']
predicted_influence_matrix = C * W.value + B.value
model_err += compute_matrix_err(
influence_matrix, predicted_influence_matrix)
model_err /= len(X_test)
model_errs[lambdaa].append(model_err)
errz = []
for lambdaa in lambdaas:
print(lambdaa, ': ', np.mean(model_errs[lambdaa]), '+-', np.std(model_errs[lambdaa]))
errz.append(np.mean(model_errs[lambdaa]))
plt.plot(errz);
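# Hedged extra (assumes model_errs/lambdaas from above): show the spread across runs,
# not just the mean, when comparing regularization strengths.
plt.figure()
means = [np.mean(model_errs[l]) for l in lambdaas]
stds = [np.std(model_errs[l]) for l in lambdaas]
plt.errorbar(list(range(len(lambdaas))), means, yerr=stds, fmt='-o')
plt.xticks(list(range(len(lambdaas))), lambdaas, rotation=45)
plt.xlabel('lambda')
plt.ylabel('mean test error');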
###Output
_____no_output_____
###Markdown
Runs
###Code
lambdaa = 100
model_errs = []
random_errs = []
uniform_errs = []
for run in range(runs):
print('Run', run, '...')
X_train, X_test, y_train, y_test = train_test_split(
data['X'], data['y'], test_size=test_fraction)
# Solving the optimization problem.
with Timer():
W = cp.Variable(768, 4)
B = cp.Variable(4, 4)
constraints = []
losses = 0
for index in range(len(X_train)):
element = X_train[index]
influence_matrix = y_train[index]['influence_matrix'] / 100
C = element['content_embedding_matrix']
pred_influence_matrix = C * W + B
loss = pred_influence_matrix - influence_matrix
losses += cp.sum_squares(loss)
constraints += [pred_influence_matrix >= 0]
constraints += [cp.sum_entries(pred_influence_matrix, axis=1) == 1]
regluarization = cp.norm1(W) + cp.norm1(B)
objective = cp.Minimize(losses + lambdaa * regluarization)
prob = cp.Problem(objective, constraints)
result = prob.solve(solver=cp.MOSEK)
print('It was {} and result was {}'.format(prob.status, result))
model_err = 0
random_err = 0
uniform_err = 0
for index in range(len(X_test)):
element = X_test[index]
influence_matrix = y_test[index]['influence_matrix'] / 100
# Random model prediction:
pred_random_influence_matrix = np.matrix(utils.make_matrix_row_stochastic(
np.random.rand(4, 4)))
random_err += compute_matrix_err(
influence_matrix, pred_random_influence_matrix)
# Uniform prediction:
pred_uniform_influence_matrix = np.matrix(np.ones((4, 4)) * 0.25)
uniform_err += compute_matrix_err(
influence_matrix, pred_uniform_influence_matrix)
# Optimization model prediction:
C = element['content_embedding_matrix']
predicted_influence_matrix = C * W.value + B.value
model_err += compute_matrix_err(
influence_matrix, predicted_influence_matrix)
# err += frob_norm_of_difference
model_err /= len(X_test)
random_err /= len(X_test)
uniform_err /= len(X_test)
model_errs.append(model_err)
random_errs.append(random_err)
uniform_errs.append(uniform_err)
plt.hist(model_errs)
# plt.hist(random_errs)
plt.hist(uniform_errs)
# plt.legend(['model', 'random', 'uniform']);
plt.legend(['model', 'uniform'])
print('random: {} +- {}'.format(np.mean(random_errs), np.std(random_errs)))
print('uniform: {} +- {}'.format(np.mean(uniform_errs), np.std(uniform_errs)))
print('model: {} +- {}'.format(np.mean(model_errs), np.std(model_errs)));
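# Hedged follow-up (illustrative, assumes scipy.stats is available): a paired t-test
# between the model and the uniform baseline across the runs.
from scipy import stats
t_stat, p_val = stats.ttest_rel(model_errs, uniform_errs)
print('paired t-test: t={:.3f}, p={:.4f}'.format(t_stat, p_val))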
###Output
random: 0.6344122315859531 +- 0.01840255448912705
uniform: 0.3452197479415015 +- 0.01875661696853107
model: 0.348088690284147 +- 0.015648935278394865
|
Training_Notebook.ipynb | ###Markdown
Pix2Pix Training Notebook Installing necessary libraries
###Code
!pip install -q albumentations==0.4.6 # Albumentations for data augumentation
!pip install -q opendatasets # To download datasets
###Output
[K |████████████████████████████████| 117 kB 10.1 MB/s
[K |████████████████████████████████| 948 kB 46.1 MB/s
[?25h Building wheel for albumentations (setup.py) ... [?25l[?25hdone
###Markdown
Importing necessary libraries- You should be able to see **Successfully imported all libraries**
###Code
try:
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, Dataset
import tqdm as tqdm
from torchvision.utils import save_image, make_grid
import albumentations as A
from albumentations.pytorch import ToTensorV2
import os
import numpy as np
from PIL import Image # Image reading
from torchvision import datasets
import matplotlib.pyplot as plt # Image display
import opendatasets as od # dataset download
%matplotlib inline
import pandas as pd # for creating Loss dataframe
print("Successfully imported all libraries")
except:
print("Errors in importing libraries")
###Output
Successfully imported all libraries
###Markdown
Cloning git repo for model functions- I have all the model and additional function classes stored in my github
###Code
!git clone https://github.com/ummadiviany/Pix2Pix
from Pix2Pix.training_notebooks.generator_model import Generator
from Pix2Pix.training_notebooks.discriminator_model import Discriminator
from Pix2Pix.training_notebooks.dataset import MapDataset
from Pix2Pix.training_notebooks.additional_functions import test_on_val_data
###Output
_____no_output_____
###Markdown
Datasets download - Attention NeededUse the below kaggle usename and key for dataset download1. Below code cell prompts for kaggle username, copy the username from below and paste and hit ⌨Enter key.2. Again prompts for kaggle secure key, copy the key from below and paste and hit ⌨Enter key.3. It will take about ~2min⏲ to download the datasets- username ▶ **iamvinayummadi** - key: ▶ **78f6cee94760fd02415c9024cba10173**
###Code
od.download('https://www.kaggle.com/vikramtiwari/pix2pix-dataset')
###Output
Please provide your Kaggle credentials to download this dataset. Learn more: http://bit.ly/kaggle-creds
Your Kaggle username: iamvinayummadi
Your Kaggle Key: ··········
Downloading pix2pix-dataset.zip to ./pix2pix-dataset
###Markdown
Setting up hyperparameters: 1. Change **NUM_EPOCHS = 2** if needed. 2. Change **BATCH_SIZE = 32** if needed.
###Code
NUM_EPOCHS = 2
loss_df = pd.DataFrame(columns=['D_Loss','G_Loss'])
LEARNING_RATE = 3e-4
BATCH_SIZE = 32
NUM_WORKERS = 2
IMAGE_SIZE = 256
CHANNELS_IMG = 3
L1_LAMBDA = 100
LAMBDA_GP = 10
TRAIN_DIR = "pix2pix-dataset/maps/maps/train"
VAL_DIR = "pix2pix-dataset/maps/maps/val"
DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
print('Device :',DEVICE)
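# Hedged addition (not in the original notebook): seed torch for more repeatable runs.
torch.manual_seed(42)
if DEVICE == 'cuda':
    torch.cuda.manual_seed_all(42)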
###Output
_____no_output_____
###Markdown
Loading training and validation data
###Code
train_dataset = MapDataset(root_dir= TRAIN_DIR, input_size=600,direction=0)
train_loader = DataLoader(train_dataset,batch_size= BATCH_SIZE,shuffle=True,num_workers= NUM_WORKERS)
val_dataset = MapDataset(root_dir= VAL_DIR, input_size=600,direction=0)
validation_loader = DataLoader(val_dataset,batch_size=4,shuffle=True)
###Output
_____no_output_____
###Markdown
Model instances, optimizers, learning-rate schedulers, and loss functions: 1. Adam optimizer (lr = 3e-4, betas = (0.5, 0.999)) with stepwise learning-rate decay; the learning rate decays by a factor of 10 every 20 epochs. 2. BCE loss for the Discriminator and BCE + L1 loss for the Generator.
###Code
disc = Discriminator(in_channels=3).to( DEVICE)
gen_model = Generator(in_channels=3,features=64).to( DEVICE)
opt_disc = optim.Adam(disc.parameters(),lr= LEARNING_RATE,betas=(0.5,0.999))
opt_gen = optim.Adam(gen_model.parameters(),lr= LEARNING_RATE,betas=(0.5,0.999))
scheduler_disc = optim.lr_scheduler.StepLR(opt_disc, step_size=20, gamma=0.1)
scheduler_gen = optim.lr_scheduler.StepLR(opt_gen, step_size=20, gamma=0.1)
BCE = nn.BCEWithLogitsLoss()
L1_LOSS = nn.L1Loss()
###Output
_____no_output_____
###Markdown
Training loop: * Prints Epoch, Batch, Discriminator Loss, Generator Loss. * Saves an image📺 with name format input_label_gen_.png for visualization; please check that image📺
###Code
for epoch in range( NUM_EPOCHS):
print(f"Epoch[{epoch}/{NUM_EPOCHS}], Learning Rate = {opt_disc.param_groups[0]['lr']}") # printing learning rate
for idx,(inputs,outputs) in enumerate(train_loader): #enumerating thorugh train-dataset
inputs,outputs=inputs.to( DEVICE), outputs.to( DEVICE) # sending to GPU
#Train Discriminator
outputs_fake = gen_model(inputs) #Generating translated images
D_real = disc(inputs,outputs) # Discriminator call on inputs and outputs ones
D_real_loss = BCE(D_real,torch.ones_like(D_real)) # Calculates loss value
D_fake = disc(inputs,outputs_fake.detach()) # Discriminator call on inputs and genrated ones
D_fake_loss = BCE(D_fake,torch.zeros_like(D_fake)) # Calculates loss value
D_loss = (D_real_loss+D_fake_loss)/2 # Aggeregate loss
opt_disc.zero_grad() # clearing optimizer gradients
D_loss.backward() # Backward function call
opt_disc.step() # Taking one optimizer step
# Train Generator
D_fake = disc(inputs,outputs_fake) # Discriminator call on inputs and genrated ones
G_fake_loss = BCE(D_fake,torch.ones_like(D_fake)) # Calculates loss value
L1 = L1_LOSS(outputs_fake,outputs)* L1_LAMBDA # Calculates loss value
G_loss = G_fake_loss+L1 # Generator loss
opt_gen.zero_grad() # clearing optimizer gradients
G_loss.backward() # Backward function call
opt_gen.step() # Taking one optimizer step
loss_df.loc[len(loss_df)] = [D_loss.mean().item(),G_loss.mean().item()] # save loss value in dataframe row
loss_df.to_csv('losses.csv',index=False) # write datafram file to disk
print(f"Epoch [{epoch+1}/{NUM_EPOCHS}] Batch [{idx+1}/{len(train_loader)}] PatchGAN_Loss : {D_loss.mean().item():.4f} Generator_Loss : {G_loss.mean().item():.4f}")
test_on_val_data(epoch, "/content/", gen_model, validation_loader, DEVICE)
print('See the generated image at /content/input_label_gen_.png')
# Learning rate update with LR Scheduler
scheduler_disc.step() # take one scheduler step
scheduler_gen.step() # take one scheduler step
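# Hedged addition (assumed filenames, not from the original notebook): persist the
# trained weights so the models can be reloaded later without retraining.
torch.save(gen_model.state_dict(), 'generator_final.pth')
torch.save(disc.state_dict(), 'discriminator_final.pth')
print('Saved generator_final.pth and discriminator_final.pth')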
###Output
_____no_output_____
###Markdown
Visualising results: Let's👀see how our results look at the second epoch. The network needs to train for more than 250 epochs to get better results.
###Code
def visualize(image):
img = Image.open(image)
plt.figure(figsize=(15,20))
plt.title('1st Row = Input Image, 2nd Row = Target Image, 3rd Row = Translated Image')
plt.axis('off')
plt.imshow(img)
plt.show()
visualize('input_label_gen_2.png')
###Output
_____no_output_____
###Markdown
Plotting Loss values
###Code
df_loss = pd.read_csv('losses.csv')
plt.plot(df_loss['D_Loss'],label='PatchGAN Loss')
plt.plot(df_loss['G_Loss'],label='Generator Loss')
plt.xlabel('No of Batch Iterations')
plt.ylabel('Loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Training with a Faster R-CNN backbone for disease detection in chest X-rays. Imports of the required libraries and modules
###Code
import sys
sys.path.append('D:/GitHub/Mariuki/DiseaseDetector/Detector de Padecimientos Rayos-X Torax - Codigo')
import os
os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE"
import pathlib
import albumentations as A
import numpy as np
from torch.utils.data import DataLoader
from datasets import ObjectDetectionDataSet
from transformations import ComposeDouble, Clip, AlbumentationWrapper, FunctionWrapperDouble, normalize_01
from utils import get_filenames_of_path, collate_double, read_json, log_mapping_neptune,log_model_neptune, log_packages_neptune
from neptunecontrib.api import log_table
import importlib_metadata
from pytorch_lightning.loggers.neptune import NeptuneLogger
from pytorch_lightning import Trainer
from pytorch_lightning import seed_everything
from pytorch_lightning.callbacks import ModelCheckpoint,LearningRateMonitor,EarlyStopping
from faster_RCNN import FasterRCNN_lightning
from faster_RCNN import get_fasterRCNN_mobilenet, get_fasterRCNN_resnet, get_fasterRCNN_mobilenet, get_fasterRCNN_shufflenet_v2, get_fasterRCNN_efficientnet
# Hyperparameters
params = {'OWNER': 'rubsini', # Username on Neptune.ai
'SAVE_DIR': "../Experiments/", # Directory to save checkpoints during training
'PROJECT': 'DiseasesDetection', # Name of the project created on Neptune.ai
'EXPERIMENT': 'chests', # Name of the experiment within the project
'LOG_MODEL': False, # Whether the model will be uploaded to Neptune after training
'GPU': 1, # Enable or disable to choose training on GPU or CPU
'BATCH_SIZE': 8, # Batch size
'LR': 0.001, # Learning rate
'PRECISION': 16, # Computation precision
'CLASSES': 8, # Number of classes (including background)
'SEED': 42, # Random seed
'MAXEPOCHS': 500, # Maximum number of epochs
"PATIENCE": 50, # Number of epochs without improvement before stopping training
'BACKBONE': 'shufflenet_v2_x0_5', # Architecture to use as the Faster R-CNN backbone
'FPN': False, # Enable or disable the use of FPN
'ANCHOR_SIZE': ((32, 64, 128, 256, 512),), # Anchor box sizes
'ASPECT_RATIOS': ((0.5, 1.0, 2.0),), # Anchor box aspect ratios
'MIN_SIZE': 1024, # Minimum image size
'MAX_SIZE': 1024, # Maximum image size
'IMG_MEAN': [0.485, 0.456, 0.406], # ImageNet means (where the models were pretrained)
'IMG_STD': [0.229, 0.224, 0.225], # ImageNet standard deviations (where the models were pretrained)
'IOU_THRESHOLD': 0.5 # Intersection-over-Union threshold to evaluate predictions during training
}
###Output
_____no_output_____
###Markdown
Configuration and data loading
###Code
# Personal user API key obtained from Neptune.ai
api_key = os.getenv("NEPTUNE")
# The key can be copied and pasted in directly, or configured as an environment variable
# Create and get the directory for saving checkpoints
save_dir = os.getcwd() if not params["SAVE_DIR"] else params["SAVE_DIR"]
# Directory containing the images and labels for training
root = pathlib.Path('../data/ChestXRay8')
# Load the images and labels
inputs = get_filenames_of_path(root / 'ChestBBImages')
targets = get_filenames_of_path(root / 'ChestBBLabels')
# Sort inputs and targets
inputs.sort()
targets.sort()
# Map the labels to integer values
mapping = read_json(pathlib.Path('LabelsMappping.json'))
mapping
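# Hedged aside (illustrative only): an inverse mapping is handy later for turning the
# model's integer class predictions back into readable disease labels.
inverse_mapping = {v: k for k, v in mapping.items()}
print(inverse_mapping)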
###Output
_____no_output_____
###Markdown
Transformations, dataset and dataloader creation
###Code
# Initial training transformations (formatting, normalization to mean 0 and std 1)
# Augmented with flips and rescaling
transforms_training = ComposeDouble([
Clip(),
# AlbumentationWrapper(albumentation=A.HorizontalFlip(p=0.5)),
# AlbumentationWrapper(albumentation=A.RandomScale(p=0.5, scale_limit=0.5)),
# AlbuWrapper(albu=A.VerticalFlip(p=0.5)),
# FunctionWrapperDouble(np.moveaxis, source=-1, destination=0),
FunctionWrapperDouble(normalize_01)#,
# RescaleWithBB([256],'bilinear')
])
# Validation transformations (formatting, normalization to mean 0 and std 1)
transforms_validation = ComposeDouble([
Clip(),
# FunctionWrapperDouble(np.moveaxis, source=-1, destination=0),
FunctionWrapperDouble(normalize_01),
])
# Test-data transformations (formatting, normalization to mean 0 and std 1)
transforms_test = ComposeDouble([
Clip(),
# FunctionWrapperDouble(np.moveaxis, source=-1, destination=0),
FunctionWrapperDouble(normalize_01),
])
seed_everything(params['SEED']) # Random seed
###Output
Global seed set to 42
###Markdown
Splitting the dataset into subsets: (training, validation, and test)
###Code
# Stratified partition: the same proportion of instances per label in each subset
StratifiedPartition = read_json(pathlib.Path('DatasetSplits/ChestXRay8/split1.json'))
inputs_train = [pathlib.Path('C:/Users/mario/Desktop/ChestXRay8/256/ChestBBImages/' + i[:-4] + '.png') for i in list(StratifiedPartition['Train'].keys())]
targets_train = [pathlib.Path('C:/Users/mario/Desktop/ChestXRay8/256/ChestBBLabels/' + i[:-4] + '.json') for i in list(StratifiedPartition['Train'].keys())]
inputs_valid = [pathlib.Path('C:/Users/mario/Desktop/ChestXRay8/256/ChestBBImages/' + i[:-4] + '.png') for i in list(StratifiedPartition['Val'].keys())]
targets_valid = [pathlib.Path('data/ChestXRay8/256/ChestBBLabels/' + i[:-4] + '.json') for i in list(StratifiedPartition['Val'].keys())]
inputs_test = [pathlib.Path('C:/Users/mario/Desktop/ChestXRay8/256/ChestBBImages/' + i[:-4] + '.png') for i in list(StratifiedPartition['Test'].keys())]
targets_test = [pathlib.Path('C:/Users/mario/Desktop/ChestXRay8/256/ChestBBLabels/' + i[:-4] + '.json') for i in list(StratifiedPartition['Test'].keys())]
lt = len(inputs_train)+len(inputs_valid)+len(inputs_test)
ltr,ptr,lvd,pvd,lts,pts = len(inputs_train), 100*len(inputs_train)/lt, len(inputs_valid), 100*len(inputs_valid)/lt, len(inputs_test), 100*len(inputs_test)/lt
print('Total data: {}\nTraining data: {} ({:.2f}%)\nValidation data: {} ({:.2f}%)\nTest data: {} ({:.2f}%)'.format(lt,ltr,ptr,lvd,pvd,lts,pts))
# Create the training dataset
dataset_train = ObjectDetectionDataSet(inputs=inputs_train,
targets=targets_train,
transform=transforms_training,
add_dim = 3,
use_cache=True,
convert_to_format=None,
mapping=mapping,
tgt_int64=True)
# Create the validation dataset
dataset_valid = ObjectDetectionDataSet(inputs=inputs_valid,
targets=targets_valid,
transform=transforms_validation,
add_dim = 3,
use_cache=True,
convert_to_format=None,
mapping=mapping,
tgt_int64=True)
# Create the test dataset
dataset_test = ObjectDetectionDataSet(inputs=inputs_test,
targets=targets_test,
transform=transforms_test,
add_dim = 3,
use_cache=True,
convert_to_format=None,
mapping=mapping,
tgt_int64=True)
# Create the training dataloader
dataloader_train = DataLoader(dataset=dataset_train,
batch_size=params['BATCH_SIZE'],
shuffle=True,
num_workers=6,
collate_fn=collate_double)
# Crear cargador de datos de validacion
dataloader_valid = DataLoader(dataset=dataset_valid,
batch_size=params['BATCH_SIZE'],
shuffle=False,
num_workers=6,
collate_fn=collate_double)
# Crear cargador de datos de prueba
dataloader_test = DataLoader(dataset=dataset_test,
batch_size=params['BATCH_SIZE'],
shuffle=False,
num_workers=6,
collate_fn=collate_double)
###Output
_____no_output_____
###Markdown
Preparación de entorno para correr modelo
###Code
#Cargador a Neptune
neptune_logger = NeptuneLogger(
api_key=api_key,
project_name=f'{params["OWNER"]}/{params["PROJECT"]}',
experiment_name=params['EXPERIMENT'],
params=params
)
assert neptune_logger.name # Se obtiene una solicitud http para verificar la existencia del proyecto en neptune
# Inicializar el modelo
model = get_fasterRCNN_shufflenet_v2(num_classes=params['CLASSES'], ## get_fasterRCNN_resnet, get_fasterRCNN_mobilenet, get_fasterRCNN_shufflenet_v2, get_fasterRCNN_efficientnet
backbone_name= params['BACKBONE'],
anchor_size=params['ANCHOR_SIZE'],
aspect_ratios=params['ASPECT_RATIOS'],
fpn=params['FPN'],
min_size=params['MIN_SIZE'],
max_size=params['MAX_SIZE'])
# Inicializador de Pytorch Lightning
task = FasterRCNN_lightning(model=model, lr=params['LR'], iou_threshold=params['IOU_THRESHOLD'])
# Monitoreos
checkpoint_callback = ModelCheckpoint(monitor='Validation_mAP', mode='max')
learningrate_callback = LearningRateMonitor(logging_interval='step', log_momentum=False)
early_stopping_callback = EarlyStopping(monitor='Validation_mAP', patience=50, mode='max')
# Inicializador del entrenamiento
trainer = Trainer(gpus=params["GPU"],
precision=params['PRECISION'], # Al probar con 16, enable_pl_optimizer=False
callbacks=[checkpoint_callback, learningrate_callback, early_stopping_callback],
default_root_dir=save_dir, # Directorio para guardar los checkpoints
logger=neptune_logger,
log_every_n_steps=1,
num_sanity_val_steps=0,
benchmark = True#,
#accumulate_grad_batches=4#, # Tambien se puede diccionario para modificar el numero de accumulated batches en cada epoca {indexEpoch:Num.Acc.Batches}
# enable_pl_optimizer=False, # Se descomenta cuando se usa precisión de 16
)
###Output
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
###Markdown
Ejecutar entrenamiento
###Code
# Comenzar el entrenamiento-validación
trainer.max_epochs = params['MAXEPOCHS']
trainer.fit(task,
train_dataloader=dataloader_train,
val_dataloaders=dataloader_valid)
###Output
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
-------------------------------------
0 | model | FasterRCNN | 40.4 M
-------------------------------------
40.4 M Trainable params
0 Non-trainable params
40.4 M Total params
161.426 Total estimated model params size (MB)
###Markdown
Prueba post-entrenamiento y Carga de datos a Neptune.ai
###Code
## Obtener el mejor modelo y usarlo para predecir la información de prueba,
# basado en el conjunto de datos de validación y conforme a la metrica usada (mAP from pascal VOC)
# Realizar evaluación con el cubconjunto de prueba
trainer.test(ckpt_path="best", test_dataloaders=dataloader_test)
# Cargar los paquetes utilizados a neptune
log_packages_neptune(neptune_logger)
# Cargar el mapeo de clases con valores enteros a neptune
log_mapping_neptune(mapping, neptune_logger)
# Cargar el modelo a neptune
if params['LOG_MODEL']:
checkpoint_path = pathlib.Path(checkpoint_callback.best_model_path)
log_model_neptune(checkpoint_path=checkpoint_path,
save_directory=pathlib.Path.home(),
name='best_model.pt',
neptune_logger=neptune_logger)
# Parar el cargador
neptune_logger.experiment.stop()
print("Finished")
###Output
_____no_output_____ |
Wine Qual KNN ACM.ipynb | ###Markdown
**1)EDA** A.Understanding the data
###Code
df.shape
df.head()
df.isnull().sum()
df.dtypes
###Output
_____no_output_____
###Markdown
**B.Visualizing**
###Code
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
#Finding correlation
#Issue 1: Plot a correlation heatmap (including all columns)
print("UNIVARIATE ANALYSIS-HISTOGRAMS:")
for i in [0,1,2,9,10]:
plt.subplots(figsize=(10,5))
sns.distplot(df.iloc[:,i],color='purple',bins=15)
plt.show()
#Issue 2: A boxplot has been given by the developer only for citric acid and density. Plot similar boxplots for each of the columns against quality (using the feature 'quality' as one of the axes)
#Mention if any linear trends are clearly noticeable
sns.boxplot(x=df.iloc[:,11],y = df.iloc[:,2],palette="cool")
plt.show()
print('Trend: Median value of Citric acid distribution increases as Wine quality increases.')
sns.boxplot(x=df.iloc[:,11],y = df.iloc[:,7],palette="cool")
plt.show()
print('Trend: No linear trend')
#Issue 3: Plot scatter plots amongst the feature columns (considering all possible combinations) with the hue as "quality" and mention trends/patterns if any
#refer to the below plot for an example:
sns.scatterplot(x=df.iloc[:,9],y=df.iloc[:,1],hue=df["quality"])
plt.show()
print("Pattern: Yes, Wine's with higher sulphates content(0.75-1.00) and lower volatile acidity(0.2-0.4) tend to have a higher quality")
###Output
_____no_output_____
###Markdown
**C.Feature selection and data scaling**
###Code
from sklearn.neighbors import KNeighborsClassifier
y=df.iloc[:,11]
X=df.iloc[:,[1,2,9,10]] #Using only top 4 columns with highest correlation to quality
#Scaling the data
#Issue number 4: Scale the data (use variable name scaler1 to define the scaler)
X=scaler1.fit_transform(X)
###Output
_____no_output_____
###Markdown
**2.Model creation**
###Code
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2,random_state=41,stratify=y)
#Issue 5: Create a KNeighbours Classifier Model with default prameters an print the accuracy on the test data
#use variable 'model1' to instantiate your model
facc=model1.score(X_test,y_test)
print("\n\nAccuracy of final model is: ", facc*100, "%\n\n")
#Issue 6: In a new cell below, improve the KNN Classifier model by tuning the parameters of the KNeighboursClassifier. Do not change any of the code above. Only a model with accuracy above 74% will be accepted.
###Output
_____no_output_____
###Markdown
**1)EDA** A.Understanding the data
###Code
df.head()
df.isnull().sum()
print("hi T")
df.dtypes
###Output
_____no_output_____
###Markdown
**B.Visualizing**
###Code
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
#Finding correlation
#Issue 1: Plot a correlation heatmap (including all columns)
print("UNIVARIATE ANALYSIS-HISTOGRAMS:")
for i in [0,1,2,9,10]:
plt.subplots(figsize=(10,5))
sns.distplot(df.iloc[:,i],color='purple',bins=15)
plt.show()
#Issue 2: A boxplot has been given by the developer only for citric acid and density. Plot similar boxplots for each of the columns against quality (using the feature 'quality' as one of the axes)
#Mention if any linear trends are clearly noticeable
sns.boxplot(x=df.iloc[:,11],y = df.iloc[:,2],palette="cool")
plt.show()
print('Trend: Median value of Citric acid distribution increases as Wine quality increases.')
sns.boxplot(x=df.iloc[:,11],y = df.iloc[:,7],palette="cool")
plt.show()
print('Trend: No linear trend')
#Issue 3: Plot scatter plots amongst the feature columns (considering all possible combinations) with the hue as "quality" and mention trends/patterns if any
#refer to the below plot for an example:
sns.scatterplot(x=df.iloc[:,9],y=df.iloc[:,1],hue=df["quality"])
plt.show()
print("Pattern: Yes, Wine's with higher sulphates content(0.75-1.00) and lower volatile acidity(0.2-0.4) tend to have a higher quality")
###Output
_____no_output_____
###Markdown
**C.Feature selection and data scaling**
###Code
from sklearn.neighbors import KNeighborsClassifier
y=df.iloc[:,11]
X=df.iloc[:,[1,2,9,10]] #Using only top 4 columns with highest correlation to quality
#Scaling the data
#Issue number 4: Scale the data (use variable name scaler1 to define the scaler)
X=scaler1.fit_transform(X)
###Output
_____no_output_____
###Markdown
**2.Model creation**
###Code
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2,random_state=41,stratify=y)
#Issue 5: Create a KNeighbours Classifier Model with default prameters an print the accuracy on the test data
#use variable 'model1' to instantiate your model
facc=model1.score(X_test,y_test)
print("\n\nAccuracy of final model is: ", facc*100, "%\n\n")
#Issue 6: In a new cell below, improve the KNN Classifier model by tuning the parameters of the KNeighboursClassifier. Do not change any of the code above. Only a model with accuracy above 74% will be accepted.
print("end")
###Output
end
|
discrete_signals/operations.ipynb | ###Markdown
Discrete Signals*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelors module Signals and Systems, Comunications Engineering, Universität Rostock. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).* Elementary OperationsOperations like superposition, shifting and flipping can be used to construct signals with a more complex structure than by the [standard signals](standard_signals.ipynb) alone. In the following, a set of elementary operations is introduced that are frequently used in discrete signal processing for this purpose. Note that the equivalent operation to the [temporal scaling of a continuous signal](../continuous_signals/operations.ipynbTemporal-Scaling) is not defined for a discrete signal. SuperpositionThe weighted superposition $x[k]$ of two signals $x_1[k]$ and $x_2[k]$ is given as\begin{equation}x[k] = A \cdot x_1[k] + B \cdot x_2[k]\end{equation}with the complex weights $A, B \in \mathbb{C}$. **Example**The following example illustrates the superposition of two harmonic signals $x(t) = A \cdot \cos[\Omega_1 k] + B \cdot \cos[\Omega_2 k]$ with weights $A$, $B$ and normalized frequencies $\Omega_1$ and $\Omega_2$.
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
k = np.arange(0, 51)
x = np.cos(2 * np.pi / 10 * k) + 2 * np.cos(2 * np.pi / 15 * k)
plt.figure(figsize=(6, 3))
plt.stem(k, x)
plt.xlabel('$k$')
plt.ylabel('$x[k]$')
plt.gca().margins(y=0.1)
###Output
_____no_output_____
###Markdown
ShiftThe shift of a signal $s[k]$ by the index $\kappa$ is defined as\begin{equation}x[k] = s[k-\kappa]\end{equation}with $\kappa \in \mathbb{Z}$. The signal $s[k]$ is* shifted to the right for $\kappa > 0$* shifted to the left for $\kappa < 0$The shift of a signal is a frequently applied operation in discrete signal processing. For instance for the description of systems by linear difference equations with constant coefficients. For a discrete signal which has been derived by [temporal sampling from a continuous signal](../sampling/ideal.ipynb), the shift can be interpreted as [temporal shift](../continuous_signals/operations.ipynbTemporal-Shift) by the time $\tau = \kappa \cdot T$ where $T$ denotes the sampling interval. **Example**In order to illustrate the shifting of signals, the construction of a [sawtooth signal](https://en.wikipedia.org/wiki/Sawtooth_wave) by a superposition of shifted ramp signals $k \cdot \text{rect}_N[k]$ is shown. The sawtooth signal is given as periodic continuation of the ramp signal\begin{equation}x[k] = \sum_{\nu = -\infty}^{\infty} (k - \nu \cdot N) \cdot \text{rect}_N[k - \nu \cdot N]\end{equation}The signal can be computed efficiently using the [modulo operation](https://en.wikipedia.org/wiki/Modulo_operation)\begin{equation}x[k] = k \bmod N\end{equation}which is illustrated in the following
###Code
def sawtooth(k, N):
return np.mod(k, N)
k = np.arange(-10, 40)
x = sawtooth(k, 10)
plt.figure(figsize=(6, 3))
plt.stem(k, x)
plt.xlabel('$k$')
plt.ylabel('$x[k]$')
plt.gca().margins(y=0.1)
###Output
_____no_output_____
###Markdown
FlippingThe flipping of a signal $s[k]$ is defined as\begin{equation}x[k] = s[\kappa - k]\end{equation}with $\kappa \in \mathbb{Z}$. The flipping operation can also be represented as a reversal of the index $k$ of the signal $s[k]$ followed by a shift of $\kappa$ of the reversed signal, as $s[\kappa - k] = s[- (k - \kappa)]$. The operation can interpreted geometrically as a mirroring of the signal $s[k]$ at the vertical axis $k = \frac{\kappa}{2}$.For $\kappa = 0$ this results in a reversal of the signal. The reversal can be interpreted as time-reversal for a discrete signal which has been derived by temporal sampling from a continuous signal. **Example**The following example illustrates the temporal flipping of the sawtooth signal $x[k]$ introduced above for $\kappa = 3$.
###Code
x = sawtooth(3 - k, 10)
plt.figure(figsize=(6, 3))
plt.stem(k, x)
plt.xlabel('$k$')
plt.ylabel('$x[k]$')
plt.gca().margins(y=0.1)
###Output
_____no_output_____
###Markdown
Discrete Signals*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelors module Signals and Systems, Comunications Engineering, Universität Rostock. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).* Elementary OperationsOperations like superposition, shifting and flipping can be used to construct signals with a more complex structure than by the [standard signals](standard_signals.ipynb) alone. In the following, a set of elementary operations is introduced that are frequently used in discrete signal processing for this purpose. Note that the equivalent operation to the [temporal scaling of a continuous signal](../continuous_signals/operations.ipynbTemporal-Scaling) is not defined for a discrete signal. SuperpositionThe weighted superposition $x[k]$ of two signals $x_1[k]$ and $x_2[k]$ is given as\begin{equation}x[k] = A \cdot x_1[k] + B \cdot x_2[k]\end{equation}with the complex weights $A, B \in \mathbb{C}$. **Example**The following example illustrates the superposition of two harmonic signals $x(t) = A \cdot \cos[\Omega_1 k] + B \cdot \cos[\Omega_2 k]$ with weights $A$, $B$ and normalized frequencies $\Omega_1$ and $\Omega_2$.
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
k = np.arange(0, 51)
x = np.cos(2 * np.pi / 10 * k) + 2 * np.cos(2 * np.pi / 15 * k)
plt.figure(figsize=(6, 3))
plt.stem(k, x)
plt.xlabel('$k$')
plt.ylabel('$x[k]$')
plt.gca().margins(y=0.1)
###Output
_____no_output_____
###Markdown
ShiftThe shift of a signal $s[k]$ by the index $\kappa$ is defined as\begin{equation}x[k] = s[k-\kappa]\end{equation}with $\kappa \in \mathbb{Z}$. The signal $s[k]$ is* shifted to the right for $\kappa > 0$* shifted to the left for $\kappa < 0$The shift of a signal is a frequently applied operation in discrete signal processing. For instance for the description of systems by linear difference equations with constant coefficients. For a discrete signal which has been derived by [temporal sampling from a continuous signal](../sampling/ideal.ipynb), the shift can be interpreted as [temporal shift](../continuous_signals/operations.ipynbTemporal-Shift) by the time $\tau = \kappa \cdot T$ where $T$ denotes the sampling interval. **Example**In order to illustrate the shifting of signals, the construction of a [sawtooth signal](https://en.wikipedia.org/wiki/Sawtooth_wave) by a superposition of shifted ramp signals $k \cdot \text{rect}_N[k]$ is shown. The sawtooth signal is given as periodic continuation of the ramp signal\begin{equation}x[k] = \sum_{\nu = -\infty}^{\infty} (k - \nu \cdot N) \cdot \text{rect}_N[k - \nu \cdot N]\end{equation}The signal can be computed efficiently using the [modulo operation](https://en.wikipedia.org/wiki/Modulo_operation)\begin{equation}x[k] = k \bmod N\end{equation}which is illustrated in the following
###Code
def sawtooth(k, N):
return np.mod(k, N)
k = np.arange(-10, 40)
x = sawtooth(k, 10)
plt.figure(figsize=(6, 3))
plt.stem(k, x)
plt.xlabel('$k$')
plt.ylabel('$x[k]$')
plt.gca().margins(y=0.1)
###Output
_____no_output_____
###Markdown
FlippingThe flipping of a signal $s[k]$ is defined as\begin{equation}x[k] = s[\kappa - k]\end{equation}with $\kappa \in \mathbb{Z}$. The flipping operation can also be represented as a reversal of the index $k$ of the signal $s[k]$ followed by a shift of $\kappa$ of the reversed signal, as $s[\kappa - k] = s[- (k - \kappa)]$. The operation can interpreted geometrically as a mirroring of the signal $s[k]$ at the vertical axis $k = \frac{\kappa}{2}$.For $\kappa = 0$ this results in a reversal of the signal. The reversal can be interpreted as time-reversal for a discrete signal which has been derived by temporal sampling from a continuous signal. **Example**The following example illustrates the temporal flipping of the sawtooth signal $x[k]$ introduced above for $\kappa = 3$.
###Code
x = sawtooth(3 - k, 10)
plt.figure(figsize=(6, 3))
plt.stem(k, x)
plt.xlabel('$k$')
plt.ylabel('$x[k]$')
plt.gca().margins(y=0.1)
###Output
_____no_output_____ |
notebooks/tokenize_and_create_TFRecord.ipynb | ###Markdown
Load dataset and tokenize data
###Code
# Load wine dataset
wines_path = "C:/Users/david/Documents/github/this-wine-does-not-exist/data/scraped/name_desc_nlp_ready.txt"
with open(wines_path, 'r', encoding='utf8') as f:
wines_raw = f.read().splitlines()
print(f"Loaded wine dataset of length: {len(wines_raw):,}")
# Remove wines with too short descriptions
wines_clean = []
for i in wines_raw:
try:
desc = i.split("[description]")[1]
if len(desc) > 150:
wines_clean.append(i)
except:
pass
print(f"Cleaned dataset has {len(wines_clean):,} samples")
tokenizer = transformers.GPT2TokenizerFast.from_pretrained('gpt2')
print("Loaded tokenizer")
tokenizer.add_special_tokens(
{'eos_token':'<|startoftext|>',
'bos_token':'<|startoftext|>'
}
)
tokenizer.add_tokens(['[prompt]','[response]','[category_1]',
'[category_2]','[origin]','[description]',
'<|endoftext|>'])
tokenizer.pad_token = tokenizer.eos_token
print("Modified tokenizer tokens")
#tokenizer_path = f'./tokenizer_gpt2'
#tokenizer.save_pretrained(tokenizer_path)
#print(f"Saved tokenizer to {tokenizer_path}")
wine_encodings = tokenizer(wines_clean, max_length=250, padding=True, truncation=True)
print(f"Encoded dataset with attributes: {wine_encodings.keys()}")
print(f"Total encoded samples: {len(wine_encodings['input_ids']):,}")
tokenizer.vocab_size
###Output
_____no_output_____
###Markdown
Serialize to TFRecord
###Code
tfrecord_file_name = "scraped_wines_tfr"
with tf.compat.v1.python_io.TFRecordWriter(tfrecord_file_name) as writer:
for ix, wine_desc in enumerate(wines_clean):
features = tf.train.Features(
feature = {
'text': tf.train.Feature(
bytes_list = tf.train.BytesList(value = [bytes(wine_desc, 'utf-8')])),
'input_ids': tf.train.Feature(
int64_list = tf.train.Int64List(value = wine_encodings['input_ids'][ix])),
'attention_mask': tf.train.Feature(
int64_list = tf.train.Int64List(value = wine_encodings['attention_mask'][ix]))
}
)
example = tf.train.Example(features=features)
writer.write(example.SerializeToString())
###Output
_____no_output_____ |
notebooks/8.5-introduction-to-gans.ipynb | ###Markdown
---title: "Introduction to generative adversarial networks"output: html_notebook: theme: cerulean highlight: textmate---
###Code
knitr::opts_chunk$set(warning = FALSE, message = FALSE)
###Output
_____no_output_____
###Markdown
***This notebook contains the code samples found in Chapter 8, Section 5 of [Deep Learning with R](https://www.manning.com/books/deep-learning-with-r). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.*** A schematic GAN implementation In this section, we'll explain how to implement a GAN in Keras, in its barest form -- because GANs are advanced, diving deeply into the technical details would be out of scope for this book. The specific implementation is a _deep convolutional GAN_ (DCGAN): a GAN where the generator and discriminator are deep convnets. In particular, it uses a `layer_conv_2d_transpose()` for image upsampling in the generator.We will train our GAN on images from CIFAR10, a dataset of 50,000 32x32 RGB images belong to 10 classes (5,000 images per class). To make things even easier, we will only use images belonging to the class "frog".Schematically, our GAN looks like this:* A `generator` network maps vectors of shape `(latent_dim)` to images of shape `(32, 32, 3)`.* A `discriminator` network maps images of shape (32, 32, 3) to a binary score estimating the probability that the image is real.* A `gan` network chains the generator and the discriminator together: `gan(x) <- discriminator(generator(x))`. Thus this `gan` network maps latent space vectors to the discriminator's assessment of the realism of these latent vectors as decoded by the generator.* We train the discriminator using examples of real and fake images along with "real"/"fake" labels, as we would train any regular image classification model.* To train the generator, we use the gradients of the generator's weights with regard to the loss of the `gan` model. This means that, at every step, we move the weights of the generator in a direction that will make the discriminator more likely to classify as "real" the images decoded by the generator. I.e. we train the generator to fool the discriminator. A bag of tricksTraining GANs and tuning GAN implementations is notoriously difficult. There are a number of known "tricks" that one should keep in mind. Like most things in deep learning, it is more alchemy than science: these tricks are really just heuristics, not theory-backed guidelines. They are backed by some level of intuitive understanding of the phenomenon at hand, and they are known to work well empirically, albeit not necessarily in every context.Here are a few of the tricks that we leverage in our own implementation of a GAN generator and discriminator below. It is not an exhaustive list of GAN-related tricks; you will find many more across the GAN literature.* We use `tanh` as the last activation in the generator, instead of `sigmoid`, which is more commonly found in other types of models.* We sample points from the latent space using a _normal distribution_ (Gaussian distribution), not a uniform distribution.* Stochasticity is good to induce robustness. Because GAN training results in a dynamic equilibrium, GANs are likely to get stuck in all sorts of ways. Introducing randomness during training helps prevent this. We introduce randomness in two ways: by using dropout in the discriminator and by adding random noise to the labels for the discriminator.* Sparse gradients can hinder GAN training. In deep learning, sparsity is often a desirable property, but not in GANs. Two things can induce gradient sparsity: max pooling operations and ReLU activations. 
Instead of max pooling, we recommend using strided convolutions for downsampling, and we recommend using a `layer_activation_leaky_relu()` instead of a ReLU activation. It's similar to ReLU, but it relaxes sparsity constraints by allowing small negative activation values.* In generated images, it's common to see checkerboard artifacts caused by unequal coverage of the pixel space in the generator (see figure 8.17). To fix this, we use a kernel size that is divisible by the stride size whenever we use a strided `layer_conv_2d_transpose()` or `layer_conv_2d()` in both the generator and the discriminator. The generatorFirst, we develop a `generator` model, which turns a vector (from the latent space -- during training it will sampled at random) into a candidate image. One of the many issues that commonly arise with GANs is that the generator gets stuck with generated images that look like noise. A possible solution is to use dropout on both the discriminator and generator.
###Code
library(keras)
latent_dim <- 32
height <- 32
width <- 32
channels <- 3
generator_input <- layer_input(shape = c(latent_dim))
generator_output <- generator_input %>%
# First, transform the input into a 16x16 128-channels feature map
layer_dense(units = 128 * 16 * 16) %>%
layer_activation_leaky_relu() %>%
layer_reshape(target_shape = c(16, 16, 128)) %>%
# Then, add a convolution layer
layer_conv_2d(filters = 256, kernel_size = 5,
padding = "same") %>%
layer_activation_leaky_relu() %>%
# Upsample to 32x32
layer_conv_2d_transpose(filters = 256, kernel_size = 4,
strides = 2, padding = "same") %>%
layer_activation_leaky_relu() %>%
# Few more conv layers
layer_conv_2d(filters = 256, kernel_size = 5,
padding = "same") %>%
layer_activation_leaky_relu() %>%
layer_conv_2d(filters = 256, kernel_size = 5,
padding = "same") %>%
layer_activation_leaky_relu() %>%
# Produce a 32x32 1-channel feature map
layer_conv_2d(filters = channels, kernel_size = 7,
activation = "tanh", padding = "same")
generator <- keras_model(generator_input, generator_output)
summary(generator)
###Output
_____no_output_____
###Markdown
The discriminator Then, we develop a `discriminator` model, that takes as input a candidate image (real or synthetic) and classifies it into one of two classes, either "generated image" or "real image that comes from the training set".
###Code
discriminator_input <- layer_input(shape = c(height, width, channels))
discriminator_output <- discriminator_input %>%
layer_conv_2d(filters = 128, kernel_size = 3) %>%
layer_activation_leaky_relu() %>%
layer_conv_2d(filters = 128, kernel_size = 4, strides = 2) %>%
layer_activation_leaky_relu() %>%
layer_conv_2d(filters = 128, kernel_size = 4, strides = 2) %>%
layer_activation_leaky_relu() %>%
layer_conv_2d(filters = 128, kernel_size = 4, strides = 2) %>%
layer_activation_leaky_relu() %>%
layer_flatten() %>%
# One dropout layer - important trick!
layer_dropout(rate = 0.4) %>%
# Classification layer
layer_dense(units = 1, activation = "sigmoid")
discriminator <- keras_model(discriminator_input, discriminator_output)
summary(discriminator)
# To stabilize training, we use learning rate decay
# and gradient clipping (by value) in the optimizer.
discriminator_optimizer <- optimizer_rmsprop(
lr = 0.0008,
clipvalue = 1.0,
decay = 1e-8
)
discriminator %>% compile(
optimizer = discriminator_optimizer,
loss = "binary_crossentropy"
)
###Output
_____no_output_____
###Markdown
The adversarial networkFinally, we setup the GAN, which chains the generator and the discriminator. This is the model that, when trained, will move the generator in a direction that improves its ability to fool the discriminator. This model turns latent space points into a classification decision, "fake" or "real", and it is meant to be trained with labels that are always "these are real images". So training `gan` will updates the weights of `generator` in a way that makes `discriminator` more likely to predict "real" when looking at fake images. Very importantly, we set the discriminator to be frozen during training (non-trainable): its weights will not be updated when training `gan`. If the discriminator weights could be updated during this process, then we would be training the discriminator to always predict "real", which is not what we want!
###Code
# Set discriminator weights to non-trainable
# (will only apply to the `gan` model)
freeze_weights(discriminator)
gan_input <- layer_input(shape = c(latent_dim))
gan_output <- discriminator(generator(gan_input))
gan <- keras_model(gan_input, gan_output)
gan_optimizer <- optimizer_rmsprop(
lr = 0.0004,
clipvalue = 1.0,
decay = 1e-8
)
gan %>% compile(
optimizer = gan_optimizer,
loss = "binary_crossentropy"
)
###Output
_____no_output_____
###Markdown
How to train your DCGANNow we can begin training. To recapitulate, this is what the training loop looks like schematically. For each epoch, we do the following:* Draw random points in the latent space (random noise).* Generate images with `generator` using this random noise.* Mix the generated images with real ones.* Train `discriminator` using these mixed images, with corresponding targets: either "real" (for the real images) or "fake" (for the generated images).* Draw new random points in the latent space.* Train `gan` using these random vectors, with targets that all say "these are real images." This updates the weights of the generator (only, because the discriminator is frozen inside `gan`) to move them toward getting the discriminator to predict "these are real images" for generated images: that is, this trains the generator to fool the discriminator.Let's implement it.
###Code
# Loads CIFAR10 data
cifar10 <- dataset_cifar10()
c(c(x_train, y_train), c(x_test, y_test)) %<-% cifar10
# Selects frog images (class 6)
x_train <- x_train[as.integer(y_train) == 6,,,]
# Normalizes data
x_train <- x_train / 255
iterations <- 10000
batch_size <- 20
save_dir <- "gan_images"
dir.create(save_dir)
# Start the training loop
start <- 1
for (step in 1:iterations) {
# Samples random points in the latent space
random_latent_vectors <- matrix(rnorm(batch_size * latent_dim),
nrow = batch_size, ncol = latent_dim)
# Decodes them to fake images
generated_images <- generator %>% predict(random_latent_vectors)
# Combines them with real images
stop <- start + batch_size - 1
real_images <- x_train[start:stop,,,]
rows <- nrow(real_images)
combined_images <- array(0, dim = c(rows * 2, dim(real_images)[-1]))
combined_images[1:rows,,,] <- generated_images
combined_images[(rows+1):(rows*2),,,] <- real_images
# Assembles labels discriminating real from fake images
labels <- rbind(matrix(1, nrow = batch_size, ncol = 1),
matrix(0, nrow = batch_size, ncol = 1))
# Adds random noise to the labels -- an important trick!
labels <- labels + (0.5 * array(runif(prod(dim(labels))),
dim = dim(labels)))
# Trains the discriminator
d_loss <- discriminator %>% train_on_batch(combined_images, labels)
# Samples random points in the latent space
random_latent_vectors <- matrix(rnorm(batch_size * latent_dim),
nrow = batch_size, ncol = latent_dim)
# Assembles labels that say "all real images"
misleading_targets <- array(0, dim = c(batch_size, 1))
# Trains the generator (via the gan model, where the
# discriminator weights are frozen)
a_loss <- gan %>% train_on_batch(
random_latent_vectors,
misleading_targets
)
start <- start + batch_size
if (start > (nrow(x_train) - batch_size))
start <- 1
# Occasionally saves images
if (step %% 100 == 0) {
# Saves model weights
save_model_weights_hdf5(gan, "gan.h5")
# Prints metrics
cat("discriminator loss:", d_loss, "\n")
cat("adversarial loss:", a_loss, "\n")
# Saves one generated image
image_array_save(
generated_images[1,,,] * 255,
path = file.path(save_dir, paste0("generated_frog", step, ".png"))
)
# Saves one real image for comparison
image_array_save(
real_images[1,,,] * 255,
path = file.path(save_dir, paste0("real_frog", step, ".png"))
)
}
}
###Output
_____no_output_____ |
student-notebooks/04.02-Low-Res-Scoring-and-Fragments.ipynb | ###Markdown
Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All).Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below:
###Code
NAME = ""
COLLABORATORS = ""
###Output
_____no_output_____
###Markdown
--- *This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks);content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).* Low-Res Scoring and FragmentsKeywords: centroid, SwitchResidueTypeSetMover(), create_score_function(), score3, fa_standard, ScoreFunction(), set_weight(), read_fragment_file(), ClassicFragmentMover()
###Code
# Notebook setup
import sys
if 'google.colab' in sys.modules:
!pip install pyrosettacolabsetup
import pyrosettacolabsetup
pyrosettacolabsetup.mount_pyrosetta_install()
print ("Notebook is set for PyRosetta use in Colab. Have fun!")
from pyrosetta import *
from pyrosetta.teaching import *
init()
###Output
_____no_output_____
###Markdown
**Make sure you are in the directory with the pdb files:**`cd google_drive/My\ Drive/student-notebooks/` Low-Resolution (Centroid) ScoringFollowing the treatment of Simons *et al.* (1999), Rosetta can score a protein conformation using a low-resolution representation. This will make the energy calculation faster.Load chain A of Ras, a protein from a the previous workshop 3. Also calculate the full-atom energy of the pose.```pose = pyrosetta.pose_from_pdb("6Q21_A.pdb")sfxn = pyrosetta.get_score_function()sfxn(pose)```
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
**Question:** Print residue 5. Note the number of atoms and coordinates of residue 5.```print(pose.residue(5))```
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
SwitchResidueTypeSetMover Now, convert the `pose` to the centroid form by using a `SwitchResidueTypeSetMover` object and the apply method:```switch = SwitchResidueTypeSetMover("centroid")switch.apply(pose)print(pose.residue(5))```**Question:** How many atoms are now in residue 5? How is this different than before switching it into centroid mode?
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Score the new, centroid-based pose by creating and using the standard centroid score function "score3".```cen_sfxn = pyrosetta.create_score_function("score3")cen_sfxn(pose)```**Question:** What is the new total score? What scoring terms are included in "score3" (`print` the `cen_sfxn`)? Do these match Simons?
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Convert the `pose` back to all-atom form by using another switch object, `SwitchResidueTypeSetMover("fa_standard")`.```fa_switch = SwitchResidueTypeSetMover("fa_standard")fa_switch.apply(pose)print(pose.residue(5))```**Question:** Confirm that you have all the atoms back. Are the atoms in the same coordinate position as before?
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Exercise 1: Centroid Folding AlgorithmGo back and adjust your folding algorithm to use centroid mode. Create a `ScoreFunction` that uses only van der Waals (`fa_atr` and `fa_rep`) and `hbond_sr_bb` energy score terms. **Question:** How much faster does your program run?
###Code
polyA = pyrosetta.pose_from_sequence('A' * 10)
polyA.pdb_info().name("polyA")
# Apply the SwitchResidueTypeSetMover to the pose polyA
# YOUR CODE HERE
raise NotImplementedError()
# Create new score function with only VDW and hbond_sr_bb energy score terms.
# YOUR CODE HERE
raise NotImplementedError()
# Use the basic_folding function in the previous chapter,
# overwrite your scoring subroutine, and run the program.
###Output
_____no_output_____
###Markdown
Note about `Movers`Not counting the `PyMOLMover`, which is a special case, `SwitchResidueTypeSetMover` is the first example we have seen of a `Mover` class in PyRosetta. Every `Mover` object in PyRosetta has been designed to apply specific and complex changes (or “moves”) to a `pose`. Every `Mover` must be “constructed” and have any options set before being applied to a `pose` with the `apply()` method. `SwitchResidueTypeSetMover` has a relatively simple construction with only the single option `"centroid"`. (Some `Movers`, as we shall see, require no options and are programmed to operate with default values). Protein FragmentsLook at the provided `3mer.frags` fragments. These fragments are generated from the Robetta server (http://robetta.bakerlab.org/fragmentsubmit.jsp) for a given sequence. You should see sets of three-lines describing each fragment.**Questions:** For the first fragment, which PDB file does it come from? Is this fragment helical, sheet, in a loop, or a combination? What are the φ, ψ, and ω angles of the middle residue of the first fragment window? Create a new subroutine in your folding code for an alternate random move based upon a “fragment insertion”. A fragment insertion is the replacement of the torsion angles for a set of consecutive residues with new torsion angles pulled at random from a fragment library file. Prior to calling the subroutine, load the set of fragments from the fragment file:```from pyrosetta.rosetta.core.fragment import *fragset = ConstantLengthFragSet(3)fragset.read_fragment_file("3mer.frags")```
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Using FragmentMover and MoveMap Next, we will construct another `Mover` object — this time a `FragmentMover` — using the above fragment set and a `MoveMap` object as options. A `MoveMap` specifies which degrees of freedom are allowed to change in the `pose` when the `Mover` is applied (in this case, all backbone torsion angles):```from pyrosetta.rosetta.protocols.simple_moves import ClassicFragmentMovermovemap = MoveMap()movemap.set_bb(True)mover_3mer = ClassicFragmentMover(fragset, movemap)```
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Note that when a MoveMap is constructed, all degrees of freedom are set to False initially. If you still have a *PyMOL_Mover* instantiated, you can quickly visualize which degrees of freedom will be allowed by sending your move map to PyMOL with ```test_pose = pyrosetta.pose_from_sequence("RFPMMSTFKVLLCGAVLSRIDAG")pmm.apply(test_pose)pmm.send_movemap(test_pose, movemap)```
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Each time this mover is applied, it will select a random 3-mer window and insert only the backbone torsion angles from a random matching fragment in the fragment set. Here is an example using the above `test_pose`:```mover_3mer.apply(test_pose)pmm.apply(test_pose)```
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All).Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below:
###Code
NAME = ""
COLLABORATORS = ""
###Output
_____no_output_____
###Markdown
--- *This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks);content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).* Low-Res Scoring and FragmentsKeywords: centroid, SwitchResidueTypeSetMover(), create_score_function(), score3, fa_standard, ScoreFunction(), set_weight(), read_fragment_file(), ClassicFragmentMover()
###Code
# Notebook setup
import sys
if 'google.colab' in sys.modules:
!pip install pyrosettacolabsetup
import pyrosettacolabsetup
pyrosettacolabsetup.setup()
print ("Notebook is set for PyRosetta use in Colab. Have fun!")
from pyrosetta import *
from pyrosetta.teaching import *
init()
###Output
_____no_output_____
###Markdown
**Make sure you are in the directory with the pdb files:**`cd google_drive/My\ Drive/student-notebooks/` Low-Resolution (Centroid) ScoringFollowing the treatment of Simons *et al.* (1999), Rosetta can score a protein conformation using a low-resolution representation. This will make the energy calculation faster.Load chain A of Ras, a protein from a the previous workshop 3. Also calculate the full-atom energy of the pose.```pose = pyrosetta.pose_from_pdb("6Q21_A.pdb")sfxn = pyrosetta.get_score_function()sfxn(pose)```
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
**Question:** Print residue 5. Note the number of atoms and coordinates of residue 5.```print(pose.residue(5))```
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
SwitchResidueTypeSetMover Now, convert the `pose` to the centroid form by using a `SwitchResidueTypeSetMover` object and the apply method:```switch = SwitchResidueTypeSetMover("centroid")switch.apply(pose)print(pose.residue(5))```**Question:** How many atoms are now in residue 5? How is this different than before switching it into centroid mode?
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Score the new, centroid-based pose by creating and using the standard centroid score function "score3".```cen_sfxn = pyrosetta.create_score_function("score3")cen_sfxn(pose)```**Question:** What is the new total score? What scoring terms are included in "score3" (`print` the `cen_sfxn`)? Do these match Simons?
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Convert the `pose` back to all-atom form by using another switch object, `SwitchResidueTypeSetMover("fa_standard")`.```fa_switch = SwitchResidueTypeSetMover("fa_standard")fa_switch.apply(pose)print(pose.residue(5))```**Question:** Confirm that you have all the atoms back. Are the atoms in the same coordinate position as before?
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Exercise 1: Centroid Folding AlgorithmGo back and adjust your folding algorithm to use centroid mode. Create a `ScoreFunction` that uses only van der Waals (`fa_atr` and `fa_rep`) and `hbond_sr_bb` energy score terms. **Question:** How much faster does your program run?
###Code
polyA = pyrosetta.pose_from_sequence('A' * 10)
polyA.pdb_info().name("polyA")
# Apply the SwitchResidueTypeSetMover to the pose polyA
# YOUR CODE HERE
raise NotImplementedError()
# Create new score function with only VDW and hbond_sr_bb energy score terms.
# YOUR CODE HERE
raise NotImplementedError()
# Use the basic_folding function in the previous chapter,
# overwrite your scoring subroutine, and run the program.
###Output
_____no_output_____
###Markdown
Note about `Movers`Not counting the `PyMOLMover`, which is a special case, `SwitchResidueTypeSetMover` is the first example we have seen of a `Mover` class in PyRosetta. Every `Mover` object in PyRosetta has been designed to apply specific and complex changes (or “moves”) to a `pose`. Every `Mover` must be “constructed” and have any options set before being applied to a `pose` with the `apply()` method. `SwitchResidueTypeSetMover` has a relatively simple construction with only the single option `"centroid"`. (Some `Movers`, as we shall see, require no options and are programmed to operate with default values). Protein FragmentsLook at the provided `3mer.frags` fragments. These fragments are generated from the Robetta server (http://robetta.bakerlab.org/fragmentsubmit.jsp) for a given sequence. You should see sets of three-lines describing each fragment.**Questions:** For the first fragment, which PDB file does it come from? Is this fragment helical, sheet, in a loop, or a combination? What are the φ, ψ, and ω angles of the middle residue of the first fragment window? Create a new subroutine in your folding code for an alternate random move based upon a “fragment insertion”. A fragment insertion is the replacement of the torsion angles for a set of consecutive residues with new torsion angles pulled at random from a fragment library file. Prior to calling the subroutine, load the set of fragments from the fragment file:```from pyrosetta.rosetta.core.fragment import *fragset = ConstantLengthFragSet(3)fragset.read_fragment_file("3mer.frags")```
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Using FragmentMover and MoveMap Next, we will construct another `Mover` object — this time a `FragmentMover` — using the above fragment set and a `MoveMap` object as options. A `MoveMap` specifies which degrees of freedom are allowed to change in the `pose` when the `Mover` is applied (in this case, all backbone torsion angles):```from pyrosetta.rosetta.protocols.simple_moves import ClassicFragmentMovermovemap = MoveMap()movemap.set_bb(True)mover_3mer = ClassicFragmentMover(fragset, movemap)```
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Note that when a MoveMap is constructed, all degrees of freedom are set to False initially. If you still have a *PyMOL_Mover* instantiated, you can quickly visualize which degrees of freedom will be allowed by sending your move map to PyMOL with ```test_pose = pyrosetta.pose_from_sequence("RFPMMSTFKVLLCGAVLSRIDAG")pmm.apply(test_pose)pmm.send_movemap(test_pose, movemap)```
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Each time this mover is applied, it will select a random 3-mer window and insert only the backbone torsion angles from a random matching fragment in the fragment set. Here is an example using the above `test_pose`:```mover_3mer.apply(test_pose)pmm.apply(test_pose)```
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____ |
d2l-en/tensorflow/chapter_convolutional-modern/batch-norm.ipynb | ###Markdown
Batch Normalization:label:`sec_batch_norm`Training deep neural networks is difficult.And getting them to converge in a reasonable amount of time can be tricky.In this section, we describe *batch normalization*, a popular and effective techniquethat consistently accelerates the convergence of deep networks :cite:`Ioffe.Szegedy.2015`.Together with residual blocks---covered later in :numref:`sec_resnet`---batch normalizationhas made it possible for practitionersto routinely train networks with over 100 layers. Training Deep NetworksTo motivate batch normalization, let us reviewa few practical challenges that arisewhen training machine learning models and neural networks in particular.First, choices regarding data preprocessing often make an enormous difference in the final results.Recall our application of MLPs to predicting house prices (:numref:`sec_kaggle_house`).Our first step when working with real datawas to standardize our input featuresto each have a mean of zero and variance of one.Intuitively, this standardization plays nicely with our optimizersbecause it puts the parameters *a priori* at a similar scale. Second, for a typical MLP or CNN, as we train,the variables (e.g., affine transformation outputs in MLP)in intermediate layers may take values with widely varying magnitudes:both along the layers from the input to the output, across units in the same layer,and over time due to our updates to the model parameters.The inventors of batch normalization postulated informallythat this drift in the distribution of such variables could hamper the convergence of the network.Intuitively, we might conjecture that if onelayer has variable values that are 100 times that of another layer,this might necessitate compensatory adjustments in the learning rates. 
Third, deeper networks are complex and easily capable of overfitting.This means that regularization becomes more critical.Batch normalization is applied to individual layers(optionally, to all of them) and works as follows:In each training iteration,we first normalize the inputs (of batch normalization)by subtracting their mean anddividing by their standard deviation,where both are estimated based on the statistics of the current minibatch.Next, we apply a scale coefficient and a scale offset.It is precisely due to this *normalization* based on *batch* statisticsthat *batch normalization* derives its name.Note that if we tried to apply batch normalization with minibatches of size 1,we would not be able to learn anything.That is because after subtracting the means,each hidden unit would take value 0!As you might guess, since we are devoting a whole section to batch normalization,with large enough minibatches, the approach proves effective and stable.One takeaway here is that when applying batch normalization,the choice of batch size may beeven more significant than without batch normalization.Formally, denoting by $\mathbf{x} \in \mathcal{B}$ an input to batch normalization ($\mathrm{BN}$)that is from a minibatch $\mathcal{B}$,batch normalization transforms $\mathbf{x}$according to the following expression:$$\mathrm{BN}(\mathbf{x}) = \boldsymbol{\gamma} \odot \frac{\mathbf{x} - \hat{\boldsymbol{\mu}}_\mathcal{B}}{\hat{\boldsymbol{\sigma}}_\mathcal{B}} + \boldsymbol{\beta}.$$:eqlabel:`eq_batchnorm`In :eqref:`eq_batchnorm`,$\hat{\boldsymbol{\mu}}_\mathcal{B}$ is the sample meanand $\hat{\boldsymbol{\sigma}}_\mathcal{B}$ is the sample standard deviation of the minibatch $\mathcal{B}$.After applying standardization,the resulting minibatchhas zero mean and unit variance.Because the choice of unit variance(vs. 
some other magic number) is an arbitrary choice,we commonly include elementwise*scale parameter* $\boldsymbol{\gamma}$ and *shift parameter* $\boldsymbol{\beta}$that have the same shape as $\mathbf{x}$.Note that $\boldsymbol{\gamma}$ and $\boldsymbol{\beta}$ are parameters that need to be learned jointly with the other model parameters.Consequently, the variable magnitudesfor intermediate layers cannot diverge during trainingbecause batch normalization actively centers and rescales them backto a given mean and size (via $\hat{\boldsymbol{\mu}}_\mathcal{B}$ and ${\hat{\boldsymbol{\sigma}}_\mathcal{B}}$).One piece of practitioner's intuition or wisdomis that batch normalization seems to allow for more aggressive learning rates.Formally, we calculate $\hat{\boldsymbol{\mu}}_\mathcal{B}$ and ${\hat{\boldsymbol{\sigma}}_\mathcal{B}}$ in :eqref:`eq_batchnorm` as follows:$$\begin{aligned} \hat{\boldsymbol{\mu}}_\mathcal{B} &= \frac{1}{|\mathcal{B}|} \sum_{\mathbf{x} \in \mathcal{B}} \mathbf{x},\\\hat{\boldsymbol{\sigma}}_\mathcal{B}^2 &= \frac{1}{|\mathcal{B}|} \sum_{\mathbf{x} \in \mathcal{B}} (\mathbf{x} - \hat{\boldsymbol{\mu}}_{\mathcal{B}})^2 + \epsilon.\end{aligned}$$Note that we add a small constant $\epsilon > 0$to the variance estimateto ensure that we never attempt division by zero,even in cases where the empirical variance estimate might vanish.The estimates $\hat{\boldsymbol{\mu}}_\mathcal{B}$ and ${\hat{\boldsymbol{\sigma}}_\mathcal{B}}$ counteract the scaling issueby using noisy estimates of mean and variance.You might think that this noisiness should be a problem.As it turns out, this is actually beneficial.This turns out to be a recurring theme in deep learning.For reasons that are not yet well-characterized theoretically,various sources of noise in optimizationoften lead to faster training and less overfitting:this variation appears to act as a form of regularization.In some preliminary research,:cite:`Teye.Azizpour.Smith.2018` and :cite:`Luo.Wang.Shao.ea.2018`relate the properties of batch normalization to Bayesian priors and penalties respectively.In particular, this sheds some light on the puzzleof why batch normalization works best for moderate minibatches sizes in the $50 \sim 100$ range.Fixing a trained model, you might thinkthat we would prefer using the entire datasetto estimate the mean and variance.Once training is complete, why would we wantthe same image to be classified differently,depending on the batch in which it happens to reside?During training, such exact calculation is infeasiblebecause the intermediate variablesfor all data exampleschange every time we update our model.However, once the model is trained,we can calculate the means and variancesof each layer's variables based on the entire dataset.Indeed this is standard practice formodels employing batch normalizationand thus batch normalization layers function differentlyin *training mode* (normalizing by minibatch statistics)and in *prediction mode* (normalizing by dataset statistics).We are now ready to take a look at how batch normalization works in practice. Batch Normalization LayersBatch normalization implementations for fully-connected layersand convolutional layers are slightly different.We discuss both cases below.Recall that one key differences between batch normalization and other layersis that because batch normalization operates on a full minibatch at a time,we cannot just ignore the batch dimensionas we did before when introducing other layers. 
Fully-Connected LayersWhen applying batch normalization to fully-connected layers,the original paper inserts batch normalization after the affine transformationand before the nonlinear activation function (later applications may insert batch normalization right after activation functions) :cite:`Ioffe.Szegedy.2015`.Denoting the input to the fully-connected layer by $\mathbf{x}$,the affine transformationby $\mathbf{W}\mathbf{x} + \mathbf{b}$ (with the weight parameter $\mathbf{W}$ and the bias parameter $\mathbf{b}$),and the activation function by $\phi$,we can express the computation of a batch-normalization-enabled,fully-connected layer output $\mathbf{h}$ as follows:$$\mathbf{h} = \phi(\mathrm{BN}(\mathbf{W}\mathbf{x} + \mathbf{b}) ).$$Recall that mean and variance are computedon the *same* minibatch on which the transformation is applied. Convolutional LayersSimilarly, with convolutional layers,we can apply batch normalization after the convolutionand before the nonlinear activation function.When the convolution has multiple output channels,we need to carry out batch normalizationfor *each* of the outputs of these channels,and each channel has its own scale and shift parameters,both of which are scalars.Assume that our minibatches contain $m$ examplesand that for each channel,the output of the convolution has height $p$ and width $q$.For convolutional layers, we carry out each batch normalizationover the $m \cdot p \cdot q$ elements per output channel simultaneously.Thus, we collect the values over all spatial locationswhen computing the mean and varianceand consequently apply the same mean and variancewithin a given channelto normalize the value at each spatial location. Batch Normalization During PredictionAs we mentioned earlier, batch normalization typically behaves differentlyin training mode and prediction mode.First, the noise in the sample mean and the sample variancearising from estimating each on minibatchesare no longer desirable once we have trained the model.Second, we might not have the luxuryof computing per-batch normalization statistics.For example,we might need to apply our model to make one prediction at a time.Typically, after training, we use the entire datasetto compute stable estimates of the variable statisticsand then fix them at prediction time.Consequently, batch normalization behaves differently during training and at test time.Recall that dropout also exhibits this characteristic. Implementation from ScratchBelow, we implement a batch normalization layer with tensors from scratch.
###Code
from d2l import tensorflow as d2l
import tensorflow as tf
def batch_norm(X, gamma, beta, moving_mean, moving_var, eps):
# Compute reciprocal of square root of the moving variance elementwise
inv = tf.cast(tf.math.rsqrt(moving_var + eps), X.dtype)
# Scale and shift
inv *= gamma
Y = X * inv + (beta - moving_mean * inv)
return Y
###Output
_____no_output_____
###Markdown
We can now create a proper `BatchNorm` layer.Our layer will maintain proper parametersfor scale `gamma` and shift `beta`,both of which will be updated in the course of training.Additionally, our layer will maintainmoving averages of the means and variancesfor subsequent use during model prediction.Putting aside the algorithmic details,note the design pattern underlying our implementation of the layer.Typically, we define the mathematics in a separate function, say `batch_norm`.We then integrate this functionality into a custom layer,whose code mostly addresses bookkeeping matters,such as moving data to the right device context,allocating and initializing any required variables,keeping track of moving averages (here for mean and variance), and so on.This pattern enables a clean separation of mathematics from boilerplate code.Also note that for the sake of conveniencewe did not worry about automatically inferring the input shape here,thus we need to specify the number of features throughout.Do not worry, the high-level batch normalization APIs in the deep learning framework will care of this for us and we will demonstrate that later.
###Code
class BatchNorm(tf.keras.layers.Layer):
def __init__(self, **kwargs):
super(BatchNorm, self).__init__(**kwargs)
def build(self, input_shape):
weight_shape = [input_shape[-1], ]
# The scale parameter and the shift parameter (model parameters) are
# initialized to 1 and 0, respectively
self.gamma = self.add_weight(name='gamma', shape=weight_shape,
initializer=tf.initializers.ones, trainable=True)
self.beta = self.add_weight(name='beta', shape=weight_shape,
initializer=tf.initializers.zeros, trainable=True)
# The variables that are not model parameters are initialized to 0
self.moving_mean = self.add_weight(name='moving_mean',
shape=weight_shape, initializer=tf.initializers.zeros,
trainable=False)
self.moving_variance = self.add_weight(name='moving_variance',
shape=weight_shape, initializer=tf.initializers.ones,
trainable=False)
super(BatchNorm, self).build(input_shape)
def assign_moving_average(self, variable, value):
momentum = 0.9
delta = variable * momentum + value * (1 - momentum)
return variable.assign(delta)
@tf.function
def call(self, inputs, training):
if training:
axes = list(range(len(inputs.shape) - 1))
batch_mean = tf.reduce_mean(inputs, axes, keepdims=True)
batch_variance = tf.reduce_mean(tf.math.squared_difference(
inputs, tf.stop_gradient(batch_mean)), axes, keepdims=True)
batch_mean = tf.squeeze(batch_mean, axes)
batch_variance = tf.squeeze(batch_variance, axes)
mean_update = self.assign_moving_average(
self.moving_mean, batch_mean)
variance_update = self.assign_moving_average(
self.moving_variance, batch_variance)
self.add_update(mean_update)
self.add_update(variance_update)
mean, variance = batch_mean, batch_variance
else:
mean, variance = self.moving_mean, self.moving_variance
output = batch_norm(inputs, moving_mean=mean, moving_var=variance,
beta=self.beta, gamma=self.gamma, eps=1e-5)
return output
###Output
_____no_output_____
###Markdown
Applying Batch Normalization in LeNet To see how to apply `BatchNorm` in context, below we apply it to a traditional LeNet model (:numref:`sec_lenet`). Recall that batch normalization is applied after the convolutional layers or fully-connected layers but before the corresponding activation functions.
###Code
# Recall that this has to be a function that will be passed to `d2l.train_ch6`
# so that model building or compiling need to be within `strategy.scope()` in
# order to utilize the CPU/GPU devices that we have
def net():
return tf.keras.models.Sequential([
tf.keras.layers.Conv2D(filters=6, kernel_size=5,
input_shape=(28, 28, 1)),
BatchNorm(),
tf.keras.layers.Activation('sigmoid'),
tf.keras.layers.MaxPool2D(pool_size=2, strides=2),
tf.keras.layers.Conv2D(filters=16, kernel_size=5),
BatchNorm(),
tf.keras.layers.Activation('sigmoid'),
tf.keras.layers.MaxPool2D(pool_size=2, strides=2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(120),
BatchNorm(),
tf.keras.layers.Activation('sigmoid'),
tf.keras.layers.Dense(84),
BatchNorm(),
tf.keras.layers.Activation('sigmoid'),
tf.keras.layers.Dense(10)]
)
###Output
_____no_output_____
###Markdown
As before, we will train our network on the Fashion-MNIST dataset. This code is virtually identical to that when we first trained LeNet (:numref:`sec_lenet`). The main difference is the considerably larger learning rate.
###Code
lr, num_epochs, batch_size = 1.0, 10, 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
net = d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr)
###Output
loss 0.243, train acc 0.911, test acc 0.868
38529.9 examples/sec on /GPU:0
###Markdown
Let us have a look at the scale parameter `gamma` and the shift parameter `beta` learned from the first batch normalization layer.
###Code
tf.reshape(net.layers[1].gamma, (-1,)), tf.reshape(net.layers[1].beta, (-1,))
###Output
_____no_output_____
###Markdown
Concise Implementation Compared with the `BatchNorm` class, which we just defined ourselves, we can use the `BatchNorm` class defined in high-level APIs from the deep learning framework directly. The code looks virtually identical to the application of our implementation above.
###Code
def net():
return tf.keras.models.Sequential([
tf.keras.layers.Conv2D(filters=6, kernel_size=5,
input_shape=(28, 28, 1)),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Activation('sigmoid'),
tf.keras.layers.MaxPool2D(pool_size=2, strides=2),
tf.keras.layers.Conv2D(filters=16, kernel_size=5),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Activation('sigmoid'),
tf.keras.layers.MaxPool2D(pool_size=2, strides=2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(120),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Activation('sigmoid'),
tf.keras.layers.Dense(84),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Activation('sigmoid'),
tf.keras.layers.Dense(10),
])
###Output
_____no_output_____
###Markdown
Below, we use the same hyperparameters to train our model. Note that as usual, the high-level API variant runs much faster because its code has been compiled to C++ or CUDA while our custom implementation must be interpreted by Python.
###Code
d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr)
###Output
loss 0.249, train acc 0.909, test acc 0.875
55631.4 examples/sec on /GPU:0
|
Evaluation.ipynb | ###Markdown
Evaluation notebook You can run this code while the Syne Tune parameter search is still running. You can find your experiment name in the first few log lines printed by Syne Tune.
###Code
from matplotlib import pyplot as plt
plt.style.use("seaborn-pastel") # set style
plt.rcParams["figure.figsize"] = [15, 5]
plt.rcParams["font.size"] = 12
from syne_tune.experiments import load_experiment
tuning_experiment = load_experiment("<enter your experiment here>")
# metric over time
tuning_experiment.plot()
df = tuning_experiment.results
columns = ['avg_download_time',
'config_max_concurrency',
'config_max_io_queue',
'config_io_chunksize',
'config_multipart_chunksize']
# 10 latest trials, sorted by time
df[['trial_id', 'st_tuner_time'] + columns].sort_values(by='trial_id', ascending=True)#.tail(n=10)
# 10 best trials, sorted by performance
df[['trial_id', 'st_tuner_time'] + columns].sort_values(by='avg_download_time', ascending=True).head(n=10)
import pandas as pd
# check which areas of the configuration space have been explored
pd.plotting.scatter_matrix(
frame=df[columns],
figsize=(20,15),
diagonal='kde',
s=100,
c='blue',
alpha=0.3)
###Output
_____no_output_____
###Markdown
Define a new data-reading function, since the preprocessing code isn't included in this notebook
###Code
import csv

def read_data_span(filename):
"""Reads csv file with python, text."""
data = []
with open(filename) as csvfile:
reader = csv.DictReader(csvfile)
count = 0
for row in reader:
if row['span'] == '[]' or row['span'] == []:
data.append([])
else:
data.append([int(j) for j in row['span'][1:-1].split(", ")])
csvfile.close()
return data
###Output
_____no_output_____
###Markdown
Read the test text and labels
###Code
texts = read_text_data('data/tsd_test_readable.csv')
spans = read_data_span('data/tsd_test_readable.csv')
##ADDED FOR PRESENTATION##
def add_to_dict(filename):
with open(filename) as read_list:
reader = csv.reader(read_list)
for row in reader:
toxic_dictionary[row[0]] = 1
read_list.close()
###Output
_____no_output_____
###Markdown
Inspect the text
###Code
texts
###Output
_____no_output_____
###Markdown
Inspect the spans
###Code
spans
###Output
_____no_output_____
###Markdown
Load the test data into a numpy array
###Code
test_X = np.zeros(shape=(len(texts), 1024, 50))
c2v_model = chars2vec.load_model('eng_50')
for x, string in enumerate(texts):
for y, char in enumerate(string):
char_vect = c2v_model.vectorize_words([char])
test_X[x][y] = [word_vect for word_vect in char_vect[0]]
example = np.zeros(shape=(1, 1024, 50))
test_string = ["You wanker, you fucking egit I hope you die you a**hole"]
for x, string in enumerate(test_string):
for y, char in enumerate(string):
char_vect = c2v_model.vectorize_words([char])
example[x][y] = [word_vect for word_vect in char_vect[0]]
model = models.load_model(f"DeconvNet_model_300_epochs")
y_pred = model.predict(test_X)
def fix_word_boundaries(span, text):
# "You fucking Moron you silly cunt" [6,7,8,9,10,11,12,13,14,15,16,28,29,30]
# [4,5,6,7,8,9,10,11,12,13,14,15,16,28,29,30,31]
seperated_text = []
word = ''
new_span = []
current_word_span = []
toxic_word = False
for n, char in enumerate(text):
if n in span:
toxic_word = True
if char == ' ':
seperated_text.append(word)
seperated_text.append(' ')
word = ''
if toxic_word:
new_span.extend(current_word_span)
current_word_span = []
toxic_word = False
else:
current_word_span = []
toxic_word = False
else:
word += char
current_word_span.append(n)
if n == len(text) - 1:
seperated_text.append(word)
if toxic_word:
new_span.extend(current_word_span)
return new_span
scores = []
for x, pred in enumerate(y_pred):
y_pred_f1_compatible = [j for j, i in enumerate(pred) if np.argmax(i) == 0]
#y_pred_f1_compatible = fix_word_boundaries(y_pred_f1_compatible, texts[x])
y_true_f1_compatible = spans[x]
score = f1(y_pred_f1_compatible, y_true_f1_compatible)
scores.append(score)
print('avg F1 %g' % statistics.mean(scores))
test_pred = model.predict(example)
for x, pred in enumerate(test_pred):
    char_arr = [j for j, i in enumerate(pred) if np.argmax(i) == 0]  # use the current prediction rather than always the first one
print(f"text: {test_string[x]}")
print(f"Predicted span: {char_arr}")
print(f"Flagged text: {test_string[x][char_arr[0]:char_arr[-1]]}")
##ADDED FOR PRESENTATION##
import csv
import pandas as pd
with open("data/tsd_train_readable.csv", encoding="utf-8", errors="ignore") as csv_file:
with open("data/tsd_train_ground_truth_words.csv", "w", newline="") as out_file:
reader = csv.reader(csv_file)
writer = csv.writer(out_file)
for row in reader:
invalid_span = False
complete_span = False
string_vector = ""
phrases = []
if row[0] == "[]":
offset_vector = []
string_vector = row[1]
complete_span = True
else:
offset_vector = row[0][1:-1].split(", ")
offset_vector_int = []
for item in offset_vector:
if item.isnumeric():
offset_vector_int.append(int(item))
else:
invalid_span = True
break
offset_vector = offset_vector_int
for string_vector_index, char_index in enumerate(offset_vector):
if invalid_span:
break
if complete_span:
break
if char_index > len(row[1])-1:
break
if string_vector_index == 0:
string_vector = string_vector + row[1][int(char_index)]
else:
if int(char_index) != offset_vector[string_vector_index-1] + 1:
if len(string_vector.split(" ")) < 2:
phrases.append(string_vector)
string_vector = ""
string_vector = string_vector + row[1][int(char_index)]
else:
string_vector = string_vector + row[1][int(char_index)]
if len(string_vector.split(" ")) < 2:
phrases.append(string_vector)
if complete_span == False and string_vector != "" and phrases != []:
for phrase in phrases:
print(phrase)
writer.writerow([phrase])
csv_file.close()
out_file.close()
##ADDED FOR PRESENTATION##
toxic_dictionary = dict({})
add_to_dict("data/tsd_train_ground_truth_words.csv")
test = read_text_data("data/tsd_test_readable.csv")
##ADDED FOR PRESENTATION##
scores = []
for x, text in enumerate(test):
predict_spans = []
for phrase in toxic_dictionary.keys():
if text.find(phrase) != -1:
predict_spans.extend(range(text.find(phrase), text.find(phrase) + len(phrase)))
break
predict_spans = list(set(predict_spans))
#ensemble = set(predict_spans).intersection([j for j, i in enumerate(y_pred[x]) if np.argmax(i) == 0])
#score = f1(ensemble, spans[x])
#uncomment the comment out comment below to test ensemble
score = f1(predict_spans, spans[x])
scores.append(score)
print('avg F1 %g' % statistics.mean(scores))
###Output
_____no_output_____
###Markdown
Open all models' predictions
###Code
predictions, shareindex, sharecolumns = {}, None, None
for model in ['regression_0', 'bayes_sir','sir_0', 'ihme']:
predictions[model] = {pd.to_datetime(f[3:11]): pd.read_csv(os.path.join('results/', model,f), parse_dates = True, index_col = 'date') for f in os.listdir(os.path.join('results/', model)) if state in f}
predictions[model] = pd.DataFrame({d: predictions[model][d]['pred_{}'.format(type_analysis)] for d in sorted(predictions[model]) if 'pred_{}'.format(type_analysis) in predictions[model][d].columns})
if predictions[model].empty:
del predictions[model]
elif shareindex is None:
shareindex = predictions[model].index
sharecolumns = predictions[model].columns
else:
shareindex = shareindex.intersection(predictions[model].index)
sharecolumns = sharecolumns.intersection(predictions[model].columns)
print("Opened for {}: {}".format(type_analysis, ', '.join(predictions.keys())))
ground_truth = pd.read_csv('https://covidtracking.com/api/v1/states/daily.csv', parse_dates=['date'])[['date', 'state', 'positive', 'death']]
ground_truth = ground_truth[ground_truth.state == state]
ground_truth.index = ground_truth.date
if type_analysis == "cases":
ground_truth = ground_truth.sort_index()['positive'].dropna()
elif type_analysis == "deaths":
ground_truth = ground_truth.sort_index()['death'].dropna()
shareindex = shareindex.intersection(ground_truth.index)
ground_truth
###Output
_____no_output_____
###Markdown
Columns are the dates used for training; the index holds the dates at which the predictions are evaluated
###Code
predictions['bayes_sir']
predictions['sir_0']
###Output
_____no_output_____
###Markdown
Remove training data
###Code
for model in predictions:
for c in predictions[model].columns:
predictions[model].loc[predictions[model].index <= c, c] = np.nan
predictions[model] = predictions[model].loc[shareindex, sharecolumns].dropna(how='all')
predictions['bayes_sir']
predictions['sir_0']
predictions['regression_0']
###Output
_____no_output_____
###Markdown
Compute difference
###Code
for model in predictions:
for c in predictions[model].columns:
predictions[model][c] -= ground_truth[predictions[model].index]
predictions[model][c] = predictions[model][c].abs()
if percentage:
predictions[model][c] /= ground_truth[predictions[model].index]
predictions[model][c] *= 100
predictions['bayes_sir']
predictions['sir_0']
###Output
_____no_output_____
###Markdown
Relative dataframe Computing errors relative to the training start date
###Code
predictions_relative = {}
for model in predictions:
predictions_relative[model] = predictions[model].copy()
for c in predictions[model].columns:
predictions_relative[model][c] = predictions_relative[model][c].shift(-predictions_relative[model][c].isnull().sum())
predictions_relative[model].index = np.arange(len(predictions_relative[model]))
predictions_relative['bayes_sir']
predictions_relative['sir_0']
###Output
_____no_output_____
###Markdown
Prediction in x days What is the error if the model tries to predict x days ahead?
###Code
plt.title("{} in {}".format(type_analysis, state))
for model in predictions_relative:
std = predictions_relative[model].std(axis = 1)
mean = predictions_relative[model].mean(axis = 1)
interval = 1.96 * std / np.sqrt(predictions_relative[model].notna().sum(axis = 1))
ax = mean.plot(label = model, color = colors[model])
plt.fill_between(mean.index, mean + interval, mean - interval, color = ax.get_lines()[-1].get_color(), alpha=.1)
plt.xlabel("Horizon after training (in days)")
plt.ylabel("Absolute error ({})".format("%" if percentage else "number cases"))
plt.yscale('log')
plt.legend()
plt.show()
for model in predictions_relative:
plt.title("{} : {} in {}".format(model, type_analysis, state))
for i in [1, 3, 7]:
data = predictions_relative[model].loc[i - 1].copy()
data.index += i
data.plot(label = 'In {} days'.format(i))
plt.xlabel("Horizon after training (in days)")
plt.ylabel("Absolute error ({})".format("%" if percentage else "number cases"))
plt.yscale('log')
plt.legend()
plt.show()
###Output
/home/vincent/.local/lib/python3.7/site-packages/pandas/core/arrays/datetimelike.py:1219: FutureWarning: Addition/subtraction of integers and integer-arrays to DatetimeArray is deprecated, will be removed in a future version. Instead of adding/subtracting `n`, use `n * self.freq`
maybe_integer_op_deprecated(self)
/home/vincent/.local/lib/python3.7/site-packages/pandas/core/arrays/datetimelike.py:1219: FutureWarning: Addition/subtraction of integers and integer-arrays to DatetimeArray is deprecated, will be removed in a future version. Instead of adding/subtracting `n`, use `n * self.freq`
maybe_integer_op_deprecated(self)
/home/vincent/.local/lib/python3.7/site-packages/pandas/core/arrays/datetimelike.py:1219: FutureWarning: Addition/subtraction of integers and integer-arrays to DatetimeArray is deprecated, will be removed in a future version. Instead of adding/subtracting `n`, use `n * self.freq`
maybe_integer_op_deprecated(self)
###Markdown
Prediction for x days What is the average error over the following x days?
###Code
plt.title("{} in {}".format(type_analysis, state))
for model in predictions_relative:
mean_matrix = predictions_relative[model].rolling(len(predictions_relative[model]), min_periods=1).mean()
mean_matrix[predictions_relative[model].isnull()] = np.nan
std = mean_matrix.std(axis = 1)
mean = mean_matrix.mean(axis = 1)
interval = 1.96 * std / np.sqrt(mean_matrix.notna().sum(axis = 1))
ax = mean.plot(label = model, color = colors[model])
plt.fill_between(mean.index, mean + interval, mean - interval, color = ax.get_lines()[-1].get_color(), alpha=.1)
plt.xlabel("Horizon after training")
plt.ylabel("Absolute error ({})".format("%" if percentage else "number cases"))
plt.yscale('log')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Prediction for a given date What are the predictions that the model makes for a given date? A large variance indicates that the model's predictions changed over time
###Code
plt.title("{} in {}".format(type_analysis, state))
for model in predictions:
std = predictions[model].std(axis = 1)
mean = predictions[model].mean(axis = 1)
interval = 1.96 * std / np.sqrt(predictions[model].notna().sum(axis = 1))
ax = mean.plot(label = model, color = colors[model])
plt.fill_between(mean.index, mean + interval, mean - interval, color = ax.get_lines()[-1].get_color(), alpha=.1)
plt.xlabel("Date")
plt.ylabel("Absolute error ({})".format("%" if percentage else "number cases"))
plt.yscale('log')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Prediction until a given date What is the average error up to a given date?
###Code
plt.title("{} in {}".format(type_analysis, state))
for model in predictions:
mean_matrix = predictions[model].rolling(len(predictions[model]), min_periods=1).mean()
mean_matrix[predictions[model].isnull()] = np.nan
std = mean_matrix.std(axis = 1)
mean = mean_matrix.mean(axis = 1)
interval = 1.96 * std / np.sqrt(mean_matrix.notna().sum(axis = 1))
ax = mean.plot(label = model, color = colors[model])
plt.fill_between(mean.index, mean + interval, mean - interval, color = ax.get_lines()[-1].get_color(), alpha=.1)
plt.xlabel("Date")
plt.ylabel("Absolute error ({})".format("%" if percentage else "number cases"))
plt.yscale('log')
if percentage:
plt.ylim(-0.1, 30)
plt.legend()
plt.show()
###Output
/home/vincent/miniconda3/lib/python3.7/site-packages/ipykernel_launcher.py:15: UserWarning: Attempted to set non-positive bottom ylim on a log-scaled axis.
Invalid limit will be ignored.
from ipykernel import kernelapp as app
###Markdown
Parameters
###Code
params = {}
for model in ['bayes_sir', 'sir_0', 'sir_100']:
print(model)
params[model] = {pd.to_datetime(f[8:-4]): pd.read_csv(os.path.join('params/', model,f), index_col = 'Parameter')['Value'] for f in os.listdir(os.path.join('params/', model)) if state in f}
params[model] = pd.DataFrame({d: params[model][d] for d in sorted(params[model])}).T
###Output
bayes_sir
sir_0
sir_100
###Markdown
Gamma
###Code
for model in params:
params[model].gamma.plot(label = model)
plt.xlabel("Training until")
plt.ylabel("Gamma value")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Mortality
###Code
for model in params:
(params[model].DeathProportion * 100).plot(label = model)
plt.xlabel("Training until")
plt.ylabel("Percentage Death (%)")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Strategy Evaluation Simple Evaluation A simple way to evaluate a strategy is to compare it with a benchmark. A strategy is considered better if 1. its returns are greater than the benchmark's 2. its drawdown is smaller than the benchmark's 3. its correlation to the benchmark is low 4. the backtesting/simulation period covers at least 10 years. A minimal sketch of this comparison is shown below.
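The following sketch makes these checks concrete with pandas; the two daily-return series are randomly generated stand-ins, not real strategy or benchmark data.
###Code
import numpy as np
import pandas as pd

# Hypothetical daily returns; replace with the real strategy and benchmark series
idx = pd.date_range("2010-01-01", periods=2520, freq="B")  # roughly 10 years of business days
rng = np.random.default_rng(0)
strategy_returns = pd.Series(rng.normal(0.0005, 0.01, len(idx)), index=idx)
benchmark_returns = pd.Series(rng.normal(0.0003, 0.01, len(idx)), index=idx)

def total_return(returns):
    """Cumulative return over the whole period."""
    return (1 + returns).prod() - 1

def max_drawdown(returns):
    """Largest peak-to-trough drop of the cumulative wealth curve (a negative number)."""
    wealth = (1 + returns).cumprod()
    return (wealth / wealth.cummax() - 1).min()

print("Strategy return:    %6.2f%%" % (100 * total_return(strategy_returns)))
print("Benchmark return:   %6.2f%%" % (100 * total_return(benchmark_returns)))
print("Strategy drawdown:  %6.2f%%" % (100 * max_drawdown(strategy_returns)))
print("Benchmark drawdown: %6.2f%%" % (100 * max_drawdown(benchmark_returns)))
print("Correlation:        %6.2f" % strategy_returns.corr(benchmark_returns))
print("Period covered:     %6.1f years" % ((idx[-1] - idx[0]).days / 365.25))
###Output
_____no_output_____
###Markdown
The imports below load pandas, numpy and pyfolio, which can produce the same statistics (and many more) from a returns series.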
###Code
import pandas as pd
import numpy as np
import pyfolio as pf
###Output
_____no_output_____
###Markdown
Evaluating Traffic Sign Recognition Pipelines --- This notebook is part of https://github.com/risc-mi/atsd. It demonstrates how detection and classification models trained on ATSD-Scenes and ATSD-Signs can be evaluated, by calculating class-wise average precision and mean average precision (mAP). Package Imports
###Code
from pathlib import Path
import pandas as pd
from util import evaluator
###Output
_____no_output_____
###Markdown
Paths Set `ROOT` to the path to the directory where ATSD-Scenes is located. This is the directory containing folders `"/train"` and `"/test"`.
###Code
ROOT = Path('path/to/atsd-scenes')
###Output
_____no_output_____
###Markdown
Load Ground Truth and Recognition Results Load ground truth:
###Code
annotations = pd.read_csv(ROOT / 'test/meta_test.csv', index_col=0)
###Output
_____no_output_____
###Markdown
Drop categories not used for training the detection models:
###Code
annotations = annotations[~annotations['class_id'].str[:2].isin(('09', 'xx'))]
len(annotations)
annotations.head()
###Output
_____no_output_____
###Markdown
Load recognition results. These can be either detection results, where only the traffic sign category is predicted, or results from the entire recognition pipeline, where the exact class is predicted. In this case, we load results from a detection+classification pipeline trained on the public training set and evaluated on the public test set. The classifier was trained with geometric+LED augmentation enabled.
###Code
recognitions = pd.read_csv('results/1_7.csv', index_col=0)
###Output
_____no_output_____
###Markdown
The traffic sign category predicted by the detector is stored in column `"cat_id"`, the detector's confidence in column `"conf"`, the traffic sign class predicted by the classifier can be found in column `"pred"` and the classifier's softmax score in column `"pred_score"`. Furthermore, every row is assigned a unique `"detection_id"`, similar to `"annotation_id"` in the ground-truth annotations:
###Code
recognitions.head()
###Output
_____no_output_____
###Markdown
In this case, bounding boxes are specified in columns `"xtl"`, `"ytl"`, `"xbr"` and `"ybr"`. Alternatively, it is also possible to have a single column `"bbox"` containing string-representations of the bounding boxes in the YOLO-native `(x_center, y_center, width, height)` format. Evaluate Detection Performance Evaluate detection performance, i.e., ignore class predictions and only consider categories. Note that detection- and annotation-IDs must be on the respective row index:
###Code
det_matches, det_metrics = evaluator.evaluate(
recognitions.set_index('detection_id'),
annotations.set_index('annotation_id'),
conf='conf',
pred='cat_id',
iou_threshold=0.5,
conf_threshold=0.25,
discard_disagreements=False,
area_range=None
)
###Output
_____no_output_____
###Markdown
Per-class performance metrics:
###Code
det_metrics
###Output
_____no_output_____
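###Markdown
For reference, the average precision reported for each class can be computed from that class's detections sorted by descending confidence. The snippet below is a generic all-point AP sketch with made-up TP/FP flags; it is not the project's `evaluator` implementation, just an illustration of the metric.
###Code
import numpy as np

def average_precision(tp_flags, num_ground_truth):
    """All-point AP for one class, given detections sorted by descending confidence."""
    tp_flags = np.asarray(tp_flags, dtype=float)
    tp = np.cumsum(tp_flags)
    fp = np.cumsum(1 - tp_flags)
    recall = tp / num_ground_truth
    precision = tp / (tp + fp)
    # Make precision monotonically decreasing, then integrate it over recall
    precision = np.maximum.accumulate(precision[::-1])[::-1]
    return float(np.sum(np.diff(np.concatenate(([0.0], recall))) * precision))

# Made-up example: 5 detections (1 = matched a ground-truth box), 4 ground-truth boxes
average_precision([1, 1, 0, 1, 0], num_ground_truth=4)
###Output
_____no_output_____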
###Markdown
Matched detections and ground truth annotations:
###Code
det_matches.head()
###Output
_____no_output_____
###Markdown
Evaluate Performance of Detection+Classification Pipeline Evaluate detection+classification performance. There are in fact only two differences to the invocation of function `evaluator.evaluate()` in the section above:
* `conf` is set to `"pred_score"`, to use the softmax score returned by the classifier as the detection confidence. It could be set to `"conf"` or any combination (product, minimum, maximum, etc.) of the two, but in our experiments `"pred_score"` worked best.
* `pred` is set to `"pred"`, which is the predicted traffic sign class.

Note that all ground-truth annotations of traffic sign classes not included among the 60 classes in ATSD-Signs are automatically ignored!
###Code
pip_matches, pip_metrics = evaluator.evaluate(
recognitions.set_index('detection_id'),
annotations.set_index('annotation_id'),
conf='pred_score',
pred='pred',
iou_threshold=0.5,
conf_threshold=0.25,
discard_disagreements=False,
area_range=None
)
###Output
_____no_output_____
###Markdown
Per-class performance metrics, sorted descending by average precision:
###Code
pip_metrics.sort_values('AP', ascending=False)
###Output
_____no_output_____
###Markdown
Mean average precision (mAP):
###Code
pip_metrics['AP'].mean()
###Output
_____no_output_____
###Markdown
Machine Learning - Evaluation File This script produces the ROC plot, as well as several other performance metrics, including the classifier scores, the log-loss for each classifier, the confusion matrix and the classification report including the F1 score. The F1 score can be interpreted as a weighted average of precision and recall, where an F1 score reaches its best value at 1 and its worst score at 0.
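Concretely, the F1 score is the harmonic mean of precision $P$ and recall $R$: $$F_1 = 2 \cdot \frac{P \cdot R}{P + R}.$$ For example, $P = 0.8$ and $R = 0.5$ give $F_1 \approx 0.62$.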
###Code
def ROC_plotting(title, y_test, y_score):
'''
This function generates the ROC plot for a given model.
Written by Jakke-Neiro
Last Modified by AndreiRoibu
Args:
title (string): String represending the name of the model.
y_test (ndarray): 1D array of test dataset
y_score (ndarray): 1D array of model-predicted labels
Returns:
ROC Plot
'''
n_classes = 2
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test, y_score)
roc_auc[i] = auc(fpr[i], tpr[i])
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
plt.figure()
lw = 2
plt.plot(fpr[0], tpr[0], color='darkorange', lw=lw, label='ROC curve (area = %0.2f)' % roc_auc[0])
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title(title)
plt.legend(loc="lower right")
plt.show()
def model_evaluation(X_train, X_test, y_train, y_test, classifier, predicted_train, predicted_test):
'''
This function prints the results of the different classifiers,a s well as several performance metrics
Written by AndreiRoibu
Args:
X_train (ndarray): 2D array of input dataset used for training
X_test (ndarray): 2D array of input dataset used for testing
y_train (ndarray): 1D array of train labels
y_test (ndarray): 1D array of test labels
classifier: the classifier model
predicted_train (ndarray): 1D array of model-predicted labels for the train dataset
predicted_test (ndarray): 1D array of model-predicted labels for the test dataset
Returns:
ROC Plot
'''
print("Training set score: %f" % classifier.score(X_train, y_train))
print("Training log-loss: %f" % log_loss(X_train, y_train))
print(confusion_matrix(y_train,predicted_train))
print(classification_report(y_train,predicted_train))
print("Test set score: %f" % classifier.score(X_test, y_test))
print("Test log-loss: %f" % log_loss(X_test, y_test))
print(confusion_matrix(y_test,predicted_test))
print(classification_report(y_test,predicted_test))
ROC_plotting("ROC",y_test, predicted_test)
###Output
_____no_output_____
###Markdown
Evaluation 4
1. take the most popular 1000 Python GitHub repositories
2. filter the top N frequently asked questions on StackOverflow (17000 -> 2000), ranked by question views + votes
3. verify whether the StackOverflow code snippet exists in the 1000 repositories - ElasticSearch - manually choose 100 questions from the ElasticSearch results
4. use the StackOverflow questions as input to the model, and manually evaluate whether the top 10 results contain correct answers

Automated Evaluation 6 replaces the 4th step of the earlier evaluation method with:
- first taking the top 10 results retrieved by NCS and, for each retrieved method, computing a similarity score between the ground-truth code snippet and the method
- choosing a threshold that minimizes false positives
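A minimal sketch of that thresholding step is shown below; the similarity scores and the threshold value are made up, and in practice the threshold would be tuned on a labelled sample so that false positives are minimized.
###Code
import numpy as np

def answered(similarities, threshold):
    """A question counts as answered if any of its top-10 results clears the threshold."""
    return bool(np.max(similarities) >= threshold)

# Made-up ground-truth similarity scores for the top-10 results of three questions
top10_scores = [
    [0.12, 0.08, 0.81, 0.05, 0.33, 0.10, 0.02, 0.07, 0.15, 0.04],
    [0.22, 0.18, 0.11, 0.25, 0.13, 0.10, 0.02, 0.07, 0.15, 0.04],
    [0.92, 0.08, 0.01, 0.05, 0.33, 0.10, 0.02, 0.07, 0.15, 0.04],
]
threshold = 0.5  # hypothetical value
answered_at_10 = [answered(scores, threshold) for scores in top10_scores]
print("Answered@10: %.2f" % (sum(answered_at_10) / len(answered_at_10)))
###Output
_____no_output_____
###Markdown
The cells below load the embeddings and the StackOverflow questions used for this evaluation.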
###Code
from gensim.models import KeyedVectors
from time import time
import numpy as np
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity
import pickle
# change the path to target files
st=time()
path_wordembedding="data/embeddings.txt"
path_docembedding="data/document_embeddings.csv"
path_stackoverflow="data/stack_overflow/StackOverFlow.csv"
# change hyperparameters
vocab_size=500
window_size=5
#StackOverflow start id
start_idx=0
end_idx=10 #will actually run to end_idx-1
# load StackOverflow data
st=time()
df_stack_overflow=pd.read_csv(path_stackoverflow)
print("Dimension of StackOverflow data: {}".format(df_stack_overflow.shape))
print("Run time: {} s".format(time()-st))
# load wordembedding: representation of words
st=time()
trained_ft_vectors = KeyedVectors.load_word2vec_format(path_wordembedding)
print("Run time: {} s".format(time()-st))
# load document embedding: representation of each source code function
st=time()
document_embeddings=np.loadtxt(fname=path_docembedding, delimiter=",")
print("Dimension of the document embedding: {}".format(document_embeddings.shape))
print("Run time: {} s".format(time()-st))
import math
import operator

# Normalize a word representation vector so that its L2 norm is 1.
# We do this so that the cosine similarity reduces to a simple dot product.
def normalize(word_representations):
for word in word_representations:
total=0
for key in word_representations[word]:
total+=word_representations[word][key]*word_representations[word][key]
total=math.sqrt(total)
for key in word_representations[word]:
word_representations[word][key]/=total
def dictionary_dot_product(dict1, dict2):
dot=0
for key in dict1:
if key in dict2:
dot+=dict1[key]*dict2[key]
return dot
def find_sim(word_representations, query):
if query not in word_representations:
print("'%s' is not in vocabulary" % query)
return None
scores={}
for word in word_representations:
cosine=dictionary_dot_product(word_representations[query], word_representations[word])
scores[word]=cosine
return scores
# Find the K words with highest cosine similarity to a query in a set of word_representations
def find_nearest_neighbors(word_representations, query, K):
scores=find_sim(word_representations, query)
if scores != None:
sorted_x = sorted(scores.items(), key=operator.itemgetter(1), reverse=True)
for idx, (k, v) in enumerate(sorted_x[:K]):
print("%s\t%s\t%.5f" % (idx,k,v))
def get_most_relevant_document(question, word_embedding, doc_embedding, num=10):
"""Return the functions that are most relevant to the natual language question.
Args:
question: A string. A Question from StackOverflow.
word_embedding: Word embedding generated from codebase.
doc_embedding: Document embedding generated from codebase
num: The number of top similar functions to return.
Returns:
A list of indices of the top NUM related functions to the QUESTION in the WORD_EMBEDDING.
"""
# convert QUESTION to a vector
tokenized_ques=question.split()
vec_ques=np.zeros((1,document_embeddings.shape[1])) #vocab_size
token_count=0
has_token_in_embedding=False
for token in tokenized_ques:
if token in word_embedding:
has_token_in_embedding=True
vec_ques+=word_embedding[token]
token_count+=1
if has_token_in_embedding:
mean_vec_ques=vec_ques/token_count
# compute similarity between this question and each of the source code snippets
cosine_sim=[]
for idx, doc in enumerate(document_embeddings):
#[TODO] fix dimension
try:
cosine_sim.append(cosine_similarity(mean_vec_ques, doc.reshape(1, -1))[0][0])
except ValueError:
print(question)
print(vec_ques, token_count)
print(mean_vec_ques)
print(doc.reshape(1, -1))
# get top `num` similar functions
result_func_id=np.array(cosine_sim).argsort()[-num:][::-1]
result_similarity=np.sort(np.array(cosine_sim))[-num:][::-1]
else:
result_func_id=np.nan
result_similarity=np.nan
return result_func_id, result_similarity
# limit number of questions
df_stack_overflow_partial=df_stack_overflow.iloc[start_idx:end_idx,:]
st=time()
list_most_relevant_doc=[]
list_most_relevant_sim=[]
for idx in range(len(df_stack_overflow_partial)):
question=df_stack_overflow_partial.iloc[idx]["Question_Title"]
most_relevant_doc, most_relevant_sim=get_most_relevant_document(question, trained_ft_vectors, document_embeddings)
list_most_relevant_doc.append(most_relevant_doc)
list_most_relevant_sim.append(most_relevant_sim)
df_stack_overflow_partial["func_id"]=list_most_relevant_doc
df_stack_overflow_partial["sim"]=list_most_relevant_sim
print("Run time: {} s".format(time()-st))
# save result
df_stack_overflow_partial.to_pickle("data/SO_similarity_{}_{}.pkl".format(start_idx, end_idx))
df_stack_overflow_partial
###Output
_____no_output_____
###Markdown
Check result
###Code
df_stack_overflow_partial=pd.read_pickle("data/SO_similarity_0_10.pkl")
df_stack_overflow_partial
df_stack_overflow_partial[df_stack_overflow_partial["Post_Link_ID"]==50607128]["Question_Title"]
df_stack_overflow_partial[df_stack_overflow_partial["Post_Link_ID"]==50607128]["func_id"].tolist()
#df_py100k=pd.read_pickle("data/py100k.pkl")
#df_py100k[700000:700100]
#list_data_id=[]
#for i in [260771, 275794, 428754, 372502, 360950, 284871, 412289, 412286, 11140, 412288]:
# #list_data_id.append(df_py100k.iloc[i]["data_id"])
# print(df_py100k.iloc[i])
# print()
##df_py100k.head()
#list_data_id
###Output
_____no_output_____
###Markdown
Result evaluation
###Code
import numpy as np
import cv2
import os
"""
Confusion matrix
P\L P N
P TP FP
N FN TN
"""
# same def in Train_model
def color_dict(labelFolder, classNum):
colorDict = []
ImageNameList = os.listdir(labelFolder)
for i in range(len(ImageNameList)):
ImagePath = labelFolder + "/" + ImageNameList[i]
img = cv2.imread(ImagePath).astype(np.uint32)
if(len(img.shape) == 2):
img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB).astype(np.uint32)
img_new = img[:,:,0] * 1000000 + img[:,:,1] * 1000 + img[:,:,2]
unique = np.unique(img_new)
for j in range(unique.shape[0]):
colorDict.append(unique[j])
colorDict = sorted(set(colorDict))
if(len(colorDict) == classNum):
break
colorDict_BGR = []
for k in range(len(colorDict)):
color = str(colorDict[k]).rjust(9, '0')
color_BGR = [int(color[0 : 3]), int(color[3 : 6]), int(color[6 : 9])]
colorDict_BGR.append(color_BGR)
colorDict_BGR = np.array(colorDict_BGR)
colorDict_GRAY = colorDict_BGR.reshape((colorDict_BGR.shape[0], 1 ,colorDict_BGR.shape[1])).astype(np.uint8)
colorDict_GRAY = cv2.cvtColor(colorDict_GRAY, cv2.COLOR_BGR2GRAY)
return colorDict_BGR, colorDict_GRAY
def ConfusionMatrix(numClass, imgPredict, Label):
# Return confusion matrix
mask = (Label >= 0) & (Label < numClass)
label = numClass * Label[mask] + imgPredict[mask]
count = np.bincount(label, minlength = numClass**2)
confusionMatrix = count.reshape(numClass, numClass)
return confusionMatrix
def OverallAccuracy(confusionMatrix):
# Return overall accuracy
# acc = (TP + TN) / (TP + TN + FP + TN)
OA = np.diag(confusionMatrix).sum() / confusionMatrix.sum()
return OA
def Precision(confusionMatrix):
# Return precision for each class
precision = np.diag(confusionMatrix) / confusionMatrix.sum(axis = 0)
return precision
def Recall(confusionMatrix):
# Return recall for each class
recall = np.diag(confusionMatrix) / confusionMatrix.sum(axis = 1)
return recall
def F1Score(confusionMatrix):
precision = np.diag(confusionMatrix) / confusionMatrix.sum(axis = 0)
recall = np.diag(confusionMatrix) / confusionMatrix.sum(axis = 1)
f1score = 2 * precision * recall / (precision + recall)
return f1score
def IntersectionOverUnion(confusionMatrix):
# Return IoU
intersection = np.diag(confusionMatrix)
union = np.sum(confusionMatrix, axis = 1) + np.sum(confusionMatrix, axis = 0) - np.diag(confusionMatrix)
IoU = intersection / union
return IoU
def MeanIntersectionOverUnion(confusionMatrix):
# Return mIoU
intersection = np.diag(confusionMatrix)
union = np.sum(confusionMatrix, axis = 1) + np.sum(confusionMatrix, axis = 0) - np.diag(confusionMatrix)
IoU = intersection / union
mIoU = np.nanmean(IoU)
return mIoU
def Frequency_Weighted_Intersection_over_Union(confusionMatrix):
# Return FWIoU
freq = np.sum(confusionMatrix, axis=1) / np.sum(confusionMatrix)
iu = np.diag(confusionMatrix) / (
np.sum(confusionMatrix, axis = 1) +
np.sum(confusionMatrix, axis = 0) -
np.diag(confusionMatrix))
FWIoU = (freq[freq > 0] * iu[freq > 0]).sum()
return FWIoU
# Predict image PATH
PredictPath = r"evaluation\predict"
# Predict label PATH
LabelPath = r"evaluation\label"
# Number of class
classNum = 5
# Get category color dictionary
colorDict_BGR, colorDict_GRAY = color_dict(LabelPath, classNum)
# Read all the images in the folder
labelList = os.listdir(LabelPath)
PredictList = os.listdir(PredictPath)
# To read the shape of the image
Label0 = cv2.imread(LabelPath + "//" + labelList[0], 0)
# Number of images
label_num = len(labelList)
# Put all images in an array
label_all = np.zeros((label_num, ) + Label0.shape, np.uint8)
predict_all = np.zeros((label_num, ) + Label0.shape, np.uint8)
for i in range(label_num):
Label = cv2.imread(LabelPath + "//" + labelList[i])
Label = cv2.cvtColor(Label, cv2.COLOR_BGR2GRAY)
label_all[i] = Label
Predict = cv2.imread(PredictPath + "//" + PredictList[i])
Predict = cv2.cvtColor(Predict, cv2.COLOR_BGR2GRAY)
predict_all[i] = Predict
for i in range(colorDict_GRAY.shape[0]):
label_all[label_all == colorDict_GRAY[i][0]] = i
predict_all[predict_all == colorDict_GRAY[i][0]] = i
# flatten label
label_all = label_all.flatten()
predict_all = predict_all.flatten()
# Calculate confusion matrix and various precision parameters
confusionMatrix = ConfusionMatrix(classNum, predict_all, label_all)
precision = Precision(confusionMatrix)
recall = Recall(confusionMatrix)
OA = OverallAccuracy(confusionMatrix)
IoU = IntersectionOverUnion(confusionMatrix)
FWIOU = Frequency_Weighted_Intersection_over_Union(confusionMatrix)
mIOU = MeanIntersectionOverUnion(confusionMatrix)
f1ccore = F1Score(confusionMatrix)
print("植被 建筑 裸土 水体 背景")
print("Vegetation building soil water background")
print("ConfusionMatrix:")
print(confusionMatrix)
print("Precision:")
print(precision)
print("Recall:")
print(recall)
print("F1-Score:")
print(f1ccore)
print("OverallAccuracy:")
print(OA)
print("IoU:")
print(IoU)
print("mIoU:")
print(mIOU)
print("FWIoU:")
print(FWIOU)
###Output
植被 建筑 裸土 水体 背景
Vegetation building soil water background
ConfusionMatrix:
[[1179532 1407 0 1625 44805]
[ 157 37725 0 0 19925]
[ 123 10556 0 3 2887]
[ 2323 1313 0 111430 14793]
[ 103732 22175 0 4705 690784]]
Precision:
[0.91730482 0.51553788 nan 0.9462225 0.89341614]
Recall:
[0.96102476 0.65260263 0. 0.85808454 0.84098778]
F1-Score:
[0.93865598 0.57602895 nan 0.90000081 0.86640955]
OverallAccuracy:
0.8975426666666667
IoU:
[0.88440314 0.40452294 0. 0.81818315 0.76430561]
mIoU:
0.5742829679503395
FWIoU:
0.8190752312638379
###Markdown
Segmentation metrics evaluation: Per Pixel Accuracy, Per Class Accuracy, Intersection over Union
###Code
import os
import numpy as np
from skimage.io import imread
import matplotlib.pyplot as plt
from FCN.citydataset import classes_city
maps_classes = np.array([
[255,255,251],
[203,222,174],
[171,208,251],
[231,229,224],
[243,239,235],
[255,150,63]
])
facades_classes = np.array([
[255,154,47],
[194,0,47],
[0,56,248],
[252,766,30],
[0,247,238],
[0,129,249],
[101,255,160],
[197,2533,90],
[0,24,215]
])
N = lambda i,j,truth, pred: np.sum(np.multiply(truth==i, pred==j))
T = lambda i, truth: np.sum(truth==i)
def PerPixelAccuracy(truth, pred, classes=maps_classes):
num = np.sum([N(i,i,truth,pred) for i in range(len(classes))])
den = np.sum([T(i,truth) for i in range(len(classes))])
return num*1.0/den
def MeanAccuracy(truth, pred, classes=maps_classes):
return np.sum([N(i,i,truth,pred)*1.0/(T(i,truth)+1e-15) for i in range(len(classes))])*1.0/(len(classes))
def MeanIU(truth, pred, classes=maps_classes):
coef = np.sum([N(i,i,truth,pred)\
/(1e-15+T(i,truth)-N(i,i,truth, pred)\
+np.sum([N(j,i,truth,pred) for j in range(len(classes))]))\
for i in range(len(classes))])
return coef*1.0/(len(classes))
def classif_score(idx,folder_model,fake_b,real_b, classes = maps_classes):
F = lambda im: os.path.join(folder_model,fake_b[im])
R = lambda im: os.path.join(folder_model,fake_b[im].replace('fake','real'))
f = imread(F(idx)).reshape(-1,3)
r = imread(R(idx)).reshape(-1,3)
def find_cluster(vec, classes=classes):
rscores = np.zeros((256*256,len(classes)))
for i in range(len(classes)):
rscores[:,i] = np.linalg.norm(vec-np.repeat(classes[i].reshape(1,3),256*256,axis = 0), axis = 1)
vc = np.argmin(rscores, axis = 1)
return vc
pred = find_cluster(f)
truth = find_cluster(r)
    ppa = PerPixelAccuracy(truth=truth, pred=pred, classes=classes)
    ma = MeanAccuracy(truth=truth, pred=pred, classes=classes)
    miu = MeanIU(truth=truth, pred=pred, classes=classes)
return np.array([ppa, ma, miu]).reshape(1,3)
def Eval(model, classes):
folder_model = r"results/{}/test_latest/images/".format(model)
fake_b = sorted(list(filter(lambda x: 'fake_B' in x, os.listdir(folder_model))))
real_b = sorted(list(filter(lambda x: 'real_B' in x, os.listdir(folder_model))))
    results = np.vstack([classif_score(idx, folder_model, fake_b, real_b, classes=classes) for idx in range(len(fake_b))])  # build a list; passing a generator to np.vstack is deprecated
metrics = ['PerPixelAccuracy', 'MeanAccuracy','MeanIU']
return dict(zip(metrics,results.mean(axis = 0))), dict(zip(metrics,results.std(axis = 0)))
###Output
_____no_output_____
###Markdown
Maps Aerial Photo
###Code
Eval("maps_cyclegan", classes = maps_classes)
Eval("maps_cyclegan_sn", classes = maps_classes)
Eval("maps_cyclegan_wgangp", classes = maps_classes)
###Output
_____no_output_____
###Markdown
Façades
###Code
Eval("facades_cyclegan", classes = facades_classes)
Eval("facades_cyclegan_sn", classes = facades_classes)
Eval("facades_cyclegan_wgan_gp", classes = facades_classes)
###Output
_____no_output_____
###Markdown
CityScapes
###Code
file = r"datasets/cityscapes/testB/100_B.jpg"
Eval("cityscapes_cyclegan", classes = classes_city)
Eval("cityscapes_cyclegan_sn", classes = classes_city)
Eval("cityscapes_cyclegan_wgan_gp", classes = classes_city)
###Output
_____no_output_____
###Markdown
CLIP and fine-tuned CLIP evaluation This notebook presents the code used for evaluating CLIP and the fine-tuned CLIP performance on mapping a pictogram to a WordNet synset. Requirements
###Code
!git clone -b finetuning https://github.com/jayralencar/train-CLIP
import nltk
nltk.download('wordnet')
from nltk.corpus import wordnet as wn
!pip install pytorch_lightning transformers
!pip install git+https://github.com/openai/CLIP.git
pip install 'git+https://github.com/katsura-jp/pytorch-cosine-annealing-with-warmup'
!mv train-CLIP/ train_clip
###Output
_____no_output_____
###Markdown
Download testing dataset We used the [Mulberry Symbols](https://mulberrysymbols.org/) as the evaluation dataset. The symbols consist of a set of 3,436 graphic images accompanied by labels and designed for communication use, especially for adults. We manually annotated 900 pictograms related to nouns, verbs, adjectives, and adverbs, using WordNet synsets. For this, we considered the labels provided for each pictogram and searched for the most appropriate synset in WordNet, considering the action or entity shown in the pictogram picture. This way, our test dataset consists of a set of (pictogram, synset) pairs. As each synset has a gloss definition, we can expand the dataset to (pictogram, description) pairs.
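The expansion from synset to description is just a WordNet gloss lookup; the sketch below uses a made-up (pictogram, synset) pair to illustrate it (the real pairs come from `ambiguous.json`, downloaded below).
###Code
from nltk.corpus import wordnet as wn

# Made-up pair purely for illustration; real pairs are loaded from ambiguous.json later
pair = {'filename': 'dog.png', 'synset': 'dog.n.01'}

synset = wn.synset(pair['synset'])
description = synset.definition()  # the WordNet gloss for this synset
print(pair['filename'], '->', description)
###Output
_____no_output_____
###Markdown
The cells below download the pictogram images and the annotated (pictogram, synset) pairs.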
###Code
!gdown https://drive.google.com/uc?id=1AjxMS3IWGk2s0gahCU988o0Pdk2nR4fq
!unzip mulberry-symbols-PNG.zip
!gdown https://drive.google.com/uc?id=1fk9Ws21n2ClVB_5Riz-s8syR0PUTw7_C
###Output
Downloading...
From: https://drive.google.com/uc?id=1fk9Ws21n2ClVB_5Riz-s8syR0PUTw7_C
To: /content/ambiguous.json
0% 0.00/1.47M [00:00<?, ?B/s]
100% 1.47M/1.47M [00:00<00:00, 22.8MB/s]
###Markdown
Prepare items
###Code
import json
pictograms = json.load(open('./ambiguous.json'))
pictograms = [p for p in pictograms if 'synset' in p]
len(pictograms)
noun_pictograms = [p for p in pictograms if wn.synset(p['synset']).pos() == 'n']
verb_pictograms = [p for p in pictograms if wn.synset(p['synset']).pos() == 'v']
adverb_pictograms = [p for p in pictograms if wn.synset(p['synset']).pos() == 'r']
adjective_pictograms = [p for p in pictograms if wn.synset(p['synset']).pos() in ['a','s']]
len(noun_pictograms), len(verb_pictograms), len(adverb_pictograms), len(adjective_pictograms)
###Output
_____no_output_____
###Markdown
Evaluating CLIP
###Code
import clip
import torch
from PIL import Image
from tqdm import tqdm
from sklearn.metrics import accuracy_score
device = "cuda" if torch.cuda.is_available() else "cpu"
clip.available_models()
for model_name in clip.available_models():
print("Model name",model_name)
print("> Downloading and loading model")
model, preprocess = clip.load(model_name, device=device)
true_y = []
predicted_y = []
print('> inference')
picts = [noun_pictograms, verb_pictograms, adverb_pictograms, adjective_pictograms]
labels = ['Nouns','Verbs',"Adverbs",'Adjectives']
for i, label in enumerate(labels):
print(">>",label)
true_l = []
predicted_l = []
pbar = tqdm(picts[i])
for pictogram in pbar:
file_path = "./PNG/{0}".format(pictogram['filename'])
pil_image = Image.open(file_path)
image = preprocess(pil_image).unsqueeze(0).to(device)
pict_s = wn.synset(pictogram['synset'])
# definitions = [s['definition'] for s in pictogram['synsets']]
# synsets = [s['synset'] for s in pictogram['synsets']]
synsets = wn.synsets(pictogram['word'], pict_s.pos())
definitions = []
for ss in synsets:
# print(sss)
# ss = wn.synset(sss)
defs = [ss.definition()]
for e in ss.examples():
defs.append(e)
# for l in ss.lemmas():
# defs.append(l.name())
definitions.append(". ".join(defs))
text = clip.tokenize(definitions, truncate=True).to(device)
with torch.no_grad():
logits_per_image, logits_per_text = model(image, text)
probs = logits_per_image.softmax(dim=-1).cpu().numpy()
better = probs.argmax()
predicted_y.append(synsets[better].name())
predicted_l.append(synsets[better].name())
true_y.append(pictogram['synset'])
true_l.append(pictogram['synset'])
acc = accuracy_score(true_l,predicted_l)
pbar.set_postfix({'acc': acc})
print("Accuracy ", label,":",accuracy_score(true_l,predicted_l))
print("Accuracy:",accuracy_score(true_y,predicted_y))
###Output
Model name RN50
> Downloading and loading model
> inference
>> Nouns
###Markdown
Evaluating Fine-tuned
###Code
!gdown https://drive.google.com/uc?id=1iNAamnjAmiggV8nXjGr40AxzjApWQZ9E
!unzip all.zip
import torch
import torch.nn as nn
import torch.nn.functional as F
import pytorch_lightning as pl
import numpy as np
import math
import yaml
import copy
from cosine_annealing_warmup import CosineAnnealingWarmupRestarts
import clip
class CLIPFinetuningWrapper(pl.LightningModule):
# Module based on
def __init__(self, model_name,batch_size=1, learning_rate=3e-3,):
super().__init__()
#hparams
self.learning_rate = learning_rate
self.batch_size = batch_size
# Load CLIP
# print("MODEL INSIDE", model_name)
self.model, self.preprocess = clip.load(model_name, device=self.device)
# Prepare for loss calculating
self.loss_img = nn.CrossEntropyLoss()
self.loss_txt = nn.CrossEntropyLoss()
def forward(self, images, texts):
return self.model(images, texts)
def training_step(self, train_batch, idx):
image, text = train_batch
n = math.ceil(len(image) // self.batch_size)
image_mbs = torch.chunk(image, n)
ims = [F.normalize(self.model.encode_image(im), dim=1) for im in image_mbs]
# gather from all GPUs
ims = self.all_gather(torch.cat(ims))
logits_per_image, logits_per_text = self.forward(image,text)
ground_truth = torch.arange(self.batch_size,dtype=torch.long,device=self.device)
loss = (self.loss_img(logits_per_image,ground_truth) + self.loss_txt(logits_per_text,ground_truth))/2
self.log("train_loss", loss, on_epoch=True, prog_bar=True,)
acc_i = (torch.argmax(logits_per_image, 0) == ground_truth).sum()
acc_t = (torch.argmax(logits_per_image.t(), 0) == ground_truth).sum()
acc = (acc_i + acc_t) / 2 / len(image)
self.log("train_acc", acc, on_epoch=True, prog_bar=True,)
# acc_i tensor(79, device='cuda:0')
# acc_t tensor(62, device='cuda:0')
# acc tensor(0.0043, device='cuda:0')
return loss
def validation_step(self, batch, idx):
image, text = batch
# print(self.batch_size)
# /" ".join(model_name.split("/"))
if MODE=='tune':
n = len(image)
else:
n = math.ceil(len(image) // self.batch_size)
#
# print(n)
image_mbs = torch.chunk(image, n)
ims = [F.normalize(self.model.encode_image(im), dim=1) for im in image_mbs]
# gather from all GPUs
ims = self.all_gather(torch.cat(ims))
image_logits, text_logits = self.forward(image, text)
ground_truth = torch.arange(len(image_logits)).to('cuda:0')
loss = (F.cross_entropy(image_logits, ground_truth) + F.cross_entropy(text_logits, ground_truth)).div(2)
self.log('val_loss', loss, on_epoch=True, prog_bar=True)
acc_i = (torch.argmax(image_logits, 0) == ground_truth).sum()
acc_t = (torch.argmax(image_logits.t(), 0) == ground_truth).sum()
acc = (acc_i + acc_t) / 2 / len(image)
self.log("val_acc", acc, on_epoch=True, prog_bar=True,)
def test_step(self, batch, idx):
pass
def configure_optimizers(self):
lr = self.learning_rate
optimizer = torch.optim.SGD(
self.parameters(),
lr=lr,
momentum=0.9
)
# print(self.num_training_steps)
# print(int(self.num_training_steps*0.2))
# Source: https://github.com/openai/CLIP/issues/107
# Use pip install 'git+https://github.com/katsura-jp/pytorch-cosine-annealing-with-warmup'
lr_scheduler = CosineAnnealingWarmupRestarts(
optimizer,
first_cycle_steps=50,
cycle_mult=1.0,
max_lr=lr,
min_lr=0,
warmup_steps=int(50*0.1)
)
return {'optimizer': optimizer, 'lr_scheduler': lr_scheduler}
# Sourced from https://github.com/PyTorchLightning/pytorch-lightning/issues/5449
@property
def num_training_steps(self) -> int:
"""Total training steps inferred from datamodule and devices."""
dataset = self.train_dataloader()
if self.trainer.max_steps:
return self.trainer.max_steps
dataset_size = len(dataset.dataset)
# print("DS:", dataset_size)
num_devices = max(1, self.trainer.num_gpus, self.trainer.num_processes)
if self.trainer.tpu_cores:
num_devices = max(num_devices, self.trainer.tpu_cores)
effective_batch_size = dataset.batch_size * self.trainer.accumulate_grad_batches * num_devices
# print("effective_batch_size",effective_batch_size)
# a = (dataset_size // effective_batch_size) * self.trainer.max_epochs
a = dataset_size // effective_batch_size
# print('a',a)
return a
from torchvision import transforms as T
import clip
def fix_img(img):
return img.convert('RGB') if img.mode != 'RGB' else img
image_transform = T.Compose([
T.Lambda(fix_img),
T.RandomResizedCrop(224,
scale=(0.75, 1.),
ratio=(1., 1.)),
T.ToTensor(),
T.Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711))
])
import torch
from PIL import Image
from sklearn.metrics import accuracy_score
from tqdm import tqdm
clip.available_models()
for model_name in clip.available_models():
# for model_name in ["ViT-B/32"]:
print("model", model_name)
# model_path = "./final.ckpt"
model_path = "./all/{0}.ckpt".format(" ".join(model_name.split("/")))
model = CLIPFinetuningWrapper.load_from_checkpoint(model_path, model_name="ViT-B/32")
true_y = []
predicted_y = []
picts = [noun_pictograms, verb_pictograms, adverb_pictograms, adjective_pictograms]
labels = ['Nouns','Verbs',"Adverbs",'Adjectives']
for i, label in enumerate(labels):
print(">>",label)
true_l = []
predicted_l = []
pbar = tqdm(picts[i])
for pictogram in pbar:
# pictogram = pictograms[i]
file_path = "./PNG/{0}".format(pictogram['filename'])
with torch.no_grad():
img_emb = image_transform(Image.open(file_path)).unsqueeze(0).to('cpu')
# img_enc = model.model.encode_image(img_emb)
# print(img_enc.size())
# texts = [s['definition'] for s in pictogram['synsets']]
pict_s = wn.synset(pictogram['synset'])
# synsets = [s['synset'] for s in pictogram['synsets']]
synsets = wn.synsets(pictogram['word'], pict_s.pos())
texts = []
for ss in synsets:
# ss = wn.synset(s)
# name,_,_ = ss.name().split(".")
defs = [ss.definition()]
# defs =[ss.definition()]
# for h in ss.hypernyms():
# defs.append(h.definition())
# for e in ss.examples():
# defs.append(e)
# for l in ss.lemmas():
# defs.append(l.name())
texts.append(", ".join(defs))
text_emb= clip.tokenize(texts, truncate=True).to('cpu')
# text_emb = tokenizer(texts,padding='max_length', truncation=True, return_tensors="pt",max_length=77)['input_ids'].to('cpu')
# text_emb = tokenizer(texts,padding='max_length', truncation=True, return_tensors="pt",max_length=77)
# print(text_emb)/
# text_enc = model.model.encode_text(text_emb)
# print(text_enc.size())
# print(synsets,pictogram['word'],pictogram['synset'])
logits_per_image, logits_per_text = model(img_emb,text_emb)
# print(logits_per_image.size())
# print(logits_per_text.size())
probs = logits_per_image.softmax(dim=-1).cpu().numpy()
# probs = logits_per_image.cpu().numpy()
better = probs.argmax()
predicted_y.append(synsets[better].name())
predicted_l.append(synsets[better].name())
true_y.append(pictogram['synset'])
true_l.append(pictogram['synset'])
acc = accuracy_score(true_l,predicted_l)
pbar.set_postfix({'acc': acc})
# print("Accuracy ", label,":",accuracy_score(true_l,predicted_l))
print("Accuracy:", accuracy_score(true_y,predicted_y))
print("")
# text = clip.tokenize(definitions, truncate=True).to(device)
###Output
model RN50
>> Nouns
###Markdown
definition = 48%name ; definition =
###Code
for i, true in enumerate(true_y):
if true != predicted_y[i]:
s_true = wn.synset(true)
print(adjective_pictograms[i]['filename'])
print("TRUE: ",true, s_true.definition())
s_predicted = wn.synset(predicted_y[i])
print("PRED: ",predicted_y[i], s_predicted.definition())
print()
###Output
hot.png
TRUE: hot.a.01 used of physical heat; having a high or higher than desirable temperature or giving off heat or feeling or causing a sensation of heat or burning
PRED: hot.a.03 extended meanings; especially of psychological heat; marked by intensity or vehemence especially of passion or enthusiasm
whole.png
TRUE: whole.a.01 including all components without exception; being one unit or constituting the full amount or extent or duration; complete
PRED: solid.s.15 acting together as a single undiversified whole
long.png
TRUE: long.a.02 primarily spatial sense; of relatively great or greater than average spatial extension or extension as specified
PRED: long.a.01 primarily temporal sense; being or indicating a relatively great or greater than average duration or passage of time or a duration as specified
strong.png
TRUE: strong.a.01 having strength or power greater than average or expected
PRED: potent.s.02 having or wielding force or authority
full.png
TRUE: full.a.01 containing as much or as many as is possible or normal
PRED: broad.s.05 being at a peak or culminating point
ready.png
TRUE: ready.a.01 completely prepared or in condition for immediate action or use or progress
PRED: quick.s.04 apprehending and responding with speed and sensitivity
soft.png
TRUE: soft.a.01 yielding readily to pressure or weight
PRED: cushy.s.01 not burdensome or demanding; borne or done easily and without hardship
salty.png
TRUE: salty.a.02 containing or filled with salt
PRED: piquant.s.02 engagingly stimulating or provocative
little.png
TRUE: small.a.01 limited or below average in number or quantity or magnitude or extent
PRED: little.a.02 (quantifier used with mass nouns) small in quantity or degree; not much or almost none or (with `a') at least some
thin.png
TRUE: thin.a.02 lacking excess flesh
PRED: slender.s.02 very narrow
light.png
TRUE: light.a.01 of comparatively little physical weight or density
PRED: light.a.14 (physics, chemistry) not having atomic weight greater than average
dry.png
TRUE: dry.a.01 free from liquid or moisture; lacking natural or normal moisture or depleted of water; or no longer wet
PRED: dry.s.02 humorously sarcastic or mocking
wet.png
TRUE: wet.a.01 covered or soaked with a liquid such as water
PRED: wet.a.03 supporting or permitting the legal production and sale of alcoholic beverages
absent.png
TRUE: absent.a.01 not being in a specified place
PRED: lacking.s.02 nonexistent
straight.png
TRUE: straight.s.01 successive (without a break)
PRED: straight.a.08 free from curves or angles
magnetic.png
TRUE: magnetic.a.01 of or relating to or caused by magnetism
PRED: magnetic.a.02 having the properties of a magnet; i.e. of attracting iron or steel
flat.png
TRUE: flat.s.01 having a surface without slope, tilt in which no part is higher or lower than another
PRED: flat.s.12 horizontally level
Christian.png
TRUE: christian.a.01 relating to or characteristic of Christianity
PRED: christian.a.02 following the teachings or manifesting the qualities or spirit of Jesus Christ
sour.png
TRUE: sour.a.02 having a sharp biting taste
PRED: dark.s.06 showing a brooding ill humor
spicy.png
TRUE: piquant.s.01 having an agreeably pungent taste
PRED: hot.s.09 producing a burning sensation on the taste nerves
dirty.png
TRUE: dirty.a.01 soiled or likely to soil with dirt or grime
PRED: dirty.s.09 expressing or revealing hostility or dislike
fuzzy.png
TRUE: fuzzy.s.03 confused and not coherent; not clearly thought out
PRED: fuzzed.s.01 covering with fine light hairs
sharp.png
TRUE: sharp.a.09 having or made by a thin edge or sharp point; suitable for cutting or piercing
PRED: crisp.s.01 (of something seen or heard) clearly defined
thick.png
TRUE: thick.a.01 not thin; of a specific thickness or of relatively great extent from one surface to the opposite usually in the smallest of the three solid dimensions
PRED: thick.s.07 (of darkness) very intense
closed.png
TRUE: shut.a.01 not open
PRED: closed.a.02 (set theory) of an interval that contains both its endpoints
out.png
TRUE: out.s.08 outside or external
PRED: out.s.04 out of power; especially having been unsuccessful in an election
fat.png
TRUE: fat.a.01 having an (over)abundance of flesh
PRED: fat.s.02 having a relatively large diameter
same.png
TRUE: same.a.01 same in identity
PRED: like.a.02 equal in amount or value
broken.png
TRUE: broken.a.01 physically and forcibly separated into pieces or cracked or split
PRED: broken.s.10 destroyed financially
###Markdown
Overall Score
###Code
from sdv.evaluation import evaluate
evaluate(synthetic_data, real_data)
evaluate(synthetic_data, real_data, metrics=['CSTest', 'KSTest'])
###Output
_____no_output_____
###Markdown
Time Series Metrics
###Code
from sdv.metrics.demos import load_timeseries_demo
real_data, synthetic_data, metadata = load_timeseries_demo()
real_data.head(2)
synthetic_data.head(2)
metadata
###Output
_____no_output_____
###Markdown
Time Series Metrics: Detection Metrics
###Code
from sdv.metrics.timeseries import LSTMDetection, TSFCDetection
# use LSTM classifier to perform detection
LSTMDetection.compute(real_data, synthetic_data, metadata)
# use sktime's TimeSeriesForestClassifier to perform detection
TSFCDetection.compute(real_data, synthetic_data, metadata)
###Output
_____no_output_____
###Markdown
Time Series Metrics: Machine Learning Efficacy Metrics
###Code
from sdv.metrics.timeseries import TSFClassifierEfficacy
TSFClassifierEfficacy.compute(real_data, synthetic_data, metadata, target = 'region')
###Output
_____no_output_____
###Markdown
Define Metrics
###Code
originalPath = "style-transformer/outputs/soph_1/model_iteration_lr_0.0001/"
higherLR = "style-transformer/outputs/soph_1/model_iteration_lr_0.001/"
soph2 = "style-transformer/outputs/soph_2/"
soph3 = "style-transformer/outputs/soph_3/"
sophTagged = "style-transformer/outputs/soph_tagged/"
sophTaggedNp = "style-transformer/outputs/soph_tagged_np/"
def process(sent):
sent = sent.strip().replace('<pad>', '').strip()
return sent
def readNaiveTest(runNum):
path = f"style-transformer/data/soph_{runNum}/test.neg"
with open(path) as f:
naive = f.readlines()
return list(map(process, naive))
def load_transformer(path):
with open(path + "gold_text.txt") as f:
gold = f.readlines()
with open(path + "rev_output_0.txt") as f:
rev0 = f.readlines()
with open(path + "raw_output_0.txt") as f:
raw0 = f.readlines()
with open(path + "rev_output_1.txt") as f:
rev1 = f.readlines()
with open(path + "raw_output_1.txt") as f:
raw1 = f.readlines()
    # strip padding from the outputs of both style directions before returning
    gold = list(map(process, gold))
    rev0 = list(map(process, rev0))
    raw0 = list(map(process, raw0))
    rev1 = list(map(process, rev1))
    raw1 = list(map(process, raw1))
    return {0: (gold, rev0, raw0), 1: (gold, rev1, raw1)}
###Output
_____no_output_____
###Markdown
BLEU
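For reference, sentence-level BLEU as computed below combines modified $n$-gram precisions $p_n$ (default uniform weights up to $n=4$, with method-3 smoothing for short sentences) and a brevity penalty: $\mathrm{BLEU} = \mathrm{BP}\cdot\exp\big(\sum_{n=1}^{4}\tfrac{1}{4}\log p_n\big)$, where $\mathrm{BP} = \min(1, e^{1 - r/c})$ for reference length $r$ and candidate length $c$.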
###Code
def bleu_sent(originText, transferredText):
texts_origin = [
word_tokenize(text.lower().strip())
for text in originText
]
text_transfered = word_tokenize(transferredText.lower().strip())
cc = SmoothingFunction()
return sentence_bleu(texts_origin, text_transfered, smoothing_function=cc.method3)
def bleu_avg(originText, transferredText):
    total = 0  # running total of sentence-level BLEU scores
    n = len(originText)
    for x, y in zip(originText, transferredText):
        total += bleu_sent([x], y)
    return total / n
###Output
_____no_output_____
###Markdown
KenLM
A language model assigns a probability to each token sequence, indicating how likely the sequence is to occur in real text. We train the LM on the target corpus, and the model estimates the probability of seeing a given sentence in the target text using Markov chains.
In information theory, perplexity measures how well a probability distribution or probability model predicts a sample, and it may be used to compare probability models; a low perplexity indicates the distribution is good at predicting the sample. The perplexity (sometimes called PP for short) of a language model on a test set is the inverse probability of the test set, normalized by the number of words (https://lagunita.stanford.edu/c4x/Engineering/CS-224N/asset/slp4.pdf). PPL_x denotes the perplexity of sentences transferred from positive sentences, evaluated by a language model trained with negative sentences, and vice versa (https://arxiv.org/pdf/1805.11749.pdf).
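Concretely, for a test set $W = w_1 \dots w_N$,
$$\mathrm{PPL}(W) = P(w_1 \dots w_N)^{-\frac{1}{N}} = \exp\Big(-\frac{1}{N}\sum_{i=1}^{N}\log P(w_i \mid w_{<i})\Big),$$
which is what `get_ppl` below computes (KenLM returns log-10 scores, which are converted to natural logs before averaging).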
###Code
def load_kenlm():
global kenlm
import kenlm
def train_ngram_lm(kenlm_path, data_path, output_path, N, load=False):
if not load:
curdir = os.path.abspath(os.path.curdir)
command = "bin/lmplz -o "+str(N)+" <"+os.path.join(curdir, data_path) + \
" >"+os.path.join(curdir, output_path)
print(command)
os.system("cd "+os.path.join(kenlm_path, 'build')+" && "+command)
load_kenlm()
assert(output_path)
model = kenlm.Model(output_path)
return model
def SentencePplFrame(reference, transferred, klm):
ppl_dict = {}
for i in range(len(reference)):
ppl_dict[i] = {'ppl':(get_ppl(klm, [reference[i]]), get_ppl(klm, [transferred[i]])),
'sent1': reference[i],
'sent2': transferred[i]}
test_df = pd.DataFrame(ppl_dict).T
test_df['ppl1'] = test_df.ppl.apply(lambda x: x[0])
test_df['ppl2'] = test_df.ppl.apply(lambda x: x[1])
test_df = test_df.sort_values('ppl2')
cols = ['ppl1', 'ppl2', 'sent1', 'sent2']
return test_df[cols]
kenlm_model = train_ngram_lm(
'kenlm',
'data/processed/soph_train_tagged_nopunct.txt',
'klm_soph_tagged_np.arpa',
5,
load=False
)
sentence = gold[10]
# Show scores and n-gram matches
words = ['<s>'] + sentence.split() + ['</s>']
for i, (prob, length, oov) in enumerate(kenlm_model.full_scores(sentence)):
print('{0} {1}: {2}'.format(prob, length, ' '.join(words[i+2-length:i+2])))
if oov:
print('\t"{0}" is an OOV'.format(words[i+1]))
# Find out-of-vocabulary words
for w in words:
if not w in kenlm_model:
print('"{0}" is an OOV'.format(w))
def get_ppl(lm, sentences):
"""
Assume sentences is a list of strings (space delimited sentences)
"""
total_nll = 0
total_wc = 0
for sent in sentences:
words = sent.strip().split()
nll = np.sum([- math.log(math.pow(10.0, score)) for score, _, _ in lm.full_scores(sent, bos=True, eos=False)])
word_count = len(words)
total_wc += word_count
total_nll += nll
ppl = np.exp(total_nll / total_wc)
return ppl
###Output
_____no_output_____
###Markdown
Similarities - Jaccard, Cosine
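Jaccard similarity is taken over the word sets of the two sentences, $J(A, B) = \frac{|A \cap B|}{|A \cup B|}$, while cosine similarity is taken between the mean GloVe embeddings of the (stopword-filtered) sentences, $\cos(u, v) = \frac{u \cdot v}{\lVert u \rVert\, \lVert v \rVert}$.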
###Code
def jaccard_sim(sent1, sent2):
a = set(sent1.split())
b = set(sent2.split())
c = a.intersection(b)
return float(len(c)) / (len(a) + len(b) - len(c))
def loadGloveModel(gloveFile):
with open(gloveFile, encoding="utf8" ) as f:
content = f.readlines()
model = {}
for line in content:
splitLine = line.split()
word = splitLine[0]
embedding = np.array([float(val) for val in splitLine[1:]])
model[word] = embedding
return model
def cosine_format(raw):
processed = re.sub("[^a-zA-Z]", " ", raw)
words = processed.lower().split()
stopword_set = set(stopwords.words("english"))
uniq_words = list(set([w for w in words if w not in stopword_set]))
return uniq_words
def cosine_words(word1, word2):
return (1 - scipy.spatial.distance.cosine(model[word1], model[word2]))
model = loadGloveModel(gloveFile)
def cosine_sent(sent1, sent2):
if not isinstance(sent1, list):
sent1 = cosine_format(sent1)
sent2 = cosine_format(sent2)
embs1 = np.mean([model[word] for word in sent1], axis=0)
embs2 = np.mean([model[word] for word in sent2], axis=0)
return(1 - scipy.spatial.distance.cosine(embs1, embs2))
def heat_matrix(sent1, sent2):
s1 = cosine_format(sent1)
s2 = cosine_format(sent2)
result_list = [[cosine_words(word1, word2) for word2 in s2] for word1 in s1]
result_df = pd.DataFrame(result_list)
result_df.columns = s2
result_df.index = s1
return result_df
def heat_map(s1, s2):
df = heat_matrix(s1, s2)
fig, ax = plt.subplots(figsize=(5,5))
ax_blue = sns.heatmap(df, cmap="YlGnBu")
print(cosine_sent(s1, s2))
return ax_blue
###Output
_____no_output_____
###Markdown
PINC https://github.com/cocoxu/Shakespeare/blob/master/python/PINC_sentence.py
###Code
def intersect(list1, list2) :
cnt1 = Counter()
cnt2 = Counter()
for tk1 in list1:
cnt1[tk1] += 1
for tk2 in list2:
cnt2[tk2] += 1
inter = cnt1 & cnt2
return len(list(inter.elements()))
def pinc(ssent, csent):
s1grams = ssent.split(" ")
c1grams = csent.split(" ")
s2grams = []
c2grams = []
s3grams = []
c3grams = []
s4grams = []
c4grams = []
for i in range(0, len(s1grams)-1) :
if i < len(s1grams) - 1:
s2gram = s1grams[i] + " " + s1grams[i+1]
s2grams.append(s2gram)
if i < len(s1grams)-2:
s3gram = s1grams[i] + " " + s1grams[i+1] + " " + s1grams[i+2]
s3grams.append(s3gram)
if i < len(s1grams)-3:
s4gram = s1grams[i] + " " + s1grams[i+1] + " " + s1grams[i+2] + " " + s1grams[i+3]
s4grams.append(s4gram)
for i in range(0, len(c1grams)-1) :
if i < len(c1grams) - 1:
c2gram = c1grams[i] + " " + c1grams[i+1]
c2grams.append(c2gram)
if i < len(c1grams)-2:
c3gram = c1grams[i] + " " + c1grams[i+1] + " " + c1grams[i+2]
c3grams.append(c3gram)
if i < len(c1grams)-3:
c4gram = c1grams[i] + " " + c1grams[i+1] + " " + c1grams[i+2] + " " + c1grams[i+3]
c4grams.append(c4gram)
score = intersect(s1grams, c1grams) / len(c1grams)
if len(c2grams) > 0:
score += intersect(s2grams, c2grams) / len(c2grams)
if len(c3grams) > 0:
score += intersect(s3grams, c3grams) / len(c3grams)
if len(c4grams) > 0:
score += intersect(s4grams, c4grams) / len(c4grams)
return 1 - score/4
def pinc_corpus(origText, transferText):
sentcount = len(origText)
pincscore = 0.0
for idx in range(len(origText)):
sline = origText[idx].strip()
cline = transferText[idx].strip()
sentscore = pinc(sline, cline)
pincscore += sentscore
pincscore = pincscore / sentcount * 100
return pincscore
###Output
_____no_output_____
###Markdown
Putting it all together
###Code
def sentenceMetrics(sent1, sent2, kenlm_model, output=False):
metrics = {}
    metrics['bleu'] = bleu_sent([sent1], sent2)  # bleu_sent expects a list of reference sentences
metrics['cosine'] = cosine_sent(sent1, sent2)
metrics['jaccard'] = jaccard_sim(sent1, sent2)
metrics['pinc'] = pinc(sent1, sent2)
metrics['ppl'] = (get_ppl(kenlm_model, [sent1]), get_ppl(kenlm_model, [sent2]))
if output:
print(f"Orig: {sent1}")
print(f"New: {sent2}")
heat_map(sent1, sent2)
return metrics
def globalMetrics(origData, transferData, kenlm_model):
metrics = {}
metrics['bleu'] = bleu_avg(origData, transferData)
metrics['ppl'] = (get_ppl(kenlm_model, origData),
get_ppl(kenlm_model, transferData))
metrics['pinc'] = pinc_corpus(origData, transferData)
return metrics
###Output
_____no_output_____
###Markdown
Dataset Metrics
###Code
loaded_data = load_transformer(originalPath)
gold_orig, rev_orig, raw_orig = loaded_data[0]
loaded_data = load_transformer(higherLR)
gold_HLR, rev_HLR, raw_HLR = loaded_data[0]
loaded_data = load_transformer(soph2)
gold_soph2, rev_soph2, raw_soph2 = loaded_data[0]
loaded_data = load_transformer(sophTagged)
gold_soph_tag, rev_soph_tag, raw_soph_tag = loaded_data[0]
loaded_data = load_transformer(sophTaggedNp)
gold_soph_tag_np, rev_soph_tag_np, raw_soph_tag_np = loaded_data[0]
naive_1 = readNaiveTest(1)
naive_2 = readNaiveTest(2)
naive_3 = readNaiveTest(3)
naive_tag = readNaiveTest('tagged')
naive_tag_np = readNaiveTest('tagged_np')
kenlm_1 = train_ngram_lm('kenlm', 'data/processed/soph_train.txt', 'klm_soph_1.arpa', 5, load=True)
kenlm_2 = train_ngram_lm('kenlm', 'data/processed/soph_train_2.txt', 'klm_soph_2.arpa', 5, load=True)
kenlm_3 = train_ngram_lm('kenlm', 'data/processed/soph_train_3.txt', 'klm_soph_3.arpa', 5, load=True)
kenlm_tag = train_ngram_lm('kenlm', 'data/processed/soph_train_tagged.txt', 'klm_soph_tagged.arpa', 5, load=True)
kenlm_tag_np = train_ngram_lm('kenlm', 'data/processed/soph_train_tagged_nopunct.txt', 'klm_soph_tagged_np.arpa', 5, load=True)
globalMetrics(naive_2, rev_soph2, kenlm_2)
globalMetrics(naive_tag, rev_soph_tag, kenlm_tag)
globalMetrics(naive_tag_np, rev_soph_tag_np, kenlm_tag_np)
dftag = SentencePplFrame(naive_tag_np, rev_soph_tag_np, kenlm_tag_np)
pd.set_option('display.max_colwidth', None)  # show full sentences in the dataframe
dftag.sample(frac=1).head(100)
#https://en.wikipedia.org/wiki/Flesch%E2%80%93Kincaid_readability_tests
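# For reference, the standard definitions behind the two textstat readability scores:
#   Flesch-Kincaid grade = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
#   Flesch reading ease  = 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
# Higher reading-ease scores (and lower grade levels) indicate simpler text.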
list(map(textstat.flesch_kincaid_grade, rev_soph_tag_np))
list(map(textstat.flesch_reading_ease, rev_soph_tag_np))
###Output
_____no_output_____
###Markdown
Evaluating Different Optimizations of Networks
This notebook is used to compare different optimizations of networks. Load the different networks and data, then run the `evaluate` function.
###Code
import time
import numpy as np
import tensorflow as tf
@tf.function
def run(data, model):
return model(data)
def evaluate(model, type, data):
x_test, y_test = data
prediction_digits = []
if type.lower() == 'tflite':
model.allocate_tensors()
input_index = model.get_input_details()[0]["index"]
output_index = model.get_output_details()[0]["index"]
t1 = time.time()
for test_image in x_test:
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
model.set_tensor(input_index, test_image)
# Run inference.
model.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = model.get_tensor(output_index)
# digit = np.argmax(output()[0])
# prediction_digits.append(digit)
t2 = time.time()-t1
else:
y_test = tf.keras.utils.to_categorical(y_test, 10)
t1 = time.time()
for test_image in x_test:
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
# output = model(test_image)
run(test_image, model)
t2 = time.time()-t1
print(f'{type} model time: {t2}')
mnist = tf.keras.datasets.mnist
num_classes = 10
input_shape = (28, 28, 1)
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Scale images to the [0, 1] range
x_train = x_train.astype("float32") / 255
x_test = x_test.astype("float32") / 255
# Make sure images have shape (28, 28, 1)
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
# original = tf.keras.models.load_model('functional_split_net')
# evaluate(original, 'normal', (x_test,y_test))
# del original
folded = tf.keras.models.load_model('Networks/functional_split_folded')
evaluate(folded, 'folded', (x_test, y_test))
del folded
tflite = tf.lite.Interpreter(model_path='Networks/functional_base.tflite')
evaluate(tflite, 'tflite', (x_test, y_test))
###Output
_____no_output_____
###Markdown
Content
* To-Do List
* Importing Libraries
* Preconfiguration
* Generating Knapsack Problems
* Running Tests
* Saving Results
* Loading Results
* Visualization

To-Do List
* Add E[1, 1] Evolution strategy Algorithm
* Optimize MixtureModel sampling

Importing Libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
from time import time
from main import main
from evolution.chromosome import *
from utils.data_manipulators import *
from problems.knapsack_generator import knapsack_generator
%matplotlib notebook
###Output
_____no_output_____
###Markdown
Preconfiguration
###Code
def get_fitness(results):
fitnesses = np.zeros_like(results)
for i, rep in enumerate(results):
for j, gen in enumerate(rep):
if gen.any() is not None:
fitnesses[i, j, :] = Chromosome.fitness_to_numpy(gen)
return fitnesses
class args:
tsamples = 10
src_version = 'v1'
stop_condition = True
reps = 30
transfer = True
delta = 2
buildmodel = False
s1_psize = 50
s2_psize = 1
sample_size = 50
sub_sample_size = 50
version = 'v2'
mutation_strength = 1
injection_type = 'full'
to_repititon_num = 4
selection_version = 'v1'
c = np.sqrt(1.5)
np.sqrt(1.5)
###Output
_____no_output_____
###Markdown
Generating Knapsack Problems
###Code
# for i in range(320):
# for type_wp in ['uc', 'wc', 'sc']:
# for type_c in ['rk', 'ak']:
# knapsack_generator(n=1000, v=10, r=5, type_wp=type_wp, type_c=type_c, addr="problems/knapsack", add_name=str(i))
###Output
_____no_output_____
###Markdown
Running Tests
###Code
now = time()
data = results_v2_selv1_tor4 = main(args)
end = time()
print("duration: ", str((end - now)/60))
###Output
_____no_output_____
###Markdown
Saving Results
###Code
Tools.save_to_file('data/results_v2_selv1_tor1',results_v2_selv1_tor1)
###Output
_____no_output_____
###Markdown
Loading Results
###Code
results = Tools.load_from_file('data/result_v1')
to_results = Tools.load_from_file('data/to_results')
ea_results = Tools.load_from_file('data/ea_results')
# results_2d_10s = Tools.load_from_file('data/results_v1_2d_10s')
# results_2d_20s = Tools.load_from_file('data/results_v1_2d_20s')
# results_2d_30s = Tools.load_from_file('data/results_v1_2d_30s')
# results_2d_40s = Tools.load_from_file('data/results_v1_2d_40s')
# results_10s = Tools.load_from_file('data/results_v1_10s')
# results_20s = Tools.load_from_file('data/results_v1_20s')
# results_30s = Tools.load_from_file('data/results_v1_30s')
# results_40s = Tools.load_from_file('data/results_v1_40s')
results_to_sv2 = Tools.load_from_file('data/results_to_sv2')
results_sv2_v2 = Tools.load_from_file('data/results_sv2_v2(1 + 1)')
results_v2_sv2_full = Tools.load_from_file('data/results_v2_sv2_full')
results_v2_selv2_c2 = Tools.load_from_file('data/results_v2_selv2_c2')
results_v2_selv2_c122 = Tools.load_from_file('data/results_v2_selv2_c122')
results_v2_selv1_tor5 = Tools.load_from_file('data/results_v2_selv1_tor5')
results_v2_selv1_tor4 = Tools.load_from_file('data/results_v2_selv1_tor4')
results_v2_selv1_tor3 = Tools.load_from_file('data/results_v2_selv1_tor3')
results_v2_selv1_tor2 = Tools.load_from_file('data/results_v2_selv1_tor2')
results_v2_selv1_tor1 = Tools.load_from_file('data/results_v2_selv1_tor1')
fitnesses = get_fitness(results[0])
# fitnesses_d2 = get_fitness(results_d2[0])
# fitnesses_2d_10s = get_fitness(results_2d_10s[0])
# fitnesses_2d_20s = get_fitness(results_2d_20s[0])
# fitnesses_2d_30s = get_fitness(results_2d_30s[0])
# fitnesses_2d_40s = get_fitness(results_2d_40s[0])
# fitnesses_10s = get_fitness(results_10s[0])
# fitnesses_20s = get_fitness(results_20s[0])
# fitnesses_30s = get_fitness(results_30s[0])
# fitnesses_40s = get_fitness(results_40s[0])
# fitnesses_v2 = (get_fitness(results_sv2[0]))
fitnesses_sv2_v2 = get_fitness(results_sv2_v2[0])
fitnesses_v2_sv2_full = get_fitness(results_v2_sv2_full[0])
# fitnesses_tor10 = get_fitness(results_tor10[0])
fitnesses_tor10_v2_selv2_c2 = get_fitness(results_v2_selv2_c2[0])
fitnesses_v2_selv2_c122 = get_fitness(results_v2_selv2_c122[0])
fitnesses_v2_selv1_tor5 = get_fitness(results_v2_selv1_tor5[0])
fitnesses_v2_selv1_tor4 = get_fitness(results_v2_selv1_tor4[0])
fitnesses_v2_selv1_tor3 = get_fitness(results_v2_selv1_tor3[0])
fitnesses_v2_selv1_tor2 = get_fitness(results_v2_selv1_tor2[0])
fitnesses_v2_selv1_tor1 = get_fitness(results_v2_selv1_tor1[0])
# np.zeros_like(fitness_s1)
# for i, rep in enumerate(fitness_s1):
# for j, gen in enumerate(rep):
# if gen.any() is not None:
# fitnesses[i, j, :] = Chromosome.fitness_to_numpy(gen)
###Output
_____no_output_____
###Markdown
Visualization
###Code
gen_d3 = np.append([0], np.sort(np.append(np.arange(1,100, 3), np.arange(2,100, 3))))
gen_d2 = np.append([0], np.arange(1,100, 2))
fitnesses_10s.shape
fitnesses_d2.shape
gen_d2.shape
# plt.plot(gen_d2, np.mean(np.mean(fitnesses_d2, axis=0),axis=1), 'b', label='our idea (delta=2 & 50 samples)')
# plt.plot(gen_d2, np.mean(np.mean(fitnesses_v2, axis=0),axis=1), '#aaa000', label='our idea (delta=2 & version 2)')
# plt.plot(np.mean(np.mean(results_to_sv2[0], axis=0),axis=1), '#0aaa00', label='transfer idea (delta=2 & dataset version 2)')
# plt.plot(gen_d2, np.mean(np.mean(fitnesses_sv2_v2, axis=0),axis=1), '#0a0a00', label='our idea (delta=2 & dataset version 2)')
plt.plot(gen_d2, np.mean(np.mean(fitnesses_v2_sv2_full, axis=0),axis=1), '#aa0a0a', label='our idea (delta=2 & v2 & selv1 & 10 repetitions)')
# plt.plot(gen_d2, np.mean(np.mean(fitnesses_tor10, axis=0),axis=1), '#55cc0a', label='our idea (delta=2 & version 2 & full injection type & 10 repetitions)')
plt.plot(gen_d2, np.mean(np.mean(fitnesses_tor10_v2_selv2_c2, axis=0),axis=1), '#55521a', label='our idea (delta=2 & v2 & selv2 & c2 & 10 repetitions)')
plt.plot(gen_d2, np.mean(np.mean(fitnesses_v2_selv2_c122, axis=0),axis=1), '#521212', label='our idea (delta=2 & v2 & selv2 & c1.22 & 10 repetitions)')
plt.plot(gen_d2, np.mean(np.mean(fitnesses_v2_selv1_tor5, axis=0),axis=1), '#0419ff', label='our idea (delta=2 & v2 & selv1 & 5 repetitions)')
plt.plot(gen_d2, np.mean(np.mean(fitnesses_v2_selv1_tor4, axis=0),axis=1), '#ba1341', label='our idea (delta=2 & v2 & selv1 & 4 repetitions)')
plt.plot(gen_d2, np.mean(np.mean(fitnesses_v2_selv1_tor3, axis=0),axis=1), '#a41a00', label='our idea (delta=2 & v2 & selv1 & 3 repetitions)')
plt.plot(gen_d2, np.mean(np.mean(fitnesses_v2_selv1_tor2, axis=0),axis=1), '#091301', label='our idea (delta=2 & v2 & selv1 & 2 repetitions)')
plt.plot(gen_d2, np.mean(np.mean(fitnesses_v2_selv1_tor1, axis=0),axis=1), '#faff4f', label='our idea (delta=2 & v2 & selv1 & 1 repetitions)')
# plt.plot(np.mean(np.mean(fitnesses_2d_10s, axis=0),axis=1), '#aaa000', label='our idea (delta=2 & 10 samples)')
# plt.plot(np.mean(np.mean(fitnesses_2d_20s, axis=0),axis=1), '#a2a020', label='our idea (delta=2 & 20 samples)')
# plt.plot(np.mean(np.mean(fitnesses_2d_30s, axis=0),axis=1), '#121020', label='our idea (delta=2 & 30 samples)')
# plt.plot(np.mean(np.mean(fitnesses_2d_40s, axis=0),axis=1), '#12f02f', label='our idea (delta=2 & 40 samples)')
# plt.plot(gen_d3, np.mean(np.mean(fitnesses_10s, axis=0),axis=1)[:-1], '#aaa123', label='our idea (delta=3 & 10 samples)')
# plt.plot(gen_d3, np.mean(np.mean(fitnesses_20s, axis=0),axis=1)[:-1], '#12ffff', label='our idea (delta=3 & 20 samples)')
# plt.plot(gen_d3, np.mean(np.mean(fitnesses_30s, axis=0),axis=1)[:-1], '#0ff2ff', label='our idea (delta=3 & 30 samples)')
# plt.plot(gen_d3, np.mean(np.mean(fitnesses_40s, axis=0),axis=1)[:-1], '#0f8241', label='our idea (delta=3 & 40 samples)')
# plt.plot(gen_d3, np.mean(np.mean(fitnesses, axis=0),axis=1)[:-1], 'black', label='our idea (delta=3)')
plt.plot(np.mean(np.mean(to_results[0], axis=0),axis=1), 'r',label='transfer idea (delta=2)')
# plt.plot(np.mean(np.mean(to_results_d3[0], axis=0),axis=1), 'g',label='transfer idea (delta=3)')
plt.plot(np.mean(ea_results[0], axis=1), 'y',label='ea idea')
plt.legend()
plt.xlabel('Generation')
plt.ylabel('fitness')
plt.title("Average of population's fitness during 30 repetition of Algorithm")
plt.show()
###Output
_____no_output_____
###Markdown
Evaluating causal metrics
###Code
import numpy as np
from matplotlib import pyplot as plt
from tqdm import tqdm
import torch
import torch.nn as nn
import torch.backends.cudnn as cudnn
import torchvision.datasets as datasets
import torchvision.models as models
from torch.nn.functional import conv2d
from utils import *
from evaluation import CausalMetric, auc, gkern
from explanations import RISE
cudnn.benchmark = True
# Load black box model for explanations
model = models.resnet50(True)
model = nn.Sequential(model, nn.Softmax(dim=1))
model = model.eval()
model = model.cuda()
for p in model.parameters():
p.requires_grad = False
# To use multiple GPUs
model = nn.DataParallel(model)
###Output
_____no_output_____
###Markdown
Preparing substrate functions
For our causal metrics we need functions that define how we delete/insert pixels. Specifically, we define a mapping from old pixels to new pixels. We use a zero substrate for deletion and a blurred-image substrate for insertion.
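As a rough illustration of how these substrate functions are used (a hedged sketch, not the library's `CausalMetric` implementation), the deletion game overwrites pixels most-salient-first with the substrate image, records the class probability at each step, and scores the explanation by the area under the resulting curve:
###Code
# Illustrative sketch only -- assumes `model` maps a (1, 3, H, W) tensor to class probabilities
# and `saliency` is an H x W numpy array.
import numpy as np
import torch
def deletion_curve(model, img, saliency, substrate_fn, cls, n_steps=100):
    hw = saliency.size                                # total number of pixels
    order = np.argsort(-saliency.flatten())           # most salient pixels first
    step_size = (hw + n_steps - 1) // n_steps
    current = img.clone()
    substrate = substrate_fn(img)
    probs = []
    for step in range(n_steps + 1):
        probs.append(model(current)[0, cls].item())   # P(cls) on the partially deleted image
        idx = torch.as_tensor(order[step * step_size:(step + 1) * step_size])
        current.view(1, 3, -1)[..., idx] = substrate.view(1, 3, -1)[..., idx]
    return np.trapz(probs, dx=1.0 / n_steps)          # AUC of the curve; lower is better for deletion
###Output
_____no_output_____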
###Code
klen = 11
ksig = 5
kern = gkern(klen, ksig)
# Function that blurs input image
blur = lambda x: nn.functional.conv2d(x, kern, padding=klen//2)
plt.figure(figsize=(12, 4))
plt.subplot(131)
plt.axis('off')
img = read_tensor('goldfish.jpg')
tensor_imshow(img[0])
plt.subplot(132)
plt.axis('off')
plt.imshow(kern[0, 0])
plt.subplot(133)
plt.axis('off')
tensor_imshow(blur(img)[0])
plt.show()
###Output
_____no_output_____
###Markdown
Creating metrics and explainer instances
###Code
insertion = CausalMetric(model, 'ins', 224, substrate_fn=blur)
deletion = CausalMetric(model, 'del', 224, substrate_fn=torch.zeros_like)
explainer = RISE(model, (224, 224))
explainer.generate_masks(N=5000, s=10, p1=0.1)
# 1 is for 'goldfish' class
sal = explainer(img.cuda())[1].cpu().numpy()
tensor_imshow(img[0])
plt.axis('off')
plt.title(get_class_name(1))
plt.imshow(sal, cmap='jet', alpha=0.5)
plt.show()
###Output
_____no_output_____
###Markdown
Evaluating metrics for a single image
The image on the left is the final image in the **deletion** process. From the network's point of view it is an all-zero image. It looks gray instead of black because we set pixels to $0$ in the space of normalized images, so after denormalization they become equal to the ImageNet mean, $[0.485, 0.456, 0.406]$.
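A quick numeric check of that claim (assuming the standard torchvision ImageNet normalization statistics; the exact values used by `preprocess` may differ):
###Code
import torch
# Commonly used ImageNet mean/std (assumption, matching the values quoted above).
mean = torch.tensor([0.485, 0.456, 0.406])
std = torch.tensor([0.229, 0.224, 0.225])
normalized_pixel = torch.zeros(3)              # a pixel set to 0 in normalized space
denormalized = normalized_pixel * std + mean   # invert (x - mean) / std
print(denormalized)                            # tensor([0.4850, 0.4560, 0.4060]) -> mid-gray
###Output
_____no_output_____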
###Code
h = deletion.single_run(img, sal, verbose=1)
h = insertion.single_run(img, sal, verbose=1)
###Output
_____no_output_____
###Markdown
Qualitative and quantitative comparison of RISE, LIME and GradCAM
###Code
batch = 4
# Load saved explanations
gc = np.fromfile('../random-masking/gradcam/gc(resnet)_{:05}-{:05}'.format(batch*5000, (batch+1)*5000-1)).reshape((5000, 224, 224))
rm = np.fromfile('../random-masking/run02/explanations/exp_{:05}-{:05}.npy'.format(batch*5000, (batch+1)*5000-1)).reshape((5000, 224, 224))
li = np.fromfile('../random-masking/lime/lime_{:05}-{:05}.npy'.format((batch+1)*5000-2, (batch+1)*5000-1)).reshape((5000, 224, 224))
# Load images
dataset = datasets.ImageFolder('/scratch2/Datasets/imagenet/ILSVRC2012_val_folders/', preprocess)
data_loader = torch.utils.data.DataLoader(
dataset, batch_size=250, shuffle=False,
num_workers=8, pin_memory=True, sampler=RangeSampler(range(5000*batch, 5000*(batch+1))))
images = np.empty((len(data_loader), 250, 3, 224, 224))
for j, (img, _) in enumerate(tqdm(data_loader, total=len(data_loader), desc='Loading images')):
images[j] = img
images = images.reshape((-1, 3, 224, 224))
def show_i(j, gc, rm, li, images):
plt.figure(figsize=(20, 5))
plt.subplot(141)
plt.axis('off')
p, c = torch.topk(model(torch.from_numpy(images[j:j+1]).float()), 1)
plt.title('{}: {:.1f}%'.format(get_class_name(c), 100 * float(p)))
tensor_imshow(torch.from_numpy(images[j:j+1])[0])
plt.subplot(142)
plt.axis('off')
tensor_imshow(torch.from_numpy(images[j:j+1])[0])
plt.imshow(rm[j], alpha=0.5, cmap='jet')
sc1d = auc(deletion.single_run(torch.from_numpy(images[j:j+1].astype('float32')), rm[j]))
sc1i = auc(insertion.single_run(torch.from_numpy(images[j:j+1].astype('float32')), rm[j]))
plt.title('RISE: {:.3f} / {:.3f}'.format(sc1d, sc1i))
plt.subplot(143)
plt.axis('off')
tensor_imshow(torch.from_numpy(images[j:j+1])[0])
plt.imshow(li[j], alpha=0.5, cmap='jet')
sc2d = auc(deletion.single_run(torch.from_numpy(images[j:j+1].astype('float32')), li[j]))
sc2i = auc(insertion.single_run(torch.from_numpy(images[j:j+1].astype('float32')), li[j]))
plt.title('LIME: {:.3f} / {:.3f}'.format(sc2d, sc2i))
plt.subplot(144)
plt.axis('off')
tensor_imshow(torch.from_numpy(images[j:j+1])[0])
plt.imshow(gc[j], alpha=0.5, cmap='jet')
sc3d = auc(deletion.single_run(torch.from_numpy(images[j:j+1].astype('float32')), gc[j]))
sc3i = auc(insertion.single_run(torch.from_numpy(images[j:j+1].astype('float32')), gc[j]))
plt.title('GCAM: {:.3f} / {:.3f}'.format(sc3d, sc3i))
plt.show()
for j in np.random.randint(0, 5000, 5):  # valid image indices are 0..4999
# Image ID: 5000*batch + j
show_i(j, gc, rm, li, images)
###Output
_____no_output_____
###Markdown
Evaluate a batch of explanations
###Code
insertion = CausalMetric(model, 'ins', 224 * 8, substrate_fn=blur)
deletion = CausalMetric(model, 'del', 224 * 8, substrate_fn=torch.zeros_like)
scores = {'del': [], 'ins': []}
for i in range(2):
# Load batch of images
data_loader = torch.utils.data.DataLoader(
dataset, batch_size=250, shuffle=False,
num_workers=8, pin_memory=True, sampler=RangeSampler(range(5000 * i, 5000 * (i + 1))))
images = np.empty((len(data_loader), 250, 3, 224, 224))
for j, (img, _) in enumerate(tqdm(data_loader, total=len(data_loader), desc='Loading images')):
images[j] = img
images = images.reshape((-1, 3, 224, 224))
# Load saved batch of explanations
exp = np.fromfile('../random-masking/run02/explanations/exp_{:05}-{:05}.npy'.format(i * 5000, (i + 1) * 5000 - 1)).reshape((5000, 224, 224))
# Evaluate deletion
h = deletion.evaluate(torch.from_numpy(images.astype('float32')), exp, 100)
scores['del'].append(auc(h.mean(1)))
# Evaluate insertion
h = insertion.evaluate(torch.from_numpy(images.astype('float32')), exp, 100)
scores['ins'].append(auc(h.mean(1)))
print('----------------------------------------------------------------')
print('Final:\nDeletion - {:.5f}\nInsertion - {:.5f}'.format(np.mean(scores['del']), np.mean(scores['ins'])))
###Output
_____no_output_____
###Markdown
Loading
###Code
from Data.data_dicts import character_dict, source_dict, random_state
model_name = 'microsoft/DialoGPT-small'
character = 'Vader' # 'Barney' | 'Sheldon' | 'Harry' | 'Fry' | 'Vader' | 'Joey' | 'Phoebe' | 'Bender' | 'Default'
character_2 = 'Harry'
# Mount google drive
import os
try:
import google.colab
IN_COLAB = True
except:
IN_COLAB = False
if IN_COLAB:
from google.colab import drive
drive.mount('/content/drive',force_remount=True)
base_folder = '/content/drive/My Drive/unibo/NLP_project/BarneyBot'
os.system("pip install datasets")
os.system("pip install transformers")
os.system("pip install rouge_score")
os.system("pip install -U sentence-transformers")
else:
base_folder = os.getcwd()
in_folder = os.path.join(base_folder, 'Data', 'Characters', character)
if not os.path.exists(in_folder):
os.makedirs(in_folder)
out_folder = os.path.join(base_folder, 'Data', 'Characters', character)
if not os.path.exists(out_folder):
os.makedirs(out_folder)
in_folder_2 = os.path.join(base_folder, 'Data', 'Characters', character_2)
if not os.path.exists(in_folder_2):
os.makedirs(in_folder_2)
out_folder_2 = os.path.join(base_folder, 'Data', 'Characters', character_2)
if not os.path.exists(out_folder_2):
os.makedirs(out_folder_2)
in_folder_def = os.path.join(base_folder, 'Data', 'Characters', 'Default')
if not os.path.exists(in_folder_def):
os.makedirs(in_folder_def)
out_folder_def = os.path.join(base_folder, 'Data', 'Characters', 'Default')
if not os.path.exists(out_folder_def):
os.makedirs(out_folder_def)
metrics_folder = os.path.join(base_folder, 'Metrics')
if not os.path.exists(metrics_folder):
os.makedirs(metrics_folder)
import pandas as pd
from tqdm import tqdm
import tensorflow as tf
import json
import numpy as np
import time
import scipy as sp
def save_as_json(filepath, filename, data):
if not os.path.exists(filepath):
os.makedirs(filepath, exist_ok=True)
with open(os.path.join(filepath, filename + ".json"), 'w') as f:
f.write(json.dumps(data, indent=4))
def load_from_json(filepath, filename):
if not os.path.exists(os.path.join(filepath, filename + '.json')):
return dict()
with open(os.path.join(filepath, filename + '.json'), 'r') as f:
return json.load(f)
from datasets import load_dataset, DatasetDict
def load_df(character):
dataset_path = os.path.join(base_folder, "Data", "Characters", character, character+'.csv')
character_hg = load_dataset('csv',
data_files=dataset_path,
cache_dir=os.path.join(base_folder, "cache"))
# 85% train / 10% test / 5% validation
train_test_hg = character_hg['train'].train_test_split(test_size=0.15, seed=random_state)
test_val = train_test_hg['test'].train_test_split(test_size=0.33, seed=random_state)
character_hg = DatasetDict({
'train': train_test_hg['train'],
'test': test_val['train'],
'val': test_val['test']
})
return character_hg
def construct_conv(row, tokenizer):
MAX_LENGTH = 512
row = list(reversed(list(row.values())))
model_inputs = tokenizer(row)
tokenizer_pad_token_id = tokenizer.encode('#')[0]
for i in range(len(model_inputs['input_ids'])):
model_inputs['input_ids'][i].append(tokenizer.eos_token_id)
model_inputs['attention_mask'][i].append(1)
model_inputs['input_ids'] = [item for sublist in model_inputs['input_ids'] for item in sublist]
model_inputs['attention_mask'] = [item for sublist in model_inputs['attention_mask'] for item in sublist]
if MAX_LENGTH > len(model_inputs['input_ids']):
model_inputs['input_ids'] += [tokenizer_pad_token_id] * (MAX_LENGTH - len(model_inputs['input_ids']))
model_inputs['attention_mask'] += [0] * (MAX_LENGTH - len(model_inputs['attention_mask']))
elif MAX_LENGTH < len(model_inputs['input_ids']):
model_inputs['input_ids'] = model_inputs['input_ids'][:MAX_LENGTH-1]
model_inputs['input_ids'][-1] = tokenizer.eos_token_id
model_inputs['attention_mask'] = model_inputs['attention_mask'][:MAX_LENGTH-1]
model_inputs['attention_mask'][-1] = 1
model_inputs["labels"] = model_inputs["input_ids"]
return model_inputs
def preprocess_function(examples):
tokenizer.pad_token = '#'
model_inputs = construct_conv(examples, tokenizer)
return model_inputs
os.environ["HF_DATASETS_CACHE"] = os.path.join(base_folder, "cache")
character_hg = load_df(character)
checkpoint_folder = os.path.join(out_folder, character_dict[character]['checkpoint_folder'])
checkpoint_folder_2 = os.path.join(out_folder_2, character_dict[character_2]['checkpoint_folder'])
from transformers import TFAutoModelForCausalLM, AutoTokenizer, AdamWeightDecay  # AdamWeightDecay is used in the compile() calls below
tokenizer = AutoTokenizer.from_pretrained(model_name, cache_dir=os.path.join(base_folder, "cache"))
tokenizer.pad_token = '#'
model = TFAutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path=checkpoint_folder)
model.compile(optimizer=AdamWeightDecay(learning_rate=2e-5))
model_2 = TFAutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path=checkpoint_folder_2)
model_2.compile(optimizer=AdamWeightDecay(learning_rate=2e-5))
model_def = TFAutoModelForCausalLM.from_pretrained(model_name, cache_dir=os.path.join(base_folder, "cache"))
model_def.compile(optimizer=AdamWeightDecay(learning_rate=2e-5))
from transformers import DataCollatorForLanguageModeling
from transformers import AdamWeightDecay
batch_size = 8
data_collator = DataCollatorForLanguageModeling(mlm=False, tokenizer=tokenizer, return_tensors='tf')
tokenized_character_hg = character_hg.map(preprocess_function, batched=False)
encoded_test_set = tokenized_character_hg["test"].to_tf_dataset(
columns=["input_ids", "attention_mask", "labels"],
shuffle=False,
batch_size=batch_size,
collate_fn=data_collator,
)
###Output
No loss specified in compile() - the model's internal loss computation will be used as the loss. Don't panic - this is a common way to train TensorFlow models in Transformers! To disable this behaviour, please pass a loss argument, or explicitly pass `loss=None` if you do not want your model to compute a loss.
Loading cached processed dataset at D:\University\Esami da Superare\Natural Language Processing\BarneyBot\BarneyBot\cache\csv\default-8c85b46caa75ae36\0.0.0\433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519\cache-78959818854f4741.arrow
Loading cached processed dataset at D:\University\Esami da Superare\Natural Language Processing\BarneyBot\BarneyBot\cache\csv\default-8c85b46caa75ae36\0.0.0\433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519\cache-f484e4ffd2f07ebe.arrow
Loading cached processed dataset at D:\University\Esami da Superare\Natural Language Processing\BarneyBot\BarneyBot\cache\csv\default-8c85b46caa75ae36\0.0.0\433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519\cache-8dab56c64be0533d.arrow
###Markdown
Metrics Preparation
###Code
sample_questions = character_hg['test']['context']
n_beams = 3
top_k = 50
top_p = 0.92
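# Decoding settings: top_k keeps only the 50 most likely next tokens, and top_p (nucleus sampling)
# further restricts sampling to the smallest token set whose cumulative probability exceeds 0.92.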
def get_predictions_cached(sample_questions, model, filename, generation_method, override_predictions=False):
prediction_path = os.path.join(in_folder, filename)
if os.path.exists(prediction_path) and not override_predictions:
print("Loading predictions from stored file")
with open(prediction_path, 'r') as file:
json_string = file.read()
predictions = json.loads(json_string)
print("Loaded predictions from stored file")
else:
print("Creating predictions")
predictions = list()
for x in tqdm(sample_questions):
tokenized_question = tokenizer.encode(x + tokenizer.eos_token, return_tensors='tf')
max_length = 128 + tokenized_question.shape[1]
if generation_method == "Greedy":
generated_answer = model.generate(tokenized_question,
pad_token_id=tokenizer.eos_token_id, max_length=max_length)[0].numpy().tolist()
elif generation_method == "Beam Search":
generated_answer = model.generate(tokenized_question,
pad_token_id=tokenizer.eos_token_id, max_length=max_length,
                                                  num_beams=n_beams)[0].numpy().tolist()
elif generation_method == "Sampling":
b = True
c = 0
while b:
generated_answer = model.generate(tokenized_question,
pad_token_id=tokenizer.eos_token_id, max_length=max_length,
do_sample=True, top_k=top_k, top_p=top_p)[0].numpy().tolist()
c += 1
if len(generated_answer[len(tokenized_question[0]):])>1:
b = False
if c>100:
generated_answer[len(tokenized_question[0]):] = tokenizer.encode('hi') + [tokenizer.eos_token_id]
break
predictions.append(generated_answer[len(tokenized_question[0]):])
# Save predictions as a JSON file
output_string = json.dumps(predictions)
with open(prediction_path, 'w') as file:
file.write(output_string)
assert all([len(p)>1 for p in predictions])
return predictions
predictions_greedy = get_predictions_cached(sample_questions, model,
character_dict[character]['prediction_filename'] + '_greedy.json',
"Greedy")
predictions_nbeams = get_predictions_cached(sample_questions, model,
character_dict[character]['prediction_filename'] + '_nbeams.json',
"Beam Search")
predictions_sampling = get_predictions_cached(sample_questions, model,
character_dict[character]['prediction_filename'] + '_sampling.json',
"Sampling")
def get_dataframe_for_metrics(data_test, predictions_greedy, predictions_nbeams, predictions_sampling):
i = 0
df = {'ctx':[], 'ctx_tk':[]}
has_labels = 'response' in data_test.features
if has_labels:
df['lbl'] = []
df['lbl_tk'] = []
if predictions_greedy:
df['prd_greedy'] = []
df['prd_greedy_tk'] = []
if predictions_nbeams:
df['prd_nbeams'] = []
df['prd_nbeams_tk'] = []
if predictions_sampling:
df['prd_sampling'] = []
df['prd_sampling_tk'] = []
for sample in tqdm(data_test):
# encode the context and label sentences, add the eos_token and return a tensor
ctx_tk = tokenizer.encode(sample['context'] + tokenizer.eos_token, return_tensors='tf').numpy().tolist()
ctx = sample['context']
df['ctx_tk'].append(ctx_tk)
df['ctx'].append(ctx)
if has_labels:
lbl_tk = tokenizer.encode(sample['response'] + tokenizer.eos_token, return_tensors='tf').numpy().tolist()
lbl = sample['response']
df['lbl'].append(lbl)
df['lbl_tk'].append(lbl_tk)
if predictions_greedy:
prd_greedy_tk = predictions_greedy[i]
prd_greedy = tokenizer.decode(prd_greedy_tk, skip_special_tokens=True)
df['prd_greedy'].append(prd_greedy)
df['prd_greedy_tk'].append(prd_greedy_tk)
if predictions_nbeams:
prd_nbeams_tk = predictions_nbeams[i]
prd_nbeams = tokenizer.decode(prd_nbeams_tk, skip_special_tokens=True)
df['prd_nbeams'].append(prd_nbeams)
df['prd_nbeams_tk'].append(prd_nbeams_tk)
if predictions_sampling:
prd_sampling_tk = predictions_sampling[i]
prd_sampling = tokenizer.decode(prd_sampling_tk, skip_special_tokens=True)
df['prd_sampling'].append(prd_sampling)
df['prd_sampling_tk'].append(prd_sampling_tk)
i += 1
return pd.DataFrame(data=df)
df_char = get_dataframe_for_metrics(character_hg['test'], predictions_greedy, predictions_nbeams, predictions_sampling)
df_char
###Output
100%|█████████████████████████████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 105.18it/s]
###Markdown
Metrics For Character 1
###Code
def ccl_sim(ctx_lbl, ctx_cht, lbl_cht):
return ((1 - abs(ctx_lbl - ctx_cht))**2 + lbl_cht**2) / 2
from Lib.BBMetrics import BBMetric
def compute_set_metrics(model, model_2, character, character_2, test_set_name,
context_sentences, label_responses, chatbot_responses, encoded_test_set,
classifier_n_sentences=50, label_chatbot_symmetry=False,
include_qualitative_sentences=False, verbose=True):
scores = {}
lbl_text = 'label' if not label_chatbot_symmetry else 'chatbota'
cht_text = 'chatbot' if not label_chatbot_symmetry else 'chatbotb'
scores['metadata'] = {}
scores['metadata']['dataset name'] = test_set_name
scores['metadata']['names'] = {
'context':'context'
}
if label_chatbot_symmetry:
scores['metadata']['names'][lbl_text] = character
scores['metadata']['names'][cht_text] = character_2
else:
scores['metadata']['names'][lbl_text] = 'label'
scores['metadata']['names'][cht_text] = character
    # 0) compute semantic similarity between contexts, labels, and chatbot responses
metric = BBMetric.load_metric("semantic similarity")
scores['semantic similarity'] = [metric.compute(sentences_a=context_sentences,
sentences_b=label_responses)]
scores['semantic similarity'].append(metric.compute(sentences_a=context_sentences,
sentences_b=chatbot_responses)),
scores['semantic similarity'].append(metric.compute(sentences_a=label_responses,
sentences_b=chatbot_responses))
scores['semantic similarity'].append(ccl_sim(scores['semantic similarity'][0]['score'],
scores['semantic similarity'][1]['score'],
scores['semantic similarity'][2]['score']))
scores['metadata']['semantic similarity'] = {
'ordering': ['context-'+lbl_text, 'context-'+cht_text, cht_text+'-'+lbl_text, 'ccl']
}
if verbose:
print('=== SEMANTIC SIMILARITY ===')
print('context-'+lbl_text+' similarity: ', scores['semantic similarity'][0])
print('context-'+cht_text+' similarity: ', scores['semantic similarity'][1])
print(cht_text+'-'+lbl_text+' similarity: ', scores['semantic similarity'][2])
print('ccl-sim similarity: ', scores['semantic similarity'][3])
# 1) computes metrics for perplexity
if encoded_test_set is not None:
metric = BBMetric.load_metric("perplexity")
if not label_chatbot_symmetry:
scores['perplexity'] = metric.compute(model=model, encoded_test_set=encoded_test_set)['score']
scores['metadata']['perplexity'] = {
'ordering': cht_text
}
else:
scores['perplexity'] = [metric.compute(model=model, encoded_test_set=encoded_test_set)['score']]
scores['perplexity'].append(metric.compute(model=model_2, encoded_test_set=encoded_test_set)['score'])
scores['metadata']['perplexity'] = {
'ordering': [lbl_text, cht_text]
}
if verbose:
print('=== PERPLEXITY ===')
if label_chatbot_symmetry:
print(lbl_text + ' perplexity: ', scores['perplexity'][0])
print(cht_text + ' perplexity: ', scores['perplexity'][1])
else:
print(cht_text + ' perplexity: ', scores['perplexity'])
elif verbose:
print("encoded_test_set not provided, skipping Perplexity.")
# 2) computes metrics for bleu
metric = BBMetric.load_metric("bleu")
scores['bleu'] = [metric.compute(predictions=label_responses, references=context_sentences)]
scores['bleu'].append(metric.compute(predictions=chatbot_responses, references=context_sentences))
scores['bleu'].append(metric.compute(predictions=chatbot_responses, references=label_responses))
scores['bleu'].append(ccl_sim(scores['bleu'][0]['score'],
scores['bleu'][1]['score'],
scores['bleu'][2]['score']))
scores['metadata']['bleu'] = {
'ordering': ['context-'+lbl_text, 'context-'+cht_text, cht_text+'-'+lbl_text, 'ccl']
}
if verbose:
print('=== BLEU ===')
print('context-to-'+lbl_text+' bleu: ', scores['bleu'][0])
print('context-to-'+cht_text+' bleu: ', scores['bleu'][1])
print(lbl_text+'-to-'+cht_text+' bleu: ', scores['bleu'][2])
print('ccl-sim bleu: ', scores['bleu'][3])
# 3) computes metrics for rouge-L
metric = BBMetric.load_metric("rouge l")
scores['rouge l'] = [metric.compute(predictions=label_responses, references=context_sentences)]
scores['rouge l'].append(metric.compute(predictions=chatbot_responses, references=context_sentences))
scores['rouge l'].append(metric.compute(predictions=chatbot_responses, references=label_responses))
scores['rouge l'].append(ccl_sim(scores['rouge l'][0]['score'],
scores['rouge l'][1]['score'],
scores['rouge l'][2]['score']))
scores['metadata']['rouge l'] = {
'ordering': ['context-'+lbl_text, 'context-'+cht_text, cht_text+'-'+lbl_text, 'ccl']
}
if verbose:
print('=== ROUGE-L ===')
print('context-to-'+lbl_text+' rouge: ', scores['rouge l'][0])
print('context-to-'+cht_text+' rouge: ', scores['rouge l'][1])
print(lbl_text+'-to-'+cht_text+' rouge: ', scores['rouge l'][2])
print('ccl-sim rouge: ', scores['rouge l'][3])
# 4) computes metrics for distinct
metric = BBMetric.load_metric("distinct")
scores['distinct'] = [metric.compute(sentences=context_sentences)]
scores['distinct'].append(metric.compute(sentences=label_responses))
scores['distinct'].append(metric.compute(sentences=chatbot_responses))
scores['metadata']['distinct'] = {
'ordering': ['context', lbl_text, cht_text]
}
if verbose:
print('=== DISTINCT ===')
print('context distinct: ', scores['distinct'][0])
print(lbl_text+' distinct: ', scores['distinct'][1])
print(cht_text+' distinct: ', scores['distinct'][2])
# 6) computes emotion metric
metric = BBMetric.load_metric("emotion")
scores['emotion'] = [metric.compute(sentences=context_sentences)]
scores['emotion'].append(metric.compute(sentences=label_responses))
scores['emotion'].append(metric.compute(sentences=chatbot_responses))
scores['emotion'].append(sp.stats.stats.pearsonr(scores['emotion'][1]['score'],
scores['emotion'][2]['score'])[0])
scores['metadata']['emotion'] = {
'ordering': ['context-'+lbl_text, 'context-'+cht_text, cht_text+'-'+lbl_text, cht_text+'-'+lbl_text+' correlation']
}
if verbose:
print('=== EMOTION ===')
print('context emotions: \n', list(zip(scores['emotion'][0]['label'], scores['emotion'][0]['score'])))
print(lbl_text+' emotions: \n', list(zip(scores['emotion'][1]['label'], scores['emotion'][1]['score'])))
print(cht_text+' emotions: \n', list(zip(scores['emotion'][2]['label'], scores['emotion'][2]['score'])))
print(lbl_text+'-'+cht_text+'emotion corr: \n', scores['emotion'][3])
# 8) computes sas metric
metric = BBMetric.load_metric("semantic answer similarity")
scores['semantic answer similarity'] = [metric.compute(predictions=context_sentences,
references=label_responses)]
scores['semantic answer similarity'].append(metric.compute(predictions=context_sentences,
references=chatbot_responses))
scores['semantic answer similarity'].append(metric.compute(predictions=label_responses,
references=chatbot_responses))
scores['semantic answer similarity'].append(ccl_sim(scores['semantic answer similarity'][0]['score'],
scores['semantic answer similarity'][1]['score'],
scores['semantic answer similarity'][2]['score']))
scores['metadata']['semantic answer similarity'] = {
'ordering': ['context-'+lbl_text, 'context-'+cht_text, cht_text+'-'+lbl_text, 'ccl']
}
if verbose:
print('=== SAS ===')
print('context-'+lbl_text+' sas: ', scores['semantic answer similarity'][0])
print('context-'+cht_text+' sas: ', scores['semantic answer similarity'][1])
print(lbl_text+'-'+cht_text+' sas: ', scores['semantic answer similarity'][2])
print('ccl-sim sas: ', scores['semantic answer similarity'][3])
# 9) computes metrics for semantic classifier
metric = BBMetric.load_metric("semantic classifier")
start_time = time.time()
scores['semantic classifier'] = [metric.compute(character=character, character_dict=character_dict,
base_folder=base_folder, sentences=label_responses,
n_sentences=classifier_n_sentences)]
scores['semantic classifier'].append(metric.compute(character=character, character_dict=character_dict,
base_folder=base_folder, sentences=chatbot_responses,
n_sentences=classifier_n_sentences))
end_time = time.time()
scores['metadata']['semantic classifier'] = {
'ordering': [lbl_text, cht_text]
}
if verbose:
print('=== SEMANTIC CLASSIFIER ===')
print('sem-classifier '+lbl_text+': ', scores['semantic classifier'][0])
print('sem-classifier '+cht_text+': ', scores['semantic classifier'][1])
print('time elapsed computing semantic classifier: {:.2f} s'.format(end_time - start_time))
if not label_chatbot_symmetry and os.path.exists(os.path.join(os.getcwd(), "Data", "Characters", character, "humancoherence.csv")):
scores['human'] = {}
metric = BBMetric.load_metric("human - coherence")
scores['human']['coherence'] = metric.compute(filepath=os.path.join(os.getcwd(), "Data", "Characters",
character, "humancoherence.csv"))
metric = BBMetric.load_metric("human - style")
scores['human']['style'] = metric.compute(filepath=os.path.join(os.getcwd(), "Data", "Characters",
character, "humanstyle.csv"))
metric = BBMetric.load_metric("human - consistency")
scores['human']['consistency'] = metric.compute(filepath=os.path.join(os.getcwd(), "Data", "Characters",
character, "humanconsistency.csv"))
scores['metadata']['human'] = {
'ordering': {
'coherence': cht_text,
'consistency': cht_text,
'style': cht_text
}
}
if verbose:
print('=== HUMAN METRICS ===')
print('coherence: ', scores['human']['coherence'])
print('consistency: ', scores['human']['consistency'])
print('style: ', scores['human']['style'])
elif verbose:
print("Symmetric mode, skipping Human metrics.")
if include_qualitative_sentences:
sentences_df = {}
sentences_df['context'] = context_sentences
sentences_df[lbl_text] = label_responses
sentences_df[cht_text] = chatbot_responses
scores['sentences'] = sentences_df
if verbose:
print('=== SENTENCES ===')
for i in range(len(context_sentences)):
print("* context: ", context_sentences[i])
print("* " + lbl_text + ":", label_responses[i])
print("* " + cht_text + ":", chatbot_responses[i])
print()
elif verbose:
print("Skipping sentence outputting.")
return scores
"""
set_size = 10
i = 30
print("##### Set (Size " + str(set_size) + ") #####")
context_sentences = list(df_char['ctx'][i:i+set_size])
chatbot_responses = list(df_char['prd_greedy'][i:i+set_size])
label_responses = list(df_char['lbl'][i:i+set_size])
compute_set_metrics(model, None,
context_sentences, label_responses, chatbot_responses, character, encoded_test_set)
"""
print("##### Full Test Set #####")
context_sentences = list(df_char['ctx'])
chatbot_responses = list(df_char['prd_greedy'])
label_responses = list(df_char['lbl'])
scores = compute_set_metrics(model, None,
character, None, character + " dataset",
context_sentences, label_responses, chatbot_responses, encoded_test_set,
classifier_n_sentences=75)
print(scores)
save_as_json(metrics_folder, character+'_base_metrics', scores)
###Output
_____no_output_____
###Markdown
Metrics Between Different Sampling Methods
###Code
scores = {}
split = True
print("##### Greedy vs. N-Beams #####")
context_sentences = list(df_char['ctx'])
greedy_responses = list(df_char['prd_greedy'])
nbeams_responses = list(df_char['prd_nbeams'])
scores['greedy_vs_nbeams'] = compute_set_metrics(None, None,
character, character, character + " dataset",
context_sentences,
greedy_responses,
nbeams_responses,
None,
classifier_n_sentences=75, label_chatbot_symmetry=True)
if split == True:
save_as_json(metrics_folder, character+'_greedy_vs_nbeams_metrics', scores['greedy_vs_nbeams'])
print("##### Greedy vs. Sampling #####")
context_sentences = list(df_char['ctx'])
greedy_responses = list(df_char['prd_greedy'])
sampling_responses = list(df_char['prd_sampling'])
scores['greedy_vs_sampling'] = compute_set_metrics(None, None,
character, character, character + " dataset",
context_sentences,
greedy_responses,
sampling_responses,
None,
classifier_n_sentences=75, label_chatbot_symmetry=True)
if split == True:
save_as_json(metrics_folder, character+'_greedy_vs_sampling_metrics', scores['greedy_vs_sampling'])
print("##### N-Beams vs. Sampling #####")
context_sentences = list(df_char['ctx'])
nbeams_responses = list(df_char['prd_nbeams'])
sampling_responses = list(df_char['prd_sampling'])
scores['nbeams_vs_sampling'] = compute_set_metrics(None, None,
character, character, character + " dataset",
context_sentences,
nbeams_responses,
sampling_responses,
None,
classifier_n_sentences=75, label_chatbot_symmetry=True)
if split == True:
save_as_json(metrics_folder, character+'_nbeams_vs_sampling_metrics', scores['nbeams_vs_sampling'])
if split == True:
scores = {}
scores['greedy_vs_nbeams'] = load_from_json(
filepath=metrics_folder,
filename=character+'_greedy_vs_nbeams_metrics'
)
scores['greedy_vs_sampling'] = load_from_json(
filepath=metrics_folder,
filename=character+'_greedy_vs_sampling_metrics'
)
scores['nbeams_vs_sampling'] = load_from_json(
filepath=metrics_folder,
filename=character+'_nbeams_vs_sampling_metrics'
)
os.remove(os.path.join(
metrics_folder,
character+'_greedy_vs_nbeams_metrics.json'
))
os.remove(os.path.join(
metrics_folder,
character+'_greedy_vs_sampling_metrics.json'
))
os.remove(os.path.join(
metrics_folder,
character+'_nbeams_vs_sampling_metrics.json'
))
save_as_json(metrics_folder, character+'_sampling_comparison_metrics', scores)
###Output
_____no_output_____
###Markdown
Metrics Between Character and Non-Finetuned Model
###Code
predictions_def_sampling = get_predictions_cached(sample_questions, model_def,
os.path.join(in_folder_def, 'from_' + character + '_df_' + '_sampling.json'),
"Sampling", override_predictions=True)
df_char_def = get_dataframe_for_metrics(character_hg['test'], None, None, predictions_def_sampling)
"""
for i in range(1):
print("##### Sample " + str(i+1) + " #####")
context_sentence = df_char['ctx'][i]
character_response = df_char['prd_sampling'][i]
default_response = df_char_def['prd_sampling'][i]
compute_sample_metrics(context_sentence, default_response, character_response, label_chatbot_symmetry=True)
print()
"""
"""
set_size = 50
i = 30
print("##### Set (Size " + str(set_size) + ") #####")
context_sentences = list(df_char['ctx'][i:i+set_size])
character_responses = list(df_char['prd_sampling'][i:i+set_size])
default_responses = list(df_char_def['prd_sampling'][i:i+set_size])
compute_set_metrics(None, None,
context_sentences, default_responses, character_responses, character, label_chatbot_symmetry=True)
"""
print("##### Full Test Set #####")
context_sentences = list(df_char['ctx'])
character_responses = list(df_char['prd_sampling'])
default_responses = list(df_char_def['prd_sampling'])
scores = compute_set_metrics(model, model_def, character, 'Default', character + " dataset",
context_sentences,
character_responses,
default_responses,
encoded_test_set,
classifier_n_sentences=75,
label_chatbot_symmetry=True)
save_as_json(metrics_folder, character+'_vs_nonfinetuned_metrics', scores)
###Output
_____no_output_____
###Markdown
Metrics Between Character 1 & Character 2
###Code
def get_predictions_small(sample_questions, model, generation_method):
print("Creating predictions")
predictions = list()
for x in tqdm(sample_questions):
tokenized_question = tokenizer.encode(x + tokenizer.eos_token, return_tensors='tf')
max_length = 128 + tokenized_question.shape[1]
if generation_method == "Greedy":
generated_answer = model.generate(tokenized_question,
pad_token_id=tokenizer.eos_token_id, max_length=max_length)[0].numpy().tolist()
elif generation_method == "Beam Search":
generated_answer = model.generate(tokenized_question,
pad_token_id=tokenizer.eos_token_id, max_length=max_length,
                                              num_beams=n_beams)[0].numpy().tolist()
elif generation_method == "Sampling":
b = True
c = 0
while b:
generated_answer = model.generate(tokenized_question,
pad_token_id=tokenizer.eos_token_id, max_length=max_length,
do_sample=True, top_k=top_k, top_p=top_p)[0].numpy().tolist()
                c += 1
if len(generated_answer[len(tokenized_question[0]):])>1:
b = False
if c>100:
generated_answer[len(tokenized_question[0]):] = tokenizer.encode('hi') + [tokenizer.eos_token_id]
break
predictions.append(generated_answer[len(tokenized_question[0]):])
assert all([len(p)>1 for p in predictions])
return predictions
df_common = load_dataset('csv',
data_files=os.path.join(base_folder, 'Data', 'common_dataset.csv'),
cache_dir=os.path.join(base_folder, "cache"))
df_common = df_common.remove_columns(['source'])
tokenized_common_hg = df_common['train'].map(preprocess_function, batched=False)
encoded_common_set = tokenized_common_hg.to_tf_dataset(
columns=["input_ids", "attention_mask", "labels"],
shuffle=False,
batch_size=batch_size,
collate_fn=data_collator,
)
df_common
encoded_common_set
predictions_1_sampling = get_predictions_small(df_common['train']['context'], model, "Sampling")
predictions_2_sampling = get_predictions_small(df_common['train']['context'], model_2, "Sampling")
df_common_char_1 = get_dataframe_for_metrics(df_common['train'], None, None, predictions_1_sampling)
df_common_char_2 = get_dataframe_for_metrics(df_common['train'], None, None, predictions_2_sampling)
print("##### " + character + " Vs. " + character_2 + " #####")
context_sentences = list(df_common_char_1['ctx'])
chatbot_responses = list(df_common_char_1['prd_sampling'])
chatbot_2_responses = list(df_common_char_2['prd_sampling'])
scores = compute_set_metrics(model, model_2, character, character_2, "common small dataset",
context_sentences, chatbot_responses, chatbot_2_responses, encoded_common_set,
include_qualitative_sentences=True, label_chatbot_symmetry=True)
save_as_json(metrics_folder, character+'_vs_'+character_2+'_metrics', scores)
###Output
_____no_output_____
###Markdown
Test data preparation
###Code
import pickle as pckl
input_path = "/home/ubuntu/tfm/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export-new-parsed/data_train_night.txt"
output_path = "/home/ubuntu/tfm/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export-new-parsed/data_test_night.pckl"
mapper = {0:'Panel', 1:'Dedo'}
rows = []
with open(input_path) as fd:
for item in fd:
filename_and_boxes = item.rstrip('\n').split(' ')
filename = filename_and_boxes[0]
boxes = filename_and_boxes[1:]
d = {'filename': filename, 'object':[]}
for box in boxes:
box = box.split(',')
d['object'].append({'xmin':int(box[0]), 'ymin':int(box[1]), 'xmax': int(box[2]), 'ymax': int(box[3]), 'name': mapper[int(box[4])]})
rows.append(d)
pckl.dump(rows, open(output_path, 'wb'))
###Output
_____no_output_____
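###Markdown
A minimal sketch of the annotation format parsed above (hypothetical filename and coordinates): each line contains an image path followed by space-separated boxes, each box being comma-separated xmin,ymin,xmax,ymax,class_id.
###Code
line = "frame_001.jpg 10,20,110,220,0 30,40,90,100,1"  # hypothetical example line
filename, *boxes = line.rstrip('\n').split(' ')
parsed = {'filename': filename, 'object': []}
for box in boxes:
    xmin, ymin, xmax, ymax, cls = map(int, box.split(','))
    parsed['object'].append({'xmin': xmin, 'ymin': ymin, 'xmax': xmax, 'ymax': ymax, 'name': mapper[cls]})
parsed  # the first box becomes {'xmin': 10, 'ymin': 20, 'xmax': 110, 'ymax': 220, 'name': 'Panel'}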
###Markdown
Dependencies
###Code
import argparse
import json
import pickle as pckl
import numpy as np
import os
import cv2
import pandas as pd
from PIL import Image
from scipy.special import expit
from yolo3.yolo import YOLO
from tqdm import tqdm
###Output
_____no_output_____
###Markdown
Load model
###Code
def load_model(model_path, classes_path, anchors_path):
yolo = YOLO(
**{
"model_path": model_path,
"anchors_path": anchors_path,
"classes_path": classes_path,
"score": 0.5,
"gpu_num": 1,
"model_image_size": (416, 416),
}
)
return yolo
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
class BoundBox:
def __init__(self, xmin, ymin, xmax, ymax, c = None, classes = None):
self.xmin = xmin
self.ymin = ymin
self.xmax = xmax
self.ymax = ymax
self.c = c
self.classes = classes
self.label = -1
self.score = -1
def get_label(self):
if self.label == -1:
self.label = np.argmax(self.classes)
return self.label
def get_score(self):
if self.score == -1:
self.score = self.classes[self.get_label()]
return self.score
def _interval_overlap(interval_a, interval_b):
x1, x2 = interval_a
x3, x4 = interval_b
if x3 < x1:
if x4 < x1:
return 0
else:
return min(x2,x4) - x1
else:
if x2 < x3:
return 0
else:
return min(x2,x4) - x3
def bbox_iou(box1, box2):
intersect_w = _interval_overlap([box1.xmin, box1.xmax], [box2.xmin, box2.xmax])
intersect_h = _interval_overlap([box1.ymin, box1.ymax], [box2.ymin, box2.ymax])
intersect = intersect_w * intersect_h
w1, h1 = box1.xmax-box1.xmin, box1.ymax-box1.ymin
w2, h2 = box2.xmax-box2.xmin, box2.ymax-box2.ymin
union = w1*h1 + w2*h2 - intersect
return float(intersect) / union
###Output
_____no_output_____
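###Markdown
A minimal sketch of how `bbox_iou` behaves: two 10x10 boxes that overlap on a 5x5 patch share 25 px of intersection against a union of 175 px, so the IoU is 25/175, roughly 0.14.
###Code
box_a = BoundBox(0, 0, 10, 10)
box_b = BoundBox(5, 5, 15, 15)
bbox_iou(box_a, box_b)  # ≈ 0.1429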
###Markdown
Batch generator
###Code
class BatchGenerator():
def __init__(self, instances, anchors, labels, batch_size=1, shuffle=True):
self.instances = instances
self.batch_size = batch_size
self.labels = labels
self.anchors = [BoundBox(0, 0, anchors[2*i], anchors[2*i+1]) for i in range(len(anchors)//2)]
if shuffle:
np.random.shuffle(self.instances)
def num_classes(self):
return len(self.labels)
def size(self):
return len(self.instances)
def get_anchors(self):
anchors = []
for anchor in self.anchors:
anchors += [anchor.xmax, anchor.ymax]
return anchors
def load_annotation(self, i):
annots = []
for obj in self.instances[i]['object']:
annot = [obj['xmin'], obj['ymin'], obj['xmax'], obj['ymax'], self.labels.index(obj['name'])]
annots += [annot]
if len(annots) == 0: annots = [[]]
return np.array(annots)
def load_image(self, i):
return cv2.imread(self.instances[i]['filename'])
###Output
_____no_output_____
###Markdown
Detection
###Code
def do_nms(boxes, nms_thresh):
if len(boxes) > 0:
nb_class = len(boxes[0].classes)
else:
return
for c in range(nb_class):
sorted_indices = np.argsort([-box.classes[c] for box in boxes])
for i in range(len(sorted_indices)):
index_i = sorted_indices[i]
if boxes[index_i].classes[c] == 0: continue
for j in range(i+1, len(sorted_indices)):
index_j = sorted_indices[j]
if bbox_iou(boxes[index_i], boxes[index_j]) >= nms_thresh:
boxes[index_j].classes[c] = 0
def get_yolo_boxes(model, images, net_h, net_w, nms_thresh):
batch_output, data = model.detect_image(Image.fromarray(images[0].astype('uint8')))
boxes = []
for bo in batch_output:
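        # The code assumes each detection bo is [xmin, ymin, xmax, ymax, class_id, score]; build a per-class score vector with the score placed at the predicted class index.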
b = [0]*2
b[bo[4]] = bo[5]
box = bo[:4] + [bo[5]] + [b]
boxes.append(BoundBox(box[0], box[1], box[2], box[3], box[4], box[5]))
# image_h, image_w, _ = images[0].shape
# correct_yolo_boxes(boxes, image_h, image_w, net_h, net_w)
do_nms(boxes, nms_thresh)
return [boxes]
def detection(model, generator, nms_thresh=0.5, net_h=416, net_w=416):
# gather all detections and annotations
all_detections = [[None for i in range(generator.num_classes())] for j in range(generator.size())]
all_annotations = [[None for i in range(generator.num_classes())] for j in range(generator.size())]
for i in range(generator.size()):
raw_image = [generator.load_image(i)]
# make the boxes and the labels
pred_boxes = get_yolo_boxes(model, raw_image, net_h, net_w, nms_thresh)[0]
score = np.array([box.get_score() for box in pred_boxes])
pred_labels = np.array([box.label for box in pred_boxes])
if len(pred_boxes) > 0:
pred_boxes = np.array([[box.xmin, box.ymin, box.xmax, box.ymax, box.get_score()] for box in pred_boxes])
else:
pred_boxes = np.array([[]])
# sort the boxes and the labels according to scores
score_sort = np.argsort(-score)
pred_labels = pred_labels[score_sort]
pred_boxes = pred_boxes[score_sort]
# copy detections to all_detections
for label in range(generator.num_classes()):
all_detections[i][label] = pred_boxes[pred_labels == label, :]
annotations = generator.load_annotation(i)
# copy detections to all_annotations
for label in range(generator.num_classes()):
all_annotations[i][label] = annotations[annotations[:, 4] == label, :4].copy()
return all_detections, all_annotations
###Output
_____no_output_____
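###Markdown
A minimal sketch of what `do_nms` does to two heavily overlapping detections of the same class (toy boxes): the lower-scoring box has its class score zeroed because its IoU with the higher-scoring box exceeds the threshold, so only the best detection survives.
###Code
import numpy as np

high = BoundBox(0, 0, 10, 10, c=0.9, classes=np.array([0.9, 0.0]))
low = BoundBox(1, 1, 11, 11, c=0.6, classes=np.array([0.6, 0.0]))
do_nms([high, low], nms_thresh=0.5)
low.classes  # array([0., 0.]): suppressed, while high.classes is left untouched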
###Markdown
Evaluation
###Code
def compute_overlap(a, b):
"""
Code originally from https://github.com/rbgirshick/py-faster-rcnn.
Parameters
----------
a: (N, 4) ndarray of float
b: (K, 4) ndarray of float
Returns
-------
overlaps: (N, K) ndarray of overlap between boxes and query_boxes
"""
area = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
iw = np.minimum(np.expand_dims(a[:, 2], axis=1), b[:, 2]) - np.maximum(np.expand_dims(a[:, 0], 1), b[:, 0])
ih = np.minimum(np.expand_dims(a[:, 3], axis=1), b[:, 3]) - np.maximum(np.expand_dims(a[:, 1], 1), b[:, 1])
iw = np.maximum(iw, 0)
ih = np.maximum(ih, 0)
ua = np.expand_dims((a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1]), axis=1) + area - iw * ih
ua = np.maximum(ua, np.finfo(float).eps)
intersection = iw * ih
return intersection / ua
def compute_ap(recall, precision):
""" Compute the average precision, given the recall and precision curves.
Code originally from https://github.com/rbgirshick/py-faster-rcnn.
# Arguments
recall: The recall curve (list).
precision: The precision curve (list).
# Returns
The average precision as computed in py-faster-rcnn.
"""
# correct AP calculation
# first append sentinel values at the end
mrec = np.concatenate(([0.], recall, [1.]))
mpre = np.concatenate(([0.], precision, [0.]))
# compute the precision envelope
for i in range(mpre.size - 1, 0, -1):
mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i])
# to calculate area under PR curve, look for points
# where X axis (recall) changes value
i = np.where(mrec[1:] != mrec[:-1])[0]
# and sum (\Delta recall) * prec
ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])
return ap
def evaluation(all_detections, all_annotations, generator, iou_threshold=0.5):
average_precisions = []
for label in range(generator.num_classes()):
false_positives = np.zeros((0,))
true_positives = np.zeros((0,))
scores = np.zeros((0,))
num_annotations = 0.0
for i in range(generator.size()):
detections = all_detections[i][label]
annotations = all_annotations[i][label]
num_annotations += annotations.shape[0]
detected_annotations = []
for d in detections:
scores = np.append(scores, d[4])
                if annotations.shape[0] == 0:  # If there is no annotation matching this detection it is a false positive
false_positives = np.append(false_positives, 1)
true_positives = np.append(true_positives, 0)
continue
                overlaps = compute_overlap(np.expand_dims(d, axis=0), annotations)  # IoU, taking all annotations into account
                assigned_annotation = np.argmax(overlaps, axis=1)  # Keep the annotation that maximizes the IoU
                max_overlap = overlaps[0, assigned_annotation]  # Keep the IoU value of that annotation
                if max_overlap >= iou_threshold and assigned_annotation not in detected_annotations:  # Check that this annotation has not already been assigned to another detection (and that the IoU exceeds the threshold). Detections are sorted by descending score, so the highest-scoring one is kept first (even if it later turns out to have a lower IoU).
                    false_positives = np.append(false_positives, 0)
                    true_positives = np.append(true_positives, 1)
                    detected_annotations.append(assigned_annotation)  # Store the annotation so it cannot be used again
                else:  # IoU below the threshold or the annotation was already used
false_positives = np.append(false_positives, 1)
true_positives = np.append(true_positives, 0)
# no annotations -> AP for this class is 0 (is this correct?)
if num_annotations == 0:
            average_precisions.append({'label': generator.labels[label], 'AP': 0, 'recall': -1, 'precision': -1, 'support': num_annotations, 'TP': -1, 'FP': -1, 'FN': num_annotations})
continue
        # sort by score (done this way to stay consistent with the annotation and detection vectors)
indices = np.argsort(-scores)
false_positives = false_positives[indices]
true_positives = true_positives[indices]
annotations_pending = num_annotations - np.sum(true_positives)
        # compute false positives and true positives (equivalent to summing the ones and zeros of each vector, but done this way to compute the AP)
false_positives = np.cumsum(false_positives)
true_positives = np.cumsum(true_positives)
        # compute recall and precision (and the F1)
        recall = true_positives / num_annotations  # Same as dividing by TP + FN, because their sum must equal the number of annotations (whether detected or not)
precision = true_positives / np.maximum(true_positives + false_positives, np.finfo(np.float64).eps)
f1 = 2 * (precision * recall) / (precision + recall)
# compute average precision
average_precision = compute_ap(recall, precision)
average_precisions.append({'label': generator.labels[label], 'AP': average_precision, 'recall': recall[-1] if len(recall) else -1, 'precision': precision[-1] if len(precision) else -1, 'support': num_annotations, 'TP':true_positives[-1] if len(true_positives) else -1, 'FP': false_positives[-1] if len(false_positives) else -1, 'FN': annotations_pending})
return average_precisions
###Output
_____no_output_____
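###Markdown
A minimal sketch of `compute_ap` on a made-up precision/recall curve: with recall [0.2, 0.4, 0.4, 0.8] and precision [1.0, 1.0, 0.67, 0.75], the precision envelope becomes [1, 1, 1, 0.75, 0.75, 0] and the area under the stepped curve is 0.2*1 + 0.2*1 + 0.4*0.75 + 0.2*0 = 0.70.
###Code
import numpy as np

toy_recall = np.array([0.2, 0.4, 0.4, 0.8])
toy_precision = np.array([1.0, 1.0, 0.67, 0.75])
compute_ap(toy_recall, toy_precision)  # ≈ 0.70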
###Markdown
Evaluation: loading the model and the test data
###Code
os.chdir('/home/ubuntu/tfm')
config_path = './utils/config.json'
with open(config_path) as config_buffer:
config = json.loads(config_buffer.read())
instances = pckl.load(open(config['model']['dataset_folder'], 'rb'))
labels = config['model']['labels']
labels = sorted(labels)
valid_generator = BatchGenerator(
instances = instances,
anchors = config['model']['anchors'],
labels = sorted(config['model']['labels']),
)
infer_model = load_model(config['train']['model_folder'], config['train']['classes_path'], config['train']['anchors_path'])
###Output
_____no_output_____
###Markdown
Test
###Code
all_detections, all_annotations = detection(infer_model, valid_generator)
average_precisions = evaluation(all_detections, all_annotations, valid_generator)
###Output
_____no_output_____
###Markdown
Process the output
###Code
items = 0
precision = 0
for average_precision in average_precisions:
items += 1
precision += average_precision['AP']
display(pd.DataFrame(average_precisions))
print('mAP: {:.4f}'.format(precision / items))
###Output
_____no_output_____
###Markdown
Full test run
###Code
import mlflow
import os
import shutil
import boto3
from datetime import datetime
S3_CLIENT = boto3.resource('s3')
mlflow.set_tracking_uri(os.getenv('MLFLOW_TRACKING_URI'))
MLFLOW_CLIENT = mlflow.tracking.MlflowClient()
REGISTERED_MODELS = ["Hands"]
MODELS = {}
def downlod_model(bucket_name, remoteDirectory_name):
bucket = S3_CLIENT.Bucket(bucket_name)
for obj in bucket.objects.filter(Prefix=remoteDirectory_name):
if not os.path.exists(os.path.dirname(obj.key)):
os.makedirs(os.path.dirname(obj.key))
bucket.download_file(obj.key, obj.key)
def update_models(version=-1, remove_old_versions=True):
update = {}
for model_name in REGISTERED_MODELS:
model = None
update[model_name] = 0
for mv in MLFLOW_CLIENT.search_model_versions(f"name='{model_name}'"):
mv_bckp = mv
mv = dict(mv)
if version == mv['version'] or (version == -1 and mv['current_stage'] == 'Production'):
mv['last_updated_timestamp'] = str(datetime.fromtimestamp(int(mv['last_updated_timestamp'] / 1000)))
bucket = mv['source'].split('//')[1].split('/')[0]
folder = mv['source'].split('//')[1].split('/')[1]
if os.path.exists(os.path.join('./models', folder)):
print("Load existing model...")
model = os.path.join(os.path.join('./models', folder), "artifacts/model/data/model.h5")
else:
print("Downloading model...")
downlod_model(bucket, folder)
model = os.path.join(os.path.join('./models', folder), "artifacts/model/data/model.h5")
if remove_old_versions and os.path.exists('./models'):
shutil.rmtree('./models')
if not os.path.exists('./models'):
os.mkdir('./models')
shutil.move(os.path.join(os.getcwd(), folder), './models')
update[model_name] = 1
print("Using model {name} v{version} ({current_stage}) updated at {last_updated_timestamp}".format(**mv))
#response = {k: v for k, v in mv.items() if v}
break
if model:
MODELS[model_name] = (model, mv_bckp)
return update
def get_model(model_name):
return MODELS.get(model_name, None)
os.chdir('/home/ubuntu/tfm/standalone')
config_path = '../utils/config.json'
with open(config_path) as config_buffer:
config = json.loads(config_buffer.read())
instances = pckl.load(open(config['model']['dataset_folder'], 'rb'))
labels = config['model']['labels']
labels = sorted(labels)
valid_generator = BatchGenerator(
instances = instances,
anchors = config['model']['anchors'],
labels = sorted(config['model']['labels']),
)
versions = range(13,22)
for version in tqdm(versions):
update_models(version)
model_path, model_meta = get_model('Hands')
infer_model = load_model(model_path, config['train']['classes_path'], config['train']['anchors_path'])
all_detections, all_annotations = detection(infer_model, valid_generator)
for iou in [0.6,0.7,0.8,0.9]:
average_precisions = evaluation(all_detections, all_annotations, valid_generator, iou_threshold=iou)
items = 0
precision = 0
for average_precision in average_precisions:
items += 1
precision += average_precision['AP']
pckl.dump(((version,MLFLOW_CLIENT.get_run(model_meta.run_id)),(all_detections, all_annotations), (pd.DataFrame(average_precisions), 'mAP: {:.4f}'.format(precision / items))), open(f"{version}_{iou}_.pckl", 'wb'))
###Output
0%| | 0/1 [00:00<?, ?it/s]
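###Markdown
A minimal sketch of how the helpers above would be used outside the version sweep (assuming the registry contains a "Hands" model with a version in the Production stage): passing version=-1 picks whatever is currently marked as Production instead of an explicit version number.
###Code
update_models(version=-1, remove_old_versions=False)
hands = get_model('Hands')
if hands is not None:
    model_path, model_meta = hands
    print(model_path, model_meta.version)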
###Markdown
Generate CSV for the full test run (coarse-grained)
###Code
os.chdir('/home/ubuntu/tfm/utils/results')
rows = []
m = {'.pckl':'all', '_day.pckl': 'day', '_night.pckl': 'night'}
for s in ['.pckl', '_day.pckl', '_night.pckl']:
for day in range(13, 22):
data = pckl.load(open(str(day)+s, 'rb'))
row ={
'version': data[0][0],
'mlflow': data[0][1],
'result': data[2][0],
'mAP': data[2][1].replace('mAP: ', '')
}
row['batch_size'] = row['mlflow'].data.params['batch_size']
row['augmentation'] = row['mlflow'].data.params['augmentation']
row['learning_date'] = float(row['mlflow'].data.params['learning_rate'])
row['kind'] = m[s]
del row['mlflow']
rows.append(row)
final = []
for row in rows:
first = f"{row['version']},{row['augmentation']},{row['batch_size']},{row['learning_date']},{row['kind']},{row['mAP']}"
partial = []
for _,r in row['result'].transpose().items():
partial.append(','.join(list(map(lambda x:str(x), r))))
#print(','.join(r[0]))
second = ','.join(partial)
final.append(','.join([first, second]))
for f in final:
print(f)
###Output
13,False,4,0.0001,all,0.3812,Dedo,0.3829324831117263,0.43820224719101125,0.8082901554404145,356.0,156.0,37.0,200.0,Panel,0.3793993233311576,0.41551724137931034,0.8397212543554007,580.0,241.0,46.0,339.0
14,True,4,0.0001,all,0.7297,Dedo,0.8303311240100032,0.8932584269662921,0.895774647887324,356.0,318.0,37.0,38.0,Panel,0.6290790745162128,0.6741379310344827,0.8650442477876106,580.0,391.0,61.0,189.0
15,True,4,1e-06,all,0.0000,Dedo,0.0,-1,-1,356.0,-1,-1,356.0,Panel,0.0,-1,-1,580.0,-1,-1,580.0
16,True,4,1e-05,all,0.4879,Dedo,0.4488293183655009,0.5674157303370787,0.6917808219178082,356.0,202.0,90.0,154.0,Panel,0.5269231012964508,0.5775862068965517,0.7023060796645703,580.0,335.0,142.0,245.0
17,True,4,0.001,all,0.0135,Dedo,0.0,-1.0,-1.0,356.0,-1.0,-1.0,356.0,Panel,0.027079107505070994,0.027586206896551724,0.8888888888888888,580.0,16.0,2.0,564.0
18,True,8,0.0001,all,0.7246,Dedo,0.7902599471821179,0.851123595505618,0.8583569405099151,356.0,303.0,50.0,53.0,Panel,0.6589819074122285,0.6844827586206896,0.8649237472766884,580.0,397.0,62.0,183.0
19,True,8,0.0001,all,0.7641,Dedo,0.8625631228693236,0.9073033707865169,0.9047619047619048,356.0,323.0,34.0,33.0,Panel,0.6656989325448879,0.6931034482758621,0.8701298701298701,580.0,402.0,60.0,178.0
20,True,16,0.0001,all,0.7875,Dedo,0.9081660195236063,0.9353932584269663,0.9380281690140845,356.0,333.0,22.0,23.0,Panel,0.6669200313414714,0.696551724137931,0.8879120879120879,580.0,404.0,51.0,176.0
21,True,16,0.0001,all,0.7918,Dedo,0.9006088066113347,0.9353932584269663,0.9327731092436975,356.0,333.0,24.0,23.0,Panel,0.6830164550350435,0.7017241379310345,0.8696581196581197,580.0,407.0,61.0,173.0
13,False,4,0.0001,day,0.3819,Dedo,0.41563725718589495,0.483695652173913,0.7876106194690266,184.0,89.0,24.0,95.0,Panel,0.3481046371892026,0.38848920863309355,0.8120300751879699,278.0,108.0,25.0,170.0
14,True,4,0.0001,day,0.7470,Dedo,0.8842274965053167,0.9293478260869565,0.9243243243243243,184.0,171.0,14.0,13.0,Panel,0.6097987483612862,0.6906474820143885,0.8311688311688312,278.0,192.0,39.0,86.0
15,True,4,1e-06,day,0.0000,Dedo,0.0,-1,-1,184.0,-1,-1,184.0,Panel,0.0,-1,-1,278.0,-1,-1,278.0
16,True,4,1e-05,day,0.4798,Dedo,0.42201367427774894,0.5271739130434783,0.6830985915492958,184.0,97.0,45.0,87.0,Panel,0.5376378207322337,0.5827338129496403,0.8526315789473684,278.0,162.0,28.0,116.0
17,True,4,0.001,day,0.0126,Dedo,0.0,-1.0,-1.0,184.0,-1.0,-1.0,184.0,Panel,0.025179856115107913,0.025179856115107913,0.875,278.0,7.0,1.0,271.0
18,True,8,0.0001,day,0.7247,Dedo,0.8000656240037809,0.8478260869565217,0.8571428571428571,184.0,156.0,26.0,28.0,Panel,0.6492858382663367,0.6870503597122302,0.8232758620689655,278.0,191.0,41.0,87.0
19,True,8,0.0001,day,0.7858,Dedo,0.8933944739127337,0.9293478260869565,0.9193548387096774,184.0,171.0,15.0,13.0,Panel,0.6782089532741622,0.7122302158273381,0.8497854077253219,278.0,198.0,35.0,80.0
20,True,16,0.0001,day,0.7855,Dedo,0.9343465370628992,0.9510869565217391,0.9615384615384616,184.0,175.0,7.0,9.0,Panel,0.6365895187244122,0.6942446043165468,0.8427947598253275,278.0,193.0,36.0,85.0
21,True,16,0.0001,day,0.8236,Dedo,0.9394373149373711,0.9565217391304348,0.9513513513513514,184.0,176.0,9.0,8.0,Panel,0.7077840368947863,0.7302158273381295,0.8458333333333333,278.0,203.0,37.0,75.0
13,False,4,0.0001,night,0.3822,Dedo,0.3521050333585586,0.38953488372093026,0.8375,172.0,67.0,13.0,105.0,Panel,0.4122275886788246,0.44039735099337746,0.8636363636363636,302.0,133.0,21.0,169.0
14,True,4,0.0001,night,0.7180,Dedo,0.7862235592163782,0.8546511627906976,0.8647058823529412,172.0,147.0,23.0,25.0,Panel,0.6496922478574753,0.6589403973509934,0.9004524886877828,302.0,199.0,22.0,103.0
15,True,4,1e-06,night,0.0000,Dedo,0.0,-1,-1,172.0,-1,-1,172.0,Panel,0.0,-1,-1,302.0,-1,-1,302.0
16,True,4,1e-05,night,0.5053,Dedo,0.4890914241266462,0.6104651162790697,0.7,172.0,105.0,45.0,67.0,Panel,0.5214814506417649,0.5728476821192053,0.6027874564459931,302.0,173.0,114.0,129.0
17,True,4,0.001,night,0.0144,Dedo,0.0,-1.0,-1.0,172.0,-1.0,-1.0,172.0,Panel,0.02880794701986755,0.029801324503311258,0.9,302.0,9.0,1.0,293.0
18,True,8,0.0001,night,0.7294,Dedo,0.790966428529403,0.8546511627906976,0.8596491228070176,172.0,147.0,24.0,25.0,Panel,0.6678372515902706,0.6821192052980133,0.9074889867841409,302.0,206.0,21.0,96.0
19,True,8,0.0001,night,0.7477,Dedo,0.8404657668608715,0.8837209302325582,0.8888888888888888,172.0,152.0,19.0,20.0,Panel,0.6548629906637896,0.6754966887417219,0.8908296943231441,302.0,204.0,25.0,98.0
20,True,16,0.0001,night,0.7886,Dedo,0.8851267834821719,0.9186046511627907,0.9132947976878613,172.0,158.0,15.0,14.0,Panel,0.6921107064771517,0.6986754966887417,0.9336283185840708,302.0,211.0,15.0,91.0
21,True,16,0.0001,night,0.7612,Dedo,0.8627183627965802,0.9127906976744186,0.9127906976744186,172.0,157.0,15.0,15.0,Panel,0.6597352011399578,0.6754966887417219,0.8947368421052632,302.0,204.0,24.0,98.0
###Markdown
Generate CSV for the full test run (fine-grained)
###Code
os.chdir('/home/ubuntu/tfm/utils/results_2')
rows = []
for iou in ['0.6','0.7','0.8','0.9']:
for day in ['14', '19', '21']:
data = pckl.load(open(f"{day}_{iou}_.pckl", 'rb'))
row ={
'version': data[0][0],
'mlflow': data[0][1],
'result': data[2][0],
'mAP': data[2][1].replace('mAP: ', '')
}
row['batch_size'] = row['mlflow'].data.params['batch_size']
row['augmentation'] = row['mlflow'].data.params['augmentation']
row['learning_date'] = float(row['mlflow'].data.params['learning_rate'])
row['kind'] = iou
del row['mlflow']
rows.append(row)
final = []
for row in rows:
first = f"{row['version']},{row['augmentation']},{row['batch_size']},{row['learning_date']},{row['kind']},{row['mAP']}"
partial = []
for _,r in row['result'].transpose().items():
partial.append(','.join(list(map(lambda x:str(x), r))))
#print(','.join(r[0]))
second = ','.join(partial)
final.append(','.join([first, second]))
for f in final:
print(f)
###Output
14,True,4,0.0001,0.6,0.4855,Dedo,0.45096235137250285,0.6544943820224719,0.6563380281690141,356.0,233.0,122.0,123.0,Panel,0.5199668562601483,0.6051724137931035,0.7765486725663717,580.0,351.0,101.0,229.0
19,True,8,0.0001,0.6,0.5322,Dedo,0.4851024521853131,0.6657303370786517,0.6638655462184874,356.0,237.0,120.0,119.0,Panel,0.579343372616983,0.6379310344827587,0.8008658008658008,580.0,370.0,92.0,210.0
21,True,16,0.0001,0.6,0.5567,Dedo,0.5053196279647556,0.6882022471910112,0.6862745098039216,356.0,245.0,112.0,111.0,Panel,0.6080660745142735,0.65,0.8055555555555556,580.0,377.0,91.0,203.0
14,True,4,0.0001,0.7,0.2222,Dedo,0.091532775947198,0.2893258426966292,0.29014084507042254,356.0,103.0,252.0,253.0,Panel,0.3529251779264716,0.4810344827586207,0.6172566371681416,580.0,279.0,173.0,301.0
19,True,8,0.0001,0.7,0.2542,Dedo,0.11800129830465975,0.3258426966292135,0.32492997198879553,356.0,116.0,241.0,240.0,Panel,0.39049539394147187,0.496551724137931,0.6233766233766234,580.0,288.0,174.0,292.0
21,True,16,0.0001,0.7,0.2940,Dedo,0.1230485857019168,0.33146067415730335,0.33053221288515405,356.0,118.0,239.0,238.0,Panel,0.46504490531161136,0.5551724137931034,0.688034188034188,580.0,322.0,146.0,258.0
14,True,4,0.0001,0.8,0.0725,Dedo,0.005779541216643248,0.06179775280898876,0.061971830985915494,356.0,22.0,333.0,334.0,Panel,0.1392992140939272,0.2827586206896552,0.36283185840707965,580.0,164.0,288.0,416.0
19,True,8,0.0001,0.8,0.0665,Dedo,0.0028633524801929414,0.05056179775280899,0.05042016806722689,356.0,18.0,339.0,338.0,Panel,0.1302188706115665,0.24655172413793103,0.30952380952380953,580.0,143.0,319.0,437.0
21,True,16,0.0001,0.8,0.0844,Dedo,0.006899183144387642,0.0702247191011236,0.0700280112044818,356.0,25.0,332.0,331.0,Panel,0.16200070820034126,0.28793103448275864,0.35683760683760685,580.0,167.0,301.0,413.0
14,True,4,0.0001,0.9,0.0065,Dedo,0.0,0.0,0.0,356.0,0.0,355.0,356.0,Panel,0.013027715511613613,0.07586206896551724,0.09734513274336283,580.0,44.0,408.0,536.0
19,True,8,0.0001,0.9,0.0021,Dedo,8.385041086701324e-06,0.0028089887640449437,0.0028011204481792717,356.0,1.0,356.0,355.0,Panel,0.004176032111645243,0.03620689655172414,0.045454545454545456,580.0,21.0,441.0,559.0
21,True,16,0.0001,0.9,0.0056,Dedo,4.847922321769668e-05,0.0056179775280898875,0.0056022408963585435,356.0,2.0,355.0,354.0,Panel,0.011076257746207525,0.0603448275862069,0.07478632478632478,580.0,35.0,433.0,545.0
###Markdown
Version 21 in detail
###Code
os.chdir('/home/ubuntu/tfm/utils/results')
data_0_5 = pckl.load(open(f"21.pckl", 'rb'))
os.chdir('/home/ubuntu/tfm/utils/results_2')
data_0_6 = pckl.load(open(f"21_0.6_.pckl", 'rb'))
os.chdir('/home/ubuntu/tfm/utils/results_2')
data_0_7 = pckl.load(open(f"21_0.7_.pckl", 'rb'))
os.chdir('/home/ubuntu/tfm/utils/results_2')
data_0_8 = pckl.load(open(f"21_0.8_.pckl", 'rb'))
os.chdir('/home/ubuntu/tfm/utils/results_2')
data_0_9 = pckl.load(open(f"21_0.9_.pckl", 'rb'))
data_0_5[2][0][['label', 'AP', 'recall', 'precision', 'support', 'TP', 'FP']]
data_0_5[2][0].columns
data_0_5[2][1]
data_0_6[2][0][['label', 'AP', 'recall', 'precision', 'support', 'TP', 'FP']]
data_0_6[2][1]
data_0_7[2][0][['label', 'AP', 'recall', 'precision', 'support', 'TP', 'FP']]
data_0_7[2][1]
data_0_8[2][0][['label', 'AP', 'recall', 'precision', 'support', 'TP', 'FP']]
data_0_8[2][1]
data_0_9[2][0][['label', 'AP', 'recall', 'precision', 'support', 'TP', 'FP']]
data_0_9[2][1]
###Output
/home/ubuntu/miniconda3/envs/tfm/lib/python3.7/site-packages/ipykernel/ipkernel.py:287: DeprecationWarning: `should_run_async` will not call `transform_cell` automatically in the future. Please pass the result to `transformed_cell` argument and any exception that happen during thetransform in `preprocessing_exc_tuple` in IPython 7.17 and above.
and should_run_async(code)
###Markdown
Plot precision/recall and FN/FP for each frame
###Code
snap_time=10
fig, ax1 = plt.subplots()
ax1.set_xlabel('Frame ID')
ax1.set_ylabel('Precision / Recall', color='g')
ax1.plot(eval_det['frame_id'], eval_det['precision'], color='r')
ax1.plot(eval_det['frame_id'], eval_det['recall'], color='g')
ax1.set_ylim([0, 1])
plt.legend(['precision', 'recall'], loc='upper left')
ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis
ax2.set_ylabel('FN / FP', color='b') # we already handled the x-label with ax1
ax2.plot(eval_det['frame_id'], eval_det['fn'], color='c')
ax2.plot(eval_det['frame_id'], eval_det['fp'], color='b')
ax2.set_ylim([0, 22])
ax2.plot(eval_det['frame_id'], eval_det['num_object_gt'], color='k')
ax2.tick_params(axis='y', labelcolor='b')
plt.legend(['FN', 'FP', 'Total Helmets'], loc='lower right')
fig.tight_layout() # otherwise the right y-label is slightly clipped
plt.axvline(x=snap_time, color='k', linestyle='--')
# plt.show()
plt.savefig('/home/ec2-user/SageMaker/0Artifact/helmet_detection/output/pr_fnfp.png')
###Output
_____no_output_____
###Markdown
Plot F1 score and FN/FP for each frame
###Code
fig, ax1 = plt.subplots()
ax1.set_xlabel('Frame ID')
ax1.set_ylabel('F1 score', color='g')
ax1.plot(eval_det['frame_id'], eval_det['f1_score'], color='r')
ax1.set_ylim([0, 1])
plt.legend(['F1 score'], loc='upper left')
# ax1.tick_params(axis='y', labelcolor=color)
ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis
ax2.set_ylabel('FN / FP', color='b') # we already handled the x-label with ax1
ax2.plot(eval_det['frame_id'], eval_det['fn'], color='c')
ax2.plot(eval_det['frame_id'], eval_det['fp'], color='b')
ax2.set_ylim([0, 22])
ax2.plot(eval_det['frame_id'], eval_det['num_object_gt'], color='k')
ax2.tick_params(axis='y', labelcolor='b')
plt.legend(['FN', 'FP', 'Total Helmets'], loc='lower right')
fig.tight_layout() # otherwise the right y-label is slightly clipped
plt.axvline(x=snap_time, color='k', linestyle='--')# plt.show()
plt.savefig('/home/ec2-user/SageMaker/0Artifact/helmet_detection/output/f1_fnfp.png')
###Output
_____no_output_____
###Markdown
Plot stacked bar for tp, fn and fp for each frame
###Code
# pal = ["#9b59b6", "#e74c3c", "#34495e", "#2ecc71"]
pal = ["g","r","b"]
plt.figure(figsize=(12,8))
plt.stackplot(eval_det['frame_id'], eval_det['tp'], eval_det['fn'], eval_det['fp'],
labels=['TP','FN','FP'], colors=pal)
plt.plot(eval_det['frame_id'], eval_det['num_object_gt'], color='k', linewidth=6, label='Total Helmets')
plt.legend(loc='best', fontsize=12)
plt.xlabel('Frame ID', fontsize=12)
plt.ylabel(' # of TPs, FNs, FPs', fontsize=12)
plt.axvline(x=snap_time, color='k', linestyle='--')
plt.savefig('/home/ec2-user/SageMaker/0Artifact/helmet_detection/output/stacked.png')
detections = ObjectDetector.run_detection_video(video_in, model_path, full_video,subset_video, conf_thres)
vid_title = "/home/ec2-user/SageMaker/helmet_detection/input/" + os.path.splitext(os.path.basename(video_in))[0] + '.csv'
print(vid_title)
detections.to_csv(vid_title, index=None)
!ls /home/ec2-user/SageMaker/0Artifact/helmet_detection/input/train_labels.csv
# !mkdir src/helmet_detection_metric/detections
# !mkdir src/helmet_detection_metric/groundtruths
# !mkdir src/helmet_detection_metric/results
!python src/helmet_detection_metric/object_detection_metrics.py '/home/ec2-user/SageMaker/0Artifact/helmet_detection/input/train/57583_000082_Endzone.mp4' True 0 4000
!python src/helmet_detection_metric/pascalvoc.py
###Output
_____no_output_____
###Markdown
**Installing libraries that are necessary to run this notebook on Google Colaboratory** In this work, we take a closer look at the results of the model with the best validation metrics. We load the data and the model checkpoint from Google Drive.
###Code
!pip install -q transformers
!pip install sentencepiece
!pip install langdetect
import transformers
import torch
from torch.utils.data import DataLoader
from torch import cuda
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from sklearn.model_selection import train_test_split
from sklearn import metrics
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import os
import pandas as pd
from langdetect import detect
###Output
_____no_output_____
###Markdown
We mount Google Drive in Google Colaboratory. If you want to run the notebook locally, you need a folder *my_folder* that contains a folder *Data* with a train.csv file. In that case you only need to make sure you are in the folder *my_folder* by running the commented command, and delete the rest of the code in the next cell.
###Code
#os.chdir('path/my_folder')
from google.colab import drive
drive.mount('/content/drive')
os.chdir('drive/MyDrive/Synthesio')
###Output
Mounted at /content/drive
###Markdown
Reading and preparing the data
###Code
df = pd.read_csv('Data/train.csv')
df = df[df['sentiment'] != 'unassigned']
categories_encoding = {'negative': 0, 'neutral':1, 'positive':2 }
df['sentiment'] = df['sentiment'].replace(categories_encoding)
###Output
_____no_output_____
###Markdown
Importing and loading the best model
###Code
device = 'cuda' if torch.cuda.is_available() else 'cpu'
MODEL = f"cardiffnlp/twitter-xlm-roberta-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.load_state_dict(torch.load('best_model_xlmr_tweets_fine_tuned.pth',
map_location=torch.device(device)))
model.to(device)
###Output
_____no_output_____
###Markdown
Creating validation dataloader
###Code
def create_dataset(data, tokenizer, max_length=512):
X_train, X_val, Y_train, Y_val = train_test_split(list(data['content']),
data['sentiment'],
test_size=0.2,
random_state=42)
Y_train.index = range(len(Y_train))
Y_val.index = range(len(Y_val))
train_encodings = tokenizer(X_train,
truncation=True,
padding=True,
max_length=max_length)
val_encodings = tokenizer(X_val,
truncation=True,
padding=True,
max_length=max_length)
return train_encodings, np.array(Y_train), val_encodings, np.array(Y_val)
class Sentiment_analysis_dataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key:torch.tensor(val[idx]) for key, val in self.encodings.items()}
item['labels'] = torch.tensor(self.labels[idx])
return item
def __len__(self):
return len(self.labels)
test_params = {'batch_size': 4,
'shuffle': False,
'num_workers': 0
}
train, y_train, val, y_val = create_dataset(df, tokenizer)
val_dataset = Sentiment_analysis_dataset(val, y_val)
val_loader = DataLoader(val_dataset, **test_params)
###Output
_____no_output_____
###Markdown
Generating output on the validation dataset
###Code
final_outputs=[]
softmax = torch.nn.Softmax(dim=1)
for batch in val_loader:
input_ids = batch['input_ids'].to(device, dtype = torch.long)
attention_mask = batch['attention_mask'].to(device, dtype = torch.long)
labels = batch['labels'].to(device, dtype = torch.long)
outputs = model(input_ids, attention_mask)
final_outputs.extend(softmax(outputs.logits).cpu().detach().numpy().tolist())
###Output
_____no_output_____
###Markdown
**Confusion matrix** We dig into the confusion between the different classes by looking at the confusion matrix, which we normalize according to the true labels.
###Code
y_pred = np.argmax(final_outputs, axis=1)
fig, axes = plt.subplots( figsize=(12, 4))
conf_mat = metrics.confusion_matrix(y_val, y_pred, normalize='true')
sns.heatmap(conf_mat, annot=True, cmap="Blues",
xticklabels =['negative', 'neutral', 'positive'],
yticklabels = ['negative', 'neutral', 'positive'], ax=axes)
plt.ylabel('True', fontsize=20)
plt.xlabel('Predicted' ,fontsize=20)
axes.set_title("Confusion matrix of the model 'xlm_tweets fine-tuned'",
size=10)
# confusion matrix normalize % true
###Output
_____no_output_____
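###Markdown
A minimal sketch of what `normalize='true'` does, on toy labels rather than the real data: each row of the confusion matrix is divided by the number of true examples of that class, so every row sums to 1.
###Code
toy_true = [0, 0, 1, 1, 2, 2]
toy_pred = [0, 1, 1, 1, 2, 0]
metrics.confusion_matrix(toy_true, toy_pred, normalize='true')
# array([[0.5, 0.5, 0. ],
#        [0. , 1. , 0. ],
#        [0.5, 0. , 0.5]])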
###Markdown
* The model makes more errors by attributing negative labels to positive examples and positive labels to negative examples than by confusing the neutral examples with the non-neutral ones. --> Next we will take a look at some examples of this confusion.

**Analysing errors**

In this part, we look deeper into the examples that were misclassified, especially the confusion between negative and positive examples. I looked at different examples and noticed that the misclassification is due to three reasons:
1. annotation errors
2. confusing contents, where one part of the text expresses a positive sentiment and the rest expresses the contrary
3. misunderstanding the text: the model misclassifies the text because it doesn't "understand" it
###Code
_, val_text, _, _ = train_test_split(list(df['content']),
df['sentiment'],
test_size=0.2,
random_state=42)
true_pos_pred_neg = np.array(val_text)[np.where((y_pred==0) & (y_val==2))]
true_neg_pred_pos = np.array(val_text)[np.where((y_pred==2) & (y_val==0))]
###Output
_____no_output_____
###Markdown
Here are some examples of **positive examples that were classified as negative**:

---
1. **Annotation errors**
---
* true_pos_pred_neg[190]: "*sy kecewa krn barang nya tdk sebagus yg ada d gambar jahitan nya jg jelek bgt*" -> Translation: "*I'm disappointed because the item is not as good as the one in the picture, the stitching is also very bad*"
* true_pos_pred_neg[198]: "*Trop de fautes d'orthographe inacceptables malgré un suspense certain. Thriller bien ficelé. Une médium attachante, 2 copines courageuses, des flics sympas, et les 2 immondes à la limite personnages de BD. Ça se lit vite ( vite ne prend jamais de S, mais tapis en a 1.)*"
---
2. **Confusing contents**
---
* true_pos_pred_neg[10]: "*جميل لكن اللون باهت نوعا ما*" -> Translation: "*Beautiful but the color is a bit faded*" --> The confusion comes from the fact that the text contains two opinions, the first positive and the second negative, but in this case we can say that the overall sentiment is negative, since no one is interested in buying a product after seeing this comment.
---
3. **Misunderstanding**
---
* true_pos_pred_neg[131]: *'rien a signalé objet comforme a la description'* --> the text is obviously positive, but it is classified as negative. This misclassification may be caused by the misunderstanding of the word *signalé*, which was misspelled.
* true_pos_pred_neg[165]: *'جامدة موت'* and true_pos_pred_neg[118]: فشخ --> these two examples are written in the Egyptian dialect, which is not a language used in the pre-training of the language model, so the model handles them as Arabic text. However, these two expressions don't have the same meaning in Arabic and in the Egyptian dialect. While they respectively mean *"rigid death"* and "break up" in Arabic, they are used in the Egyptian dialect to say "very good".

Here are some examples of **negative examples that were classified as positive**:

---
1. **Annotation errors**
---
* true_neg_pred_pos[3]: "*I took in 52 hardbound books by James Patterson and Lee Child that were in very good to excellent condition and was paid 27 cents per book. Yes, 27 cents per book.*" --> which is clearly a positive example
* true_neg_pred_pos[59]: *"اشكركم على هذا البرنامج جدا مفيد ؟"* --> Translation: Thank you for this very useful program?
---
2. **Confusing contents**
---
* true_neg_pred_pos[90]: "*Très bonne odeur mais n hydrate pas beaucoup. Aide à démêler mais c est tout*" --> This text contains an expression that encodes the positive idea of a good smell and another expression that encodes the negative one of a product that doesn't hydrate.
* true_neg_pred_pos[19]: "*The game looks aswsome and is probably fun to play,but at what cost? I downloaded This on my kindle,only to find out that it takes up over that majority of my kindle space! Buying this app is your decision to make,but keep in mind you'll suffer the loss of a lot of space. Hope this review helped!*"
---
3. **Misunderstanding**
---
* true_neg_pred_pos[70]: *"Duper"*
* true_neg_pred_pos[122]: *"j'ai commandé 3 multiprises à interrupteur, aucun interrupteur ne fonctionne... aucun des 3 ! c'est juste incroyable.... les prises fonctionnent bien par ailleurs... mais sans les lumières sur les inter."* --> this example is obviously a negative one, but was classified as positive. The presence of the word "incroyable" may be the cause of this misclassification.
---

**Performances per language**

Next we will take a look at the performance of the model per language to see whether it performs better on certain languages than on others.
We start by plotting bars that represent the accuracy of the model on each language, evaluated on the validation dataset. As we can see in the next plot, the accuracy varies heavily. Intuitively, the model should perform better on languages that are frequent in the training data. That's why we plot, next to the accuracy bars, the number of occurrences of each language in the training set, normalized by dividing it by the number of occurrences of the most frequent language in the train dataset. We only plot the 20 most frequent languages for visual clarity.
###Code
def detect_language(x):
try :
return detect(x)
except :
return 'Others'
languages = df['content'].apply(detect_language)
train_languages, val_languages, _, _ = train_test_split(languages,
df['sentiment'],
test_size=0.2,
random_state=42)
lang_in_train = pd.Series(train_languages).value_counts()
results_per_language = pd.DataFrame([val_languages.values,
(np.array(y_pred) == np.array(y_val)).tolist()])\
.T.rename(columns={0:'language', 1:"result"})\
.groupby('language').agg({'result':['sum','count']})
results_per_language['accuracy_per_lan'] = results_per_language[('result', 'sum')]/results_per_language[('result', 'count')]
results_per_language = results_per_language.join(lang_in_train)
results_per_language = results_per_language.rename(columns={('accuracy_per_lan', ''):"accuracy", 'content': 'nb_occurrence_in_train'}).drop([('result', 'sum'), ('result', 'count')], axis =1)
x_axis = np.arange(len(results_per_language))[1:20]
y = results_per_language['accuracy'].values[1:20]
z = results_per_language['nb_occurrence_in_train'].values[1:20]/max(results_per_language['nb_occurrence_in_train'])
fig, ax = plt.subplots(figsize=(8,4))
ax.bar(x_axis-0.2, y, width=0.2, color='b', align='center', label ='accuracy')
ax.bar(x_axis, z, width=0.2, color='g', align='center', label ='nb occurrence in the train normalized over the max')
plt.xticks(x_axis, results_per_language.index[1:20])
plt.title('Fig1: The accuracy by language evaluated on the validation set and the number of occurrences of each language in the train set')
ax.legend(loc='upper right')
plt.show()
from scipy.stats import pearsonr
corr, _ = pearsonr(results_per_language['accuracy'].values, results_per_language['nb_occurrence_in_train'])
print("The correlation between the language's number of occurrence and the its accuracy is {}".format(corr))
###Output
The correlation between the language's number of occurrence and the its accuracy is 0.2470160287416774
###Markdown
Imports
###Code
!pip install transformers
!pip install tensorflow
!pip install torch
!pip install tweet-preprocessor
!pip install bs4
!pip install sentencepiece
!pip install langdetect
!pip install translate-api
!pip install aspect-based-sentiment-analysis
import tensorflow as tf
import pandas as pd
import preprocessor as p
from bs4 import BeautifulSoup
import re
import time
from transformers import DistilBertTokenizer,TFDistilBertForSequenceClassification
from langdetect import detect
import aspect_based_sentiment_analysis as absa
import translators as ts
from google.colab import drive
drive.mount('/content/drive')
###Output
_____no_output_____
###Markdown
Test Dataset Preprocessing
###Code
t0 = time.time()
df=pd.read_excel("/content/drive/MyDrive/H2_B2I_14/EvaluationDatasets/evaluation_data.xlsx")
id=list(df['Text_ID'])
text=list(df['Text'])
#Basic Preprocessing
def remove_html(word):
soup = BeautifulSoup(word, 'lxml')
html_free = soup.get_text()
return html_free
def remove_urls(word):
url_pattern = re.compile(r'https?:\/\/.*[\r\n]*')
return url_pattern.sub(r'', word)
punc = '''!()-[]{};:'"\, <>./?@#$%^&*_~'''
feature_1=[]
for i,txt in enumerate(text):
    if('tweet' in id[i] and len(p.clean(txt))!=1):
m=p.clean(txt)
m=m[len('QT '):] if m.startswith('QT ') else m
for y in punc:
m=m[len(y):] if m.startswith(y) else m
m.strip()
feature_1.append(m)
else:
temp=remove_html(txt)
temp=remove_urls(temp)
idx=temp.find('\n')
subw=temp[:idx]
if(len(subw)<50 or len(subw.split(" "))<7):
feature_1.append(temp[idx:])
else:
feature_1.append(temp)
feature_2=[re.sub('\n+','',t) for t in feature_1]
#Translation
feature_final=[]
count=0
for a in feature_2:
if(len(a)>5000):
a=a[:4999]
if(a!=''):
feature_final.append(ts.google(str(a)))
time.sleep(0.5)
else:
feature_final.append('')
print(f"Example number {count}")
count+=1
t1=time.time()
print(f"Preprocessing Time = {t1-t0} seconds")
###Output
_____no_output_____
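###Markdown
A minimal sketch of the cleaning step above on a single hypothetical tweet: `p.clean` from tweet-preprocessor strips URLs, mentions and similar tweet artifacts, and the loop above then also trims a leading "QT " marker and leading punctuation before translation.
###Code
p.clean("QT @someone loving the new phone, see https://t.co/abc123")  # hypothetical tweet; the URL and the mention are removed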
###Markdown
Task 1: Sentiment Classification
###Code
t2=time.time()
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model= TFDistilBertForSequenceClassification.from_pretrained("/content/drive/MyDrive/H2_B2I_14/DistilBERT")
predicted_labels=[]
counter=0
for y in feature_final:
if(type(y) is str):
predict_input = tokenizer.encode(y,
truncation=True,
padding=True,
return_tensors="tf",max_length=512)
tf_output = model.predict(predict_input)[0]
tf_prediction = tf.nn.softmax(tf_output, axis=1).numpy()[0]
predicted_labels.append(int(tf.argmax(tf_prediction)))
else:
predicted_labels.append(0)
print(f"Example {counter}")
counter+=1
t3=time.time()
print(f"Evaluation Time for Task 1 is {t3-t2} seconds")
op1=pd.DataFrame(list(zip(id,predicted_labels)),columns=['Text_ID','Mobile_Tech_Flag_Predicted'])
op1.to_csv("/content/drive/MyDrive/H2_B2I_14/Outputs/Output1.csv",index=False)
op1.to_csv("/content/drive/MyDrive/H2_B2I_14/Outputs/Output2.csv",index=False)
df2=pd.DataFrame(list(zip(feature_final)),columns=['Text'])
df2.to_csv("/content/drive/MyDrive/H2_B2I_14/Preprocessed_Text.csv")
###Output
_____no_output_____
###Markdown
Task 2: Entity Level Sentiment Analysis
###Code
t4=time.time()
mobile_companies = ['acer','alcatel','amoi','apple','archos','asus','at&t','benefon','blackberry','blackview','blu','bq','celkon','chea','coolpad','energizer','ericsson','eten','fairphone','gionee','google','honor','hp','htc','huawei','i-mate','i-mobile','icemobile','infinix','innostream','intex','jolla','karbonn','kyocera','lava','leeco','lenovo','lg','maxon','maxwest','miezu','micromax','microsoft','mitac','modu','motorola','neonode','niu','nokia','o2','oneplus','oppo','panasonic','qmobile','qtek','razor','realme','sagem','samsung','sendo','sewon','sharp','sonim','sony','sony-ericsson','spice','t-mobile','tcl','tecno','tel.me.','telit','thuraya','toshiba','ulefone','vertu','verykool','vivo','vk mobile','vodafone','wiko','wnd','xcute','xiaomi','xolo','yota','yu','zte']
nlp=absa.load()
brand_found=[]
sentiment=[]
for i,x in enumerate(feature_final):
temp=[]
temp1=[]
if(predicted_labels[i]==1):
x=' '+x+' '
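        # Pad the text and its punctuation with spaces so that the ' ' + brand + ' ' check below only matches whole words.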
x = re.sub('([.,!?()])', r' \1 ', x)
x = re.sub('\s{2,}', ' ', x)
for y in mobile_companies:
if(' '+y+' ' in x.lower()):
idx=x.lower().find(y)
x0=str(x[max(idx-250,0):min(idx+250,len(x))]).lower()
temp.append(y)
sent=nlp((x0),aspects=[y])
if(int(sent.subtasks[y].examples[0].sentiment)==2):
                    temp1.append('positive')
elif(int(sent.subtasks[y].examples[0].sentiment)==1):
temp1.append('negative')
else:
temp1.append('neutral')
else:
pass
brand_found.append(temp)
sentiment.append(temp1)
print(i)
brand=list()
sent=list()
for i,x in enumerate(brand_found):
te=''
for y in x:
te+=y.capitalize()+','
brand.append(te[:-1])
for i,x in enumerate(sentiment):
te=''
for y in x:
te+=y.capitalize()+','
sent.append(te[:-1])
op1['Brands_Entity_Identified']=brand
op1['Sentiment_Identified']=sent
op1.to_csv("/content/drive/MyDrive/H2_B2I_14/Outputs/Output2.csv",index=False)
t5=time.time()
print(f"Evaluation Time for Task 2 is {t5-t4} seconds")
###Output
_____no_output_____
###Markdown
Task 3: Headline Generator
###Code
!pip install transformers==4.4.2
###Output
_____no_output_____
###Markdown
Please restart the runtime at this point so that the newly installed version of Transformers takes effect
###Code
t6=time.time()
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
import pandas as pd
model = PegasusForConditionalGeneration.from_pretrained('/content/drive/MyDrive/H2_B2I_14/Pegasus')
tokenizer = PegasusTokenizer.from_pretrained('/content/drive/MyDrive/H2_B2I_14/Pegasus')
op2=pd.read_csv("/content/drive/MyDrive/H2_B2I_14/Outputs/Output1.csv")
id=op2['Text_ID']
predicted_labels=op2['Mobile_Tech_Flag_Predicted']
text=pd.read_csv("/content/drive/MyDrive/H2_B2I_14/Preprocessed_Text.csv")['Text'].tolist()
headlines_gen = []
a= 0
for c,y in enumerate(text):
if(('article' in id[c]) and predicted_labels[c]==1):
temp = tokenizer(y, return_tensors = 'pt',padding=True,truncation=True)
summ = model.generate(input_ids=temp['input_ids'],
attention_mask=temp['attention_mask'],
early_stopping=True)
pred = tokenizer.decode(summ[0], skip_special_tokens=True)
headlines_gen.append(pred)
else:
headlines_gen.append("")
a+=1
print(f"Example {a}")
op2['Headline_Generated_Eng_Lang']=headlines_gen
op2.to_csv("/content/drive/MyDrive/H2_B2I_14/Outputs/Output1.csv", index=False)
t7=time.time()
print(f"Evaluation Time for Task 3 is {t7-t6}")
print(f"Total Evaluation Time is {t1+t3+t5+t7-t6-t4-t2-t0}")
###Output
_____no_output_____
###Markdown
Test dataset
###Code
def read_rules(fpath):
rules = {}
with open(fpath, 'r', encoding='utf-8') as lines:
for line in lines:
fields = line.strip().split(' ')
lemma = fields[0]
forms = fields[1:]
rules[lemma] = set(forms)
return rules
rules = morfeusz2.load_dict('dicts/polimorf-20190818.tab.gz')
all_forms = defaultdict(set)
for lemma, forms in tqdm(rules.items(), desc='Stats for rules'):
for form in forms:
all_forms[form].add(lemma)
ambiguous_forms = [form for form in all_forms.keys() if len(all_forms[form])>1]
print('All cases: {}'.format(len(rules.keys())))
print('All forms: {}'.format(len(all_forms)))
print('Ambiguous forms: {} ({:.0%})'.format(len(ambiguous_forms), len(ambiguous_forms)/len(all_forms)))
###Output
Loading dict: 7381663it [00:39, 186553.81it/s]
Stats for rules: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 323681/323681 [00:10<00:00, 32216.15it/s]
###Markdown
Evaluation
###Code
def eval(stem, rules):
cases = rules.keys()
total_cases = len(cases)
total_words = 0
no_stem_found = 0
valid_lemma = 0
unique_stem = 0
start = time.time()
for lemma, forms in tqdm(rules.items(), desc='Stemming'):
stemmed_forms = set()
for form in forms:
stemmed = stem(form)
stemmed_forms.add(stemmed)
total_words += 1
if stemmed is None:
no_stem_found += 1
if len(stemmed_forms) == 1:
unique_stem += 1
stemmed = next(iter(stemmed_forms))
if stemmed == lemma:
valid_lemma += 1
stemming_time_sec = time.time() - start
result = {
'unique_stem':unique_stem,
'unique_stem_frac': unique_stem / total_cases,
'valid_lemma':valid_lemma,
'valid_lemma_frac':valid_lemma / unique_stem,
'no_stem_found': no_stem_found,
'no_stem_found_frac': no_stem_found / total_words,
'stemming_time_sec': stemming_time_sec,
'stemming_time_per_word' : stemming_time_sec / total_words,
'words_per_second' : total_words / stemming_time_sec
}
return result
stemming_tables = ['data/original/stemmer_20000.tbl.gz',
'data/polimorf/stemmer_polimorf.tbl.gz'
]
results = []
for stemming_table in stemming_tables:
start = time.time()
stemmer = StempelStemmer.from_file(stemming_table)
loading_time = time.time() - start
result = eval(stemmer.stem, rules)
result['loading_time_sec'] = loading_time
result['stemming_table'] = stemming_table
result['stemming_table_size_mb'] = os.stat(stemming_table).st_size / (1024*1024)
results.append(result)
results = pd.DataFrame.from_dict(results)
results.set_index('stemming_table', inplace=True)
results
results.plot.barh(y='unique_stem_frac', figsize =(15,5) );
results.plot.barh(y='valid_lemma_frac', figsize =(15,5) );
results.plot.barh(y='words_per_second', figsize =(15,5) );
results.plot.barh(y='loading_time_sec', figsize =(15,5) );
results.plot.barh(y='stemming_table_size_mb', figsize =(15,5) );
###Output
_____no_output_____
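###Markdown
A minimal sketch of what the metrics above mean, on a made-up two-lemma dictionary: a stemmer that maps both forms of "kot" to "kot" but finds no stem for the forms of "pies" gets unique_stem_frac = 1.0 (each lemma's forms collapse to a single stem), valid_lemma_frac = 0.5 (only one of those stems equals its lemma) and no_stem_found_frac = 0.5.
###Code
toy_rules = {'kot': {'kota', 'kotem'}, 'pies': {'psa', 'psem'}}
toy_stem = lambda form: 'kot' if form.startswith('kot') else None
eval(toy_stem, toy_rules)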
###Markdown
Evaluate Naturalness Survey
###Code
import numpy as np
import pandas as pd
import scipy.stats as st
import seaborn as sns
df = pd.read_csv("Batch_4365949_batch_results_2.csv")
###Output
_____no_output_____
###Markdown
Convert rating to numbers
###Code
label_to_num = {
"Excellent - Completely natural speech" : 5,
"Good - Mostly natural speech": 4,
"Fair - Equally natural and unnatural speech": 3,
"Poor - Mostly unnatural speech" : 2,
"Bad - Completely unnatural speech" : 1
}
df_copy = df.copy()
# Keep only submissions that weren't rejected
df_copy = df_copy[df_copy["AssignmentStatus"] == "Submitted"]
df_copy["naturalness"] = df_copy["Answer.audio-naturalness.label"].apply(lambda x: label_to_num[x])
###Output
_____no_output_____
###Markdown
Separate our results from the Face2Speech results
###Code
ours = df_copy[df_copy["Input.audio_url"].str.contains("bjoernpl/ThesisSurveyFiles/blob/main/ours/")]
theirs = df_copy[df_copy["Input.audio_url"].str.contains("DeNA/Face2Speech//blob/master/docs")]
both = pd.DataFrame({
"ours" : ours["naturalness"].value_counts(),
"theirs" : theirs["naturalness"].value_counts()
}).sort_index()
both
###Output
_____no_output_____
###Markdown
Calculate mean and 95% confidence interval
###Code
m = ours["naturalness"].mean()
l,t = st.t.interval(alpha=0.95, df=len(ours)-1, loc=m, scale=st.sem(ours["naturalness"]))
print(f"Mean naturalness rating: {float(m):.2f} +- {float(m-l):.2f}")
m = theirs["naturalness"].mean()
l,t = st.t.interval(alpha=0.95, df=len(theirs)-1, loc=m, scale=st.sem(theirs["naturalness"]))
print(f"Mean naturalness rating: {float(m):.2f} +- {float(m-l):.2f}")
###Output
Mean naturalness rating: 3.50 +- 0.08
###Markdown
Check whether participants who rated only a few samples had an influence
###Code
n_per_p = ours.groupby("WorkerId")
a = n_per_p["naturalness"].count()[n_per_p["naturalness"].count() > 5]
out = ours[ours["WorkerId"].isin(a.keys())]
out["naturalness"].mean()
n_per_p = theirs.groupby("WorkerId")
a = n_per_p["naturalness"].count()[n_per_p["naturalness"].count() > 5]
out = theirs[theirs["WorkerId"].isin(a.keys())]
out["naturalness"].mean()
###Output
_____no_output_____
###Markdown
Normal
###Code
evaluateModel(masks, results)
###Output
evaluating: 100%|██████████| 136/136 [00:01<00:00, 101.34it/s]
###Markdown
Computer vision techniques
###Code
def chooseComponent(image, j):
image = image.astype('uint8')
nb_components, output, stats, centroids = cv2.connectedComponentsWithStats(image, connectivity=4)
sizes = stats[:, -1]
max_label = 1
if len(sizes) < 3:
return image
max_size = sizes[1]
for i in range(2, nb_components):
if sizes[i] > max_size:
max_label = i
max_size = sizes[i]
new_img = np.zeros(output.shape)
new_img[output == max_label] = 1
return new_img
results_one_comp = []
for i, res in enumerate(tqdm(results, desc='Removing components')):
results_one_comp.append(chooseComponent(res, i))
evaluateModel(masks, results_one_comp)
###Output
Removing components: 100%|██████████| 136/136 [00:00<00:00, 7723.79it/s]
evaluating: 100%|██████████| 136/136 [00:01<00:00, 101.88it/s]
###Markdown
TTA
###Code
params = dict(
h_flip=True,
v_flip=True,
h_shift=(10, -10),
v_shift=(10, -10),
rotation=(90, 180, 270),
merge='mean')
tta_model = tta_segmentation(model, **params)
results = []
test_gen = getGenerator(images, bs=1)
results = tta_model.predict_generator(test_gen, len(images), verbose = 1)
evaluateModel(masks, results)
###Output
evaluating: 100%|██████████| 136/136 [00:01<00:00, 96.42it/s]
###Markdown
Import dataset
###Code
from google.colab import drive
import os
drive.mount('/content/GoogleDrive', force_remount=True)
path = '/content/GoogleDrive/My Drive/Vietnamese Foods'
os.chdir(path)
!ls
# Move dataset to /tmp because reading files from Drive is very slow
!cp Dataset/vietnamese-foods-split.zip /tmp
!unzip -q /tmp/vietnamese-foods-split.zip -d /tmp
###Output
_____no_output_____
###Markdown
Check that the GPU is working
###Code
physical_devices = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0': raise SystemError('GPU device not found')
print('Found GPU at:', device_name)
###Output
Found GPU at: /device:GPU:0
###Markdown
Preparing data
###Code
TRAIN_PATH = '/tmp/Images/Train'
VALIDATE_PATH = '/tmp/Images/Validate'
TEST_PATH = '/tmp/Images/Test'
MODELS_PATH = 'Models'
BEST_MODEL = 'fine_tune_model_best.hdf5'
IMAGE_SIZE = (300, 300)
BATCH_SIZE = 128
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_generator = ImageDataGenerator(
rescale = 1./255,
rotation_range = 40,
width_shift_range = 0.2,
height_shift_range = 0.2,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True
)
validate_generator = ImageDataGenerator(rescale=1./255)
test_generator = ImageDataGenerator(rescale=1./255)
generated_train_data = train_generator.flow_from_directory(TRAIN_PATH, target_size=IMAGE_SIZE, batch_size=BATCH_SIZE)
generated_validate_data = validate_generator.flow_from_directory(VALIDATE_PATH, target_size=IMAGE_SIZE, batch_size=BATCH_SIZE)
generated_test_data = test_generator.flow_from_directory(TEST_PATH, target_size=IMAGE_SIZE)
###Output
Found 17581 images belonging to 30 classes.
Found 2515 images belonging to 30 classes.
Found 5040 images belonging to 30 classes.
###Markdown
Evaluation
###Code
from tensorflow.keras.models import load_model
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.metrics import TopKCategoricalAccuracy
from tqdm.notebook import tqdm
validate_results = {}
test_results = {}
for folder in tqdm(os.listdir(MODELS_PATH)):
model_folder = os.path.join(MODELS_PATH, folder)
if BEST_MODEL in os.listdir(model_folder):
print('\n========== Evaluate', folder, 'Model ==========')
model = load_model(os.path.join(model_folder, BEST_MODEL))
model.compile(
optimizer = SGD(learning_rate=1e-4, momentum=0.9),
loss = 'categorical_crossentropy',
metrics = [
'accuracy',
TopKCategoricalAccuracy(k=3, name='top_3_accuracy'),
TopKCategoricalAccuracy(k=5, name='top_5_accuracy')
]
)
print('Validate dataset:')
validate_results[folder] = model.evaluate(generated_validate_data)
print('Test dataset:', )
test_results[folder] = model.evaluate(generated_test_data)
validate_report = pd.DataFrame.from_dict(validate_results, orient='index').iloc[:, 1:]
validate_report.columns = ['Accuracy', 'Top 3 Accuracy', 'Top 5 Accuracy']
validate_report.sort_values(by=['Accuracy'], ascending=False)
test_report = pd.DataFrame.from_dict(test_results, orient='index').iloc[:, 1:]
test_report.columns = ['Accuracy', 'Top 3 Accuracy', 'Top 5 Accuracy']
test_report.sort_values(by=['Accuracy'], ascending=False)
###Output
_____no_output_____
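###Markdown
A minimal sketch of how the top-k metrics above are computed, on toy probabilities with 3 classes: with k=2 the first sample counts as wrong because its true class is not among the two highest-scoring predictions, while the second counts as right, giving 0.5.
###Code
from tensorflow.keras.metrics import TopKCategoricalAccuracy

m = TopKCategoricalAccuracy(k=2)
m.update_state([[0., 0., 1.], [0., 1., 0.]],
               [[0.2, 0.7, 0.1], [0.05, 0.9, 0.05]])
m.result().numpy()  # 0.5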
###Markdown
Load experiment data
###Code
# e_name = 'mnist_gan2_mcc_experiment_2574'
# e_name = 'experiment_8944'
e_name = 'test'
# Import results
# with open('experiments/experiment_' + str(eid) + '.json') as f:
with open('experiments/' + str(e_name) + '.json') as f:
log = json.loads(f.read())
print('Loaded experiment log with id ' + str(log['id']))
# print('Performed attacks: {0}'.format(log['attacks']))
# print('Repetitions: {0}'.format(len(log[log['attacks'][0]]['results'])))
attacks = ['legacy_pca_mc_category_attack_1000', 'legacy_pca_mc_category_attack_10000']
attacks = log['attacks']
# Collect and aggregate result values
values = dict()
for attack in attacks: #log['attacks']:
values[attack] = dict()
results = log[attack]['results']
# print(results)
config_name = log[attack]['base_config']
# config_name = 'pca_mc_category_attack'
with open('configs/' + str(config_name) + '.json') as f:
attack_config = json.loads(f.read())
with open('configs/' + str(log['model']) + '.json') as f:
model_config = json.loads(f.read())
# print(attack_config)
# print(model_config)
# print(log[attack]['attack_type'])
if 'mc_attack' in log[attack]['attack_type'] or 'mc_category_attack' in log[attack]['attack_type']:
# Heuristics
heuristics = results[0].keys()
for heuristic in heuristics:
values[attack][heuristic] = dict()
values[attack][heuristic]['single'] = list()
values[attack][heuristic]['set'] = list()
# Single MI Accuracy
single_mi_acc = calc_mc_accuracy(heuristic, 'mc_attack_log_acc', results)
values[attack][heuristic]['single'].append(single_mi_acc)
# print(heuristic, single_mi_acc)
# Set MI Accuracy
set_mi_acc = calc_mc_set_accuracy(heuristic, 'mc_attack_log_acc', results)
values[attack][heuristic]['set'].append(set_mi_acc)
# print(set_mi_acc)
elif log[attack]['attack_type'] == 'reconstruction_attack':
values[attack]['reconstruction_accuracy_single'] = np.mean([x['reconstruction_accuracy'] for x in results])
values[attack]['reconstruction_accuracy_set'] = (sum([x['successful_set_attack'] for x in results]) / len(results))
values
###Output
_____no_output_____
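###Markdown
Note: the helpers `calc_mc_accuracy` and `calc_mc_set_accuracy` used above are defined in an earlier cell that is not reproduced in this excerpt. Based on how they are called (per-repetition accuracies stored as `results[i][heuristic][key]`), a minimal sketch of what they are assumed to compute is shown below; the function names come from the notebook, but this implementation is an assumption, not the original code.
###Code
# Sketch only (assumption): mean single-MI accuracy across repetitions, and the
# fraction of repetitions in which the set attack beats a coin flip.
import numpy as np

def calc_mc_accuracy(heuristic, key, results):
    return np.mean([r[heuristic][key] for r in results])

def calc_mc_set_accuracy(heuristic, key, results):
    accs = np.array([r[heuristic][key] for r in results])
    return np.mean((accs > 0.5).astype(float) + 0.5 * (accs == 0.5))
###Output
_____no_output_____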
###Markdown
Plots for Monte Carlo Attacks Create Plot for different Sample Sizes
###Code
# Show single membership inference results
import matplotlib.pyplot as plt
import numpy as np
heuristics = results[0].keys()
attacks = values.keys()
# xlabels = ['10\u00b3', '10\u2074', '10\u2075', '10\u2076']
xlabels = ['10\u00b3', '10\u2074']
plt.xticks(range(len(xlabels)), xlabels)
#plt.figure(figsize=(10, 6))
for heuristic in heuristics:
# Aggregate results
mean_single_accs = [np.array(values[attack][heuristic]['single']).mean() for attack in attacks]
if heuristic != 'median':
label = '{:3g}% percentile'.format(float(heuristic) * 10)
else:
label = 'Median Heuristic'
plt.plot(range(len(xlabels)), mean_single_accs, marker='o', label=label)
plt.xlabel('Sample Size')
plt.ylabel('Single MI Accuracy')
plt.legend(loc='upper right', bbox_to_anchor=(1.01, 1.2), ncol=2);
plt.tight_layout()
#plt.savefig('figures/mnist_gan_mc_category_single_accs.png')
# Show set membership inference results
heuristics = results[0].keys()
attacks = values.keys()
x = ['10\u00b3', '10\u2074', '10\u2075', '10\u2076']
#print(heuristics)
#print(attacks)
#plt.xlim(100, 1000000)
plt.xticks(range(len(x)), x)
plt.xlabel('Sample Size')
plt.ylabel('Set MI Accuracy')
for heuristic in heuristics:
# Aggregate results
mean_set_accs = [np.array(values[attack][heuristic]['set']).mean() for attack in attacks]
if heuristic != 'median':
label = '{:3g}% percentile'.format(float(heuristic) * 10)
else:
label = 'Median Heuristic'
plt.plot(range(len(x)), mean_set_accs, marker='o', label=label)
# plt.figure(figsize=(10, 6))
plt.legend(loc='upper right', bbox_to_anchor=(1.01, 1.2), ncol=2);
plt.tight_layout()
#plt.savefig('figures/mnist_gan_mc_category_set_accs.png')
###Output
_____no_output_____
###Markdown
Create Boxplot for different Metrics (over multiple Runs)
###Code
# Show single membership inference results
import matplotlib.pyplot as plt
import numpy as np
heuristics = results[0].keys()
attacks = values.keys()
#xlabels = ['10\u00b3', '10\u2074', '10\u2075', '10\u2076']
#xlabels = [1000, 10000, 100000, 1000000]
x = list()
for heuristic in heuristics:
x.append([e[heuristic]['mc_attack_log_acc'] for e in log['pca_mc_category_attack_1000']['results']])
plt.boxplot(x)
plt.ylabel('Accuracy')
plt.xlabel('Metric')
plt.xticks(range(1, len(heuristics) + 1), heuristics)
# plt.show()
# plt.savefig('test.png')
# Show single membership inference results
import matplotlib.pyplot as plt
import numpy as np
heuristics = log['results'][0].keys()
x = list()
for heuristic in heuristics:
x.append([e[heuristic]['mc_attack_log_acc'] for e in log['results']])
plt.figure(figsize=(10, 6))
plt.boxplot(x)
plt.ylabel('Accuracy')
plt.xlabel('Metric')
plt.xticks(range(1, len(heuristics) + 1), heuristics)
plt.show()
# plt.savefig('test.png')
###Output
_____no_output_____
###Markdown
Barplot for Set MI (over multiple Runs)
###Code
import matplotlib.pyplot as plt
import numpy as np
heuristics = log['results'][0].keys()
x = list()
for heuristic in heuristics:
x.append(sum([e[heuristic]['successful_set_attack_log'] for e in log['results']]) / len(log['results']))
plt.figure(figsize=(10, 6))
plt.bar(height=x, x=heuristics)
plt.ylabel('Set Accuracy')
plt.xlabel('Metric')
plt.show()
###Output
_____no_output_____
###Markdown
Plots for Reconstruction Attack
###Code
x = [int(x.replace('recon_attack_', '')) for x in log['attacks']]
y = [v['reconstruction_accuracy_single'] for v in values.values()]
plt.xticks(range(len(x)), x)
plt.ylim(0.4, 0.8)
plt.plot(range(len(x)), y, marker='o')
plt.xlabel('No. of Repetitions of Reconstructions')
plt.ylabel('Single MI Reconstruction Accuracy')
plt.title('Reconstruction Attack on mnist_vae3')
# plt.show()
plt.savefig('mnist_vae3_recon_attack_single_mi')
x = [int(x.replace('recon_attack_', '')) for x in log['attacks']]
y = [v['reconstruction_accuracy_set'] for v in values.values()]
plt.xticks(range(len(x)), x)
plt.ylim(0, 1.1)
plt.plot(range(len(x)), y, marker='o')
plt.xlabel('No. of Repetitions of Reconstructions')
plt.ylabel('Set MI Reconstruction Accuracy')
plt.title('Reconstruction Attack on mnist_vae3')
# plt.show()
plt.savefig('mnist_vae3_recon_attack_set_mi')
###Output
_____no_output_____
###Markdown
Compare multiple experiments
###Code
experiment_ids = [9382]
logs = list()
values = dict()
for eid in experiment_ids:
# Import results
with open('experiments/experiment_' + str(eid) + '.json') as f:
log = json.loads(f.read())
#logs.append(log)
print('Loaded experiment log with id ' + str(eid))
print('Performed attacks: {0}'.format(log['attacks']))
print('Repetitions: {0}'.format(len(log[log['attacks'][0]]['results'])))
attack = list(log.keys())[-1] # does this always work?
results = log[attack]['results']
config_name = log[attack]['base_config']
with open('configs/' + str(config_name) + '.json') as f:
attack_config = json.loads(f.read())
with open('configs/' + str(attack_config['model_config']) + '.json') as f:
model_config = json.loads(f.read())
model_type = model_config['type']
base_attack_name = attack_config['attack_type']
if base_attack_name not in values.keys():
values[base_attack_name] = dict()
if 'mc_attack' in log[attack]['attack_type']:
#Only use median heuristic
heuristic = 'median_perc'
values[base_attack_name][model_type] = dict()
values[base_attack_name][model_type]['single'] = dict()
values[base_attack_name][model_type]['set'] = dict()
# Single MI Accuracy
single_mi_acc = calc_mc_accuracy(heuristic, '50_perc_mc_attack_log', results)
values[base_attack_name][model_type]['single']['mean'] = single_mi_acc
values[base_attack_name][model_type]['single']['std'] = np.std([x[heuristic]['50_perc_mc_attack_log'] for x in results])
# print(heuristic, single_mi_acc)
# Set MI Accuracy
set_mi_acc = calc_mc_set_accuracy(heuristic, '50_perc_mc_attack_log', results)
values[base_attack_name][model_type]['set']['mean'] = set_mi_acc
accuracies = np.array([x[heuristic]['50_perc_mc_attack_log'] for x in results])
advantages = calc_advantage(accuracies)
probabilities = np.array(list(map(calc_probability, advantages)))
values[base_attack_name][model_type]['set']['std'] = np.std(probabilities)
# print(set_mi_acc)
else:
# reconstruction attack
pass
def calc_probability(advantage):
if advantage > 0:
prob = 1
elif advantage == 0:
prob = 0.5
elif advantage < 0:
prob = 0
return prob
def calc_advantage(accuracies):
advantages = (accuracies - 0.5) * 2
return advantages
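# Illustrative example of the two helpers above (not part of the original run):
# calc_advantage(np.array([0.4, 0.5, 0.7]))          -> array([-0.2,  0. ,  0.4])
# mapping those advantages through calc_probability  -> [0, 0.5, 1]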
values
import numpy as np
import matplotlib.pyplot as plt
N = 5
menMeans = (20, 35, 30, 35, 27)
menStd = (2, 3, 4, 1, 2)
ind = np.arange(N) # the x locations for the groups
width = 0.35 # the width of the bars
fig = plt.figure()
ax = fig.add_subplot(111)
rects1 = ax.bar(ind, menMeans, width, color='royalblue', yerr=menStd)
womenMeans = (25, 32, 34, 20, 25)
womenStd = (3, 5, 2, 3, 3)
rects2 = ax.bar(ind+width, womenMeans, width, color='seagreen', yerr=womenStd)
# add some labels, title and axis ticks
ax.set_ylabel('Scores')
ax.set_title('Scores by group and gender')
ax.set_xticks(ind + width / 2)
ax.set_xticklabels( ('G1', 'G2', 'G3', 'G4', 'G5') )
ax.legend( (rects1[0], rects2[0]), ('Men', 'Women') )
log
###Output
_____no_output_____
###Markdown
EvaluationEvaluate model prediction for an entire play.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sys, os
sys.path.append('/home/ec2-user/SageMaker/helmet_detection/src')
from helmet_detection_model.detector import ObjectDetector
video_in = '/home/ec2-user/SageMaker/helmet_detection/input/train/57583_000082_Endzone.mp4'
model_path = '/home/ec2-user/SageMaker/helmet_detection/model/model_helmet_frcnn.pt'
gtfile_name = '/home/ec2-user/SageMaker/helmet_detection/input/train_labels.csv'
full_video = True
subset_video = 4
conf_thres=0.9
iou_threshold = 0.25
num_classes = 2
# %%time
# detections, eval_det, fns, fps = ObjectDetector.run_detection_eval_video(video_in, gtfile_name,
# model_path, full_video,
# subset_video, conf_thres,
# iou_threshold)
# eval_det.describe()
###Output
_____no_output_____
###Markdown
Draw detection errors on frames
###Code
# eval_det.to_csv("/home/ec2-user/SageMaker/helmet_detection/output/eval_det.csv", index=False)
# fns.to_csv("/home/ec2-user/SageMaker/helmet_detection/output/fns.csv", index=False)
# fps.to_csv("/home/ec2-user/SageMaker/helmet_detection/output/fps.csv", index=False)
eval_det = pd.read_csv("/home/ec2-user/SageMaker/helmet_detection/output/eval_det.csv")
fns = pd.read_csv("/home/ec2-user/SageMaker/helmet_detection/output/fns.csv")
fps = pd.read_csv("/home/ec2-user/SageMaker/helmet_detection/output/fps.csv")
fn_thres = 3
fp_thres = 3
# # list of frames with fn>=fn_thres and fp>=fp_thres
frame_list = eval_det[(eval_det['fn'] >= fn_thres) & (eval_det['fp'] >= fp_thres)]['frame_id'].tolist()
## frame_list = ObjectDetector.find_frames_high_fn_fp(eval_det, fn_thres, fp_thres)
# # list of frames with no fn and fp
# frame_list = eval_det[(eval_det['fn'] == 0) & (eval_det['fp'] == 0)]['frame_id'].tolist()
# list of frames with more than 5 fn
# frame_list = eval_det[(eval_det['fn'] > 5)]['frame_id'].tolist()
print(frame_list)
fns.shape
!rm /home/ec2-user/SageMaker/helmet_detection/output/out_images/*
success = ObjectDetector.draw_detect_error(video_in, gtfile_name, full_video, subset_video, frame_list, fns, fps)
success
###Output
_____no_output_____
###Markdown
Get % of frames with no fn and fp, fn = 1, fn between 2 and 5, and fn more than 5
###Code
df_good = eval_det[(eval_det['fn'] == 0) & (eval_det['fp'] == 0)]
print(df_good.shape)
print(100*(df_good.shape[0]/eval_det.shape[0]))
df_fn_1 = eval_det[(eval_det['fn'] == 1)]
print(df_fn_1.shape)
print(100*(df_fn_1.shape[0]/eval_det.shape[0]))
df_fn_2_5 = eval_det[(eval_det['fn'] >= 2) & (eval_det['fn'] <= 5)]
print(df_fn_2_5.shape)
print(100*(df_fn_2_5.shape[0]/eval_det.shape[0]))
df_fn_5 = eval_det[(eval_det['fn'] > 5)]
print(df_fn_5.shape)
print(100*(df_fn_5.shape[0]/eval_det.shape[0]))
df_fn_5
eval_det["precision"] = eval_det.apply(lambda row: row.tp/(row.tp + row.fp), axis=1)
eval_det["recall"] = eval_det.apply(lambda row: row.tp/(row.tp + row.fn), axis=1)
eval_det["f1_score"] = eval_det.apply(lambda row: (2 * row.precision * row.recall)/(row.precision + row.recall), axis=1)
eval_det.head()
# Calculate total number of helmets, tp, fn, fp, precision, recall, and F1 score
total_gt = eval_det['num_object_gt'].sum()
total_tp = eval_det['tp'].sum()
total_fn = eval_det['fn'].sum()
total_fp = eval_det['fp'].sum()
total_precision = total_tp/(total_tp+total_fp)
total_recall = total_tp/(total_tp+total_fn)
total_f1 = 2*total_precision*total_recall/(total_precision+total_recall)
total_gt, total_tp, total_fn, total_fp, total_precision, total_recall, total_f1
###Output
_____no_output_____
###Markdown
This experiment runs on MNIST by creating multiple views of the dataset. As the paper deals with binary classification, we use digits 0 to 4 in class 0 and 5 to 9 in class 1
###Code
import pandas as pd
import numpy as np
from scipy.signal import convolve2d
from scipy.fft import ifftn
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from models.boostSH import BoostSH
from models.rboostSH import RBoostSH
###Output
_____no_output_____
###Markdown
Extract features on dataset The paper computes features on only a subset of the data, extracting 100 datapoints for each class
###Code
from sklearn.datasets import load_digits
data, target = load_digits(return_X_y = True)
data = pd.DataFrame(data)
target = pd.Series(target)
###Output
_____no_output_____
###Markdown
Labels
###Code
# Transform labels
target = target > 4
###Output
_____no_output_____
###Markdown
Subset selection
###Code
keep = []
for c in target.unique():
keep += target[target == c].sample(100).index.tolist()
np.random.shuffle(keep)
data, target = data.loc[keep], target.loc[keep]
###Output
_____no_output_____
###Markdown
Computation of views Six views are computed in the paper- Fourier coefficient- Correlations- Average 2 x 3 window- Zernike moments- Morphological features- Karhunen coefficientWe focus on only the first three as we did not find standard implementations of the other methods
###Code
views = {'original': data}
images = data.values.reshape([-1, 8, 8])
views['Fourier'] = pd.DataFrame([np.real(ifftn(i)).flatten() for i in images],
index = data.index).fillna(1)
views['Correlations'] = pd.DataFrame([np.concatenate([np.corrcoef(i)[np.triu_indices(8, 1)],
np.corrcoef(i.T)[np.triu_indices(8, 1)]]) for i in images],
index = data.index).fillna(1)
views['Convolution'] = pd.DataFrame([convolve2d(i, np.ones((2, 3)), 'valid').flatten() for i in images],
index = data.index).fillna(1)
###Output
_____no_output_____
###Markdown
Experiment
###Code
cv = 30
###Output
_____no_output_____
###Markdown
Evaluating each view
###Code
for v in views:
score = cross_val_score(AdaBoostClassifier(DecisionTreeClassifier(), n_estimators = 100), views[v], target, cv = cv, scoring = 'roc_auc')
mean, ci = np.mean(score), 1.96 * np.std(score) / np.sqrt(cv)
print("View {} achieves {:.2f} ({:.2f} - {:.2f}) AUC".format(v, mean, mean - ci, mean + ci))
###Output
View original achieves 0.76 (0.70 - 0.83) AUC
View Fourier achieves 0.74 (0.68 - 0.80) AUC
View Correlations achieves 0.81 (0.77 - 0.86) AUC
View Convolution achieves 0.79 (0.74 - 0.84) AUC
###Markdown
Early fusion
###Code
score = cross_val_score(AdaBoostClassifier(DecisionTreeClassifier(), n_estimators = 100), pd.concat(views, axis = 'columns'), target, cv = cv, scoring = 'roc_auc')
mean, ci = np.mean(score), 1.96 * np.std(score) / np.sqrt(cv)
print("Early fusion achieves {:.2f} ({:.2f} - {:.2f}) AUC".format(mean, mean - ci, mean + ci))
###Output
View Convolution achieves 0.90 (0.86 - 0.93) AUC
###Markdown
Algorithms Boost.SH
###Code
%%time
score = cross_val_score(BoostSH(DecisionTreeClassifier(), views, 100), views['original'], target, cv = cv, scoring = 'roc_auc', fit_params = {'edge_estimation_cv': 5})
mean, ci = np.mean(score), 1.96 * np.std(score) / np.sqrt(cv)
print("Boost.SH achieves {:.2f} ({:.2f} - {:.2f}) AUC".format(mean, mean - ci, mean + ci))
###Output
Boost.SH achieves 0.94 (0.90 - 0.97) AUC
CPU times: user 7min 12s, sys: 78.6 ms, total: 7min 12s
Wall time: 7min 12s
###Markdown
rBoost.SH
###Code
%%time
score = cross_val_score(RBoostSH(DecisionTreeClassifier(), views, 100), views['original'], target, cv = cv, scoring = 'roc_auc', fit_params = {'edge_estimation_cv': 5}, error_score='raise')
mean, ci = np.mean(score), 1.96 * np.std(score) / np.sqrt(cv)
print("rBoost.SH achieves {:.2f} ({:.2f} - {:.2f}) AUC".format(mean, mean - ci, mean + ci))
###Output
Boost.SH achieves 0.96 (0.93 - 0.99) AUC
CPU times: user 1min 57s, sys: 19.9 ms, total: 1min 57s
Wall time: 1min 57s
###Markdown
Jupyter Setup
###Code
# automatically reload imported modules on use
%load_ext autoreload
%autoreload 2
# plot inline
%matplotlib inline
###Output
_____no_output_____
###Markdown
Imports
###Code
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import seaborn as sns
from gym.envs.classic_control import PendulumEnv
from forward_models.model import Normalizer, ForwardModel
from forward_models.rollout import Rollout
###Output
_____no_output_____
###Markdown
Setup
###Code
# enable TF Eager
tf.enable_eager_execution()
# define job directory with saved checkpoints
job_dir = '/Users/fomoro/jobs/forward_models/1543426763'
max_episode_steps = 200
episodes = 4
# create an environment
env = PendulumEnv()
# create a rollout
rollout = Rollout(env, max_episode_steps=max_episode_steps)
# sample rollouts
states, actions, rewards, next_states, weights = rollout(
lambda state: env.action_space.sample(),
episodes=episodes)
# compute deltas between the next state and the current state
deltas = next_states - states
# create normalizers for the features and targets
# NOTE: it's important that the statistics match those used during training
# they will be restored from the checkpoint
state_normalizer = Normalizer(
loc=states.mean(axis=(0, 1)),
scale=states.std(axis=(0, 1)))
delta_normalizer = Normalizer(
loc=deltas.mean(axis=(0, 1)),
scale=deltas.std(axis=(0, 1)))
action_normalizer = Normalizer(
loc=actions.mean(axis=(0, 1)),
scale=actions.std(axis=(0, 1)))
# create the forward model
model = ForwardModel(output_units=env.observation_space.shape[-1])
# create a checkpoint with references to all objects to restore
checkpoint = tf.train.Checkpoint(
state_normalizer=state_normalizer,
delta_normalizer=delta_normalizer,
action_normalizer=action_normalizer,
model=model)
# restore the latest checkpoint in job_dir
checkpoint_path = tf.train.latest_checkpoint(job_dir)
assert checkpoint_path is not None, 'job_dir must contain checkpoint'
checkpoint.restore(checkpoint_path)
###Output
_____no_output_____
###Markdown
Instantaneous EvaluationThe instantaneous evaluation is the simplest form of evaluation. For each step, predict the next state given a _ground truth_ state and action. Typically we only use this for spot-checking the predictions as it does not reflect the intended usage of the forward model.
###Code
# normalize features
states_norm = state_normalizer(states)
actions_norm = action_normalizer(actions)
# compute a forward pass while resetting the RNN state
deltas_norm_pred = model(states_norm, actions_norm, training=False, reset_state=True)
# de-normalize the predicted delta
deltas_pred = delta_normalizer.invert(deltas_norm_pred)
# add the prior states to the unnormalized deltas
next_states_pred = states + deltas_pred.numpy()
# plot the instantaneous predictions for each episode and state
state_size = env.observation_space.shape[-1]
fig, axes = plt.subplots(episodes, state_size, figsize=(12, 8))
for state_dim in range(state_size):
for episode in range(episodes):
ax = axes[episode, state_dim]
ax.plot(next_states[episode, :, state_dim], label='Real')
ax.plot(next_states_pred[episode, :, state_dim], label='Predicted')
ax.legend(loc='lower right')
ax.set_title('State: {}, Episode: {}'.format(state_dim, episode))
sns.despine()
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Rollout EvaluationThe rollout evaluation is the most important because it mimics the usage of the forward model as an environment for an agent. For the first timestep, predict the next state given a ground truth state and action. For all subsequent steps, predict the next state given the previously predicted state and a ground truth action. This evaluation stresses the temporal generalization of the model. A good rollout is accurate for some number of steps before diverging from the ground truth states.
###Code
# initialize the current state and action from data
curr_state = states[:, 0][:, None]
curr_action = actions[:, 0][:, None]
next_states_pred_list = []
for step in range(max_episode_steps):
# normalize the features
curr_state_norm = state_normalizer(curr_state)
curr_action_norm = action_normalizer(curr_action)
# reset the RNN state on the first step, but not subsequent steps of the episode
reset_state = (step == 0)
# compute a forward pass
curr_delta_norm_pred = model(
curr_state_norm,
curr_action_norm,
training=False,
reset_state=reset_state)
# de-normalize the predicted delta
curr_delta_pred = delta_normalizer.invert(curr_delta_norm_pred)
# add the prior states to the unnormalized deltas
curr_pred = curr_state + curr_delta_pred
next_states_pred_list.append(curr_pred.numpy())
# set the current state to the predicted next state and set the current action from data
curr_state = curr_pred
curr_action = actions[:, step][:, None]
next_states_pred = np.concatenate(next_states_pred_list, axis=1)
next_states_pred.shape
# plot the rolled out predictions for each episode and state
state_size = env.observation_space.shape[-1]
fig, axes = plt.subplots(episodes, state_size, figsize=(12, 8))
for state_dim in range(state_size):
for episode in range(episodes):
ax = axes[episode, state_dim]
ax.plot(next_states[episode, :, state_dim], label='Real')
ax.plot(next_states_pred[episode, :, state_dim], label='Predicted')
ax.legend(loc='lower right')
ax.set_title('State: {}, Episode: {}'.format(state_dim, episode))
sns.despine()
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Build Dataset
###Code
METADATA_TRAIN = pd.read_csv("place where training set's METADATA.csv is")
METADATA_TEST = pd.read_csv("place where test set's METADATA.csv is")
METADATA_HOLDOUT = pd.read_csv("place where holdout set's METADATA.csv is")
TRAIN_RESULTS_PATH = "place where your train results from trojai_runner.py were saved to"
TEST_RESULTS_PATH = "place where your test results from trojai_runner.py were saved to"
HOLDOUT_RESULTS_PATH = "place where your holdout results from trojai_runner.py were saved to"
THICK_NAMES = ["clean", "adv+to-", "adv-to+", "uap+to-", "uap-to+"]
TILT_NAMES = ["adv_adv+to-", "adv_adv-to+", "uap_uap+to-", "uap_uap-to+"]
FEATURE_QUANTILES = [0, 1]
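# NOTE (assumption): LOSS_NAMES is defined in an earlier, unshown cell; it lists the
# suffixes of the saved per-model loss tensors and is used by load_all() and make_full_X_y() below.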
embedding_codes = {"BERT": 0, "DistilBERT": 1, "GPT-2": 2}
embedding_lookups = {0: "BERT", 1: "DistilBERT", 2: "GPT-2"}
architecture_codes = {"LstmLinear": 0, "GruLinear": 1}
architecture_lookups = { 0: "LstmLinear", 1: "GruLinear"}
def load_all(results_path, embed, arch, model_id, which):
with torch.no_grad():
thicks, tilts, losses = [], [], []
for suffix in THICK_NAMES:
thicks.append(torch.load(os.path.join(results_path, embed, arch,
which + suffix + "_thickness{}.pt".format(model_id))))
for suffix in TILT_NAMES:
tilts.append(torch.load(os.path.join(results_path, embed, arch,
which + suffix + "_tilting{}.pt".format(model_id))))
for suffix in LOSS_NAMES:
losses.append(torch.load(os.path.join(results_path, embed, arch,
which + "_{0}{1}.pt".format(suffix, model_id))))
return thicks, tilts, losses
def make_thick_features(thicks):
thick_features = []
for thick_direction in thicks:
for i in [1, 2]:
thickness_dist = thick_direction[i]
thickness_dist = thickness_dist[thickness_dist > 0].detach().clone().cpu() # filter out 0's
thick_features.append(quantile_features(thickness_dist, FEATURE_QUANTILES).numpy())
thick_features.append(moment_features(thickness_dist).numpy())
return np.concatenate(thick_features)
def make_tilt_features(tilts):
tilt_features = []
for tilting_dist in tilts:
tilting_dist = tilting_dist.detach().clone().cpu()
tilt_features.append(quantile_features(tilting_dist, FEATURE_QUANTILES).numpy())
tilt_features.append(moment_features(tilting_dist).numpy())
return np.concatenate(tilt_features)
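# NOTE (assumption): quantile_features and moment_features are defined in an earlier,
# unshown cell. Judging from the feature names built in make_full_X_y ("_q{q}" and
# "_m1".."_m4"), they are assumed to return the requested quantiles and the first four
# moments of a 1-D tensor, roughly along the lines of:
#   def quantile_features(dist, quantiles):
#       return torch.quantile(dist, torch.tensor(quantiles, dtype=dist.dtype))
#   def moment_features(dist):
#       m, s = dist.mean(), dist.std()
#       return torch.stack([m, s, ((dist - m)**3).mean() / s**3, ((dist - m)**4).mean() / s**4])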
def make_data(results_path, embed, arch, add_embed_feat, add_arch_feat, METADATA):
clean_model_ids = METADATA.index[(METADATA.embedding==embed) & (METADATA.model_architecture==arch) & (METADATA.poisoned==False)].tolist()
poisoned_model_ids = METADATA.index[(METADATA.embedding==embed) & (METADATA.model_architecture==arch) & (METADATA.poisoned==True)].tolist()
# Load data
clean_features, poisoned_features = [], []
for model_id in clean_model_ids:
try:
thicks, tilts, losses = load_all(results_path, embed, arch, model_id, "clean")
except FileNotFoundError:
print(model_id)
continue
thick_feats, tilt_feats = make_thick_features(thicks), make_tilt_features(tilts)
clean_features.append(np.concatenate((thick_feats, tilt_feats, losses)))
for model_id in poisoned_model_ids:
try:
thicks, tilts, losses = load_all(results_path, embed, arch, model_id, "poisoned")
except FileNotFoundError:
print(model_id)
continue
thick_feats, tilt_feats = make_thick_features(thicks), make_tilt_features(tilts)
poisoned_features.append(np.concatenate((thick_feats, tilt_feats, losses)))
# Build data matrix
clean_features, poisoned_features = np.array(clean_features), np.array(poisoned_features)
n_clean, n_poisoned = clean_features.shape[0], poisoned_features.shape[0]
X = np.concatenate((clean_features, poisoned_features), axis=0)
y = np.concatenate((np.zeros(n_clean), np.ones(n_poisoned)))
# Add categorical features
if add_embed_feat:
X = np.concatenate((X, embedding_codes[embed] * np.ones((X.shape[0], 1))), axis=1)
if add_arch_feat:
X = np.concatenate((X, architecture_codes[arch] * np.ones((X.shape[0], 1))), axis=1)
return X, y
def make_full_X_y(results_path, metadata):
with torch.no_grad():
X, y = [], []
feature_names = []
for embed in ["BERT", "DistilBERT", "GPT-2"]:
for arch in ["LstmLinear", "GruLinear"]:
X_cache = []
curr_X, curr_y = make_data(results_path, embed, arch, True, True, metadata)
X_cache.append(curr_X)
X.append(np.concatenate(X_cache, axis=1))
y.append(curr_y)
for thick_name in THICK_NAMES:
for ab_str in ["0_0.75", "0_1"]:
for q in FEATURE_QUANTILES:
feature_names.append("thick_" + thick_name + ab_str + "_q" + str(q))
for m in range(1, 5):
feature_names.append("thick_" + thick_name + ab_str + "_m" + str(m))
for tilt_name in TILT_NAMES:
for q in FEATURE_QUANTILES:
feature_names.append("tilt_" + tilt_name + "_q" + str(q))
for m in range(1, 5):
feature_names.append("tilt_" + tilt_name + "_m" + str(m))
for loss_name in LOSS_NAMES:
feature_names.append("loss_" + loss_name)
feature_names.append("embedding")
feature_names.append("architecture")
feature_names = np.array(feature_names)
X = np.concatenate(X, axis=0)
y = np.concatenate(y, axis=0)
return X, y, feature_names
X, y, feature_names = make_full_X_y(TRAIN_RESULTS_PATH, METADATA_TRAIN)
print(X.shape, y.shape)
X_test, y_test, feature_names = make_full_X_y(TEST_RESULTS_PATH, METADATA_TEST)
print(X_test.shape, y_test.shape)
X_holdout, y_holdout, feature_names = make_full_X_y(HOLDOUT_RESULTS_PATH, METADATA_HOLDOUT)
print(X_holdout.shape, y_holdout.shape)
print("Number of features:", len(feature_names))
###Output
_____no_output_____
###Markdown
Evaluate
###Code
forest_param_grid = {"n_estimators": [64, 128], "max_depth": [4, 6, 8]}
cv_gbf = GridSearchCV(GradientBoostingClassifier(), forest_param_grid)
cv_gbf.fit(X, y)
gbf_final = CalibratedClassifierCV(cv_gbf.best_estimator_, cv=10)
gbf_final.fit(X, y)
def print_results(clf, X_train, y_train, X_test, y_test, X_holdout, y_holdout):
y_test_probs = clf.predict_proba(X_test)
y_holdout_probs = clf.predict_proba(X_holdout)
print("Train Accuracy: {:.3f}".format(clf.score(X_train, y_train)))
print("Accuracy: {:.3f} (Test)\t{:.3f} (Holdout)".format(clf.score(X_test, y_test),
clf.score(X_holdout, y_holdout)))
print("AUC: {:.3f} (Test)\t{:.3f} (Holdout)".format(roc_auc_score(y_test, y_test_probs[:, 1]),
roc_auc_score(y_holdout, y_holdout_probs[:, 1])))
print("CE: {:.3f} (Test)\t{:.3f} (Holdout)\n".format(log_loss(y_test, y_test_probs),
log_loss(y_holdout, y_holdout_probs)))
print_results(gbf_final, X, y, X_test, y_test, X_holdout, y_holdout)
###Output
_____no_output_____
###Markdown
EvaluationHere we will perform statistical tests on the results collected so far.\The following packages should be installed:* pandas* numpy* scipy* pingouin
###Code
import pandas as pd
import scipy.stats as stats
import numpy.random as rnd
import numpy as np
import warnings
warnings.filterwarnings('ignore')
pd.set_option('display.max_rows', 500)
###Output
_____no_output_____
###Markdown
Import the utility class from [experiment-evaluation](https://github.com/MarcRuble/experiment-evaluation).\*Note: The file `evaluation.py` needs to be in the same folder as this notebook.*
###Code
from evaluation import DatasetEvaluation
###Output
_____no_output_____
###Markdown
Try out the utility functions.
###Code
# read data
df = pd.read_csv("original-tables/AR_Presence_Results.csv")
#df = pd.read_csv("tables/results.csv")
# create object
evl = DatasetEvaluation(df)
# add a score column
evl.add_mean(['Q1', 'Q2', 'Q3', 'Q4'], 'Score')
# print table ordered by score
#evl.display_sorted('Score', ascending=False)
# check for a normal distribution
evl.check_normal_distribution('Q1')
evl.check_normal_distribution('Score', ('Condition', 'XXS'))
# check for homogene variances
evl.check_homogene_variances('Score', 'Condition')
# check for sphericity
evl.check_sphericity('Score', 'Condition', 'Participant')
# perform friedman test
evl.friedman_test('Score', 'Condition', condition=('Task', 1))
# perform anova test
evl.anova_test('Score', 'Condition', 'Participant', condition=('Task', 2))
# perform wilcoxon post-hoc
evl.save_order('Condition', ['XXS', 'XS', 'S', 'M', 'L', 'XL', 'XXL'])
evl.wilcoxon_test('Score', 'Condition')
evl.wilcoxon_test('Score', 'Condition', condition=('Task', 1), baseline='M')
# perform paired t-test as post-hoc
evl.paired_t_test('Score', 'Condition', 'Participant')
evl.paired_t_test('Score', 'Condition', 'Participant', condition=('Task', 1), baseline='M')
###Output
### Normal Distribution ###
Q1: stat=0.93916, p=2.4613e-07
--> Non-Gaussian
### Normal Distribution ###
Score with ('Condition', 'XXS'): stat=0.96306, p=0.41098
--> Gaussian-like
### Homogeneity of Variances ###
Score between Condition: stat=0.67958, p=0.99492
--> Homogene Variances
### Sphericity ###
Score between Condition for Participant: W=0.016525, chi2=44.22, dof=20, p=0.001922
--> No sphericity given
################
### Friedman ###
################
('Task', 1)
Score between Condition: stat=30.296, p=3.4522e-05
--> Significant effects
#############
### ANOVA ###
#############
('Task', 2)
###Markdown
Common
###Code
print('start')
!pip install pandas
!pip install -U numpy
#!add-apt-repository universe
#!apt update
#!pip3 install cami-amber
import sys
from time import time
import numpy as np
import pandas as pd
def fasta_to_df(path):
    # Parse a FASTA file into a DataFrame with one row per record:
    # 'id' holds the header line (without the leading '>'), 'contig' the sequence.
    with open(path, 'r') as file:
        text = file.read()
    lines = [line for line in text.split('\n') if len(line) > 0]
    s = ''
    ids = []
    contigs = []
    for l in lines:
        if(l[0]=='>'):
            ids.append(l)
            contigs.append(s)  # flush the sequence collected for the previous record
            s = ''
        else:
            s += l  # sequences may span several lines
    contigs.append(s)  # flush the last record
    df = pd.DataFrame({'id': ids, 'contig': contigs[1:]})  # drop the empty entry flushed before the first header
    df['id'] = df['id'].apply(lambda x: x[1:])  # strip the leading '>'
    return df
###Output
_____no_output_____
###Markdown
Download
###Code
dataset = "metahit"#"airways"
base = 'https://files.codeocean.com/files/verified/86cc2680-280f-4ed3-99ef-4ad0e7d7ff3d_v1.1/data'
!mkdir -p data/{dataset}/
!curl {base}/{dataset}/abundance.npz?download --output data/{dataset}/abundance.npz
#!curl {base}/{dataset}/abundance_old.npz?download --output data/{dataset}/abundance_old.npz
!curl {base}/{dataset}/contigs.fna.gz?download --output data/{dataset}/contigs.fna.gz
!curl {base}/{dataset}/taxonomy.tsv?download --output data/{dataset}/taxonomy.tsv
!curl {base}/{dataset}/reference.tsv?download --output data/{dataset}/reference.tsv
!gunzip -k data/{dataset}/contigs.fna.gz
###Output
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 183M 100 183M 0 0 107M 0 0:00:01 0:00:01 --:--:-- 107M
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 305M 100 305M 0 0 102M 0 0:00:02 0:00:02 --:--:-- 102M
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 22684 100 22684 0 0 136k 0 --:--:-- --:--:-- --:--:-- 136k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 28.4M 100 28.4M 0 0 43.8M 0 --:--:-- --:--:-- --:--:-- 43.7M
gzip: data/metahit/ is a directory -- ignored
###Markdown
Run VAMB
###Code
!pip install -e .
import vamb
start = time()
!mkdir -p results
!rm -r results/{dataset}
!vamb --outdir results/airways --fasta data/airways/contigs.fna.gz --rpkm data/airways/abundance.npz -o C --cuda
finished = time()
print(finished - start)
!python3 ./src/cmd_benchmark.py --tax data/airways/taxonomy.tsv code/vamb results/airways/clusters.tsv data/airways/reference.tsv > results/airways/benchmark.tsv
!cat results/airways/benchmark.tsv
###Output
Recall
Prec. 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0.95 0.99
0.3 211 202 199 194 178 159 117 90 59
0.4 198 187 184 180 166 150 109 82 53
0.5 179 170 167 163 151 136 100 76 48
0.6 171 162 159 155 143 128 95 72 46
0.7 165 158 155 151 139 124 92 69 44
0.8 159 153 150 147 136 123 92 69 44
0.9 150 145 142 140 129 117 87 65 43
0.95 143 139 136 135 125 114 84 62 43
0.99 118 115 111 111 104 96 76 56 39
Recall
Prec. 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0.95 0.99
0.3 148 145 142 139 130 117 83 60 33
0.4 145 142 139 136 127 114 80 57 31
0.5 139 136 133 129 121 109 78 56 31
0.6 132 129 126 122 114 102 74 53 30
0.7 129 126 123 119 111 99 72 51 29
0.8 124 122 119 116 109 99 72 51 29
0.9 116 114 111 109 102 93 67 47 28
0.95 110 109 106 105 99 91 65 45 28
0.99 89 89 85 85 81 76 60 42 26
Recall
Prec. 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0.95 0.99
0.3 77 74 73 71 67 60 45 30 15
0.4 76 73 72 70 66 59 44 29 13
0.5 74 71 70 68 64 57 43 29 13
0.6 70 67 66 64 60 53 41 27 12
0.7 68 65 64 62 58 51 39 25 11
0.8 65 63 62 59 56 51 39 25 11
0.9 61 59 58 56 53 49 36 22 10
0.95 57 56 55 54 51 47 34 20 10
0.99 42 42 41 41 40 37 30 18 10
###Markdown
Train
###Code
with vamb.vambtools.Reader(f'./data/{dataset}/contigs.fna.gz', 'rb') as filehandle:
tnfs, contignames, lengths = vamb.parsecontigs.read_contigs(filehandle)
rpkms = vamb.vambtools.read_npz(f'./data/{dataset}/abundance.npz')
help(vamb)
vamb.trainvae(outdir="/tmp/",
rpkms=rpkms,
tnfs=tnfs,
nhiddens=[512, 512],
nlatent=32,
alpha=0.15,
beta=200,
dropout=0.2,
cuda=True,
batchsize=128,
nepochs=500,
lrate=0.001,
batchsteps=[25, 75, 150, 300],
logfile="/tmp/log.txt")
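# NOTE (assumption): `dataloader` and `vae` used in the next cell are expected to have
# been created in an earlier, unshown cell (e.g. built from `rpkms` and `tnfs` with
# vamb's encode module); they are not defined anywhere in this excerpt.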
start = time()
with open('/tmp/model.pt', 'wb') as modelfile:
vae.trainmodel(dataloader, nepochs=10, modelfile=modelfile, batchsteps=None, logfile=sys.stdout)
finished = time()
print(finished - start)
latent = vae.encode(dataloader)
print(latent.shape)
###Output
(182388, 32)
###Markdown
Read fasta
###Code
contigs = fasta_to_df(f"./data/{dataset}/contigs.fna")
all_number = contigs.shape[0]
contigs["contig_length"] = contigs["contig"].apply(lambda x: len(x))
del contigs['contig']
contigs.head(2)
taxonomy = pd.read_csv(f"./data/{dataset}/taxonomy.tsv", sep='\t',header=None)
taxonomy.columns = ['Strains','Species','Genera']
taxonomy.head(2)
reference = pd.read_csv(f"./data/{dataset}/reference.tsv", sep='\t',header=None)
reference.columns = ['id', 'Strains', 2, 3, 4]
reference = pd.merge(reference, taxonomy, how='inner', on='Strains')
print(reference.shape)
reference.head(2)
reference = pd.merge(reference, contigs, how='inner', on='id')
print(reference.shape)
reference.head(2)
path = f"./data/{dataset}/gold_standard"
reference[['id',
'Strains', #bin_id
'Strains', #tax_id
'contig_length']].to_csv(path, index=None, sep='\t')
with open(path, 'r') as file:
file.readline()
text = file.read()
with open(path, 'w', encoding='utf8') as file:
file.write("@Version:0.9.1\n@SampleID:gsa\n\n@@SEQUENCEID\tBINID\tTAXID\t_LENGTH\n"+ text)
###Output
_____no_output_____
###Markdown
convert cluster.tsv to biobox
###Code
clusters = pd.read_csv(f'./results/{dataset}/clusters.tsv',sep='\t', header=None)
clusters.columns = ['medoid', 'contigs']
clusters = clusters[['contigs', 'medoid']]
clusters.head(2)
df = pd.merge(clusters, contigs, how='inner', left_on='contigs', right_on='id')
df = df.groupby(['medoid']).sum()['contig_length']
medoid = df[df > 200000].index
bins = clusters[clusters['medoid'].isin(medoid)]
path = f"./data/{dataset}/vamb"
bins[bins["medoid"]!= 0].to_csv(path, index=None, sep='\t',header=None)
with open(path, 'r') as file:
text = file.read()
with open(path, 'w', encoding='utf8') as file:
file.write("@Version:0.9.1\n@SampleID:gsa\n\n@@SEQUENCEID\tBINID\n" + text)
!amber.py -g ./data/{dataset}/gold_standard \
-o ./results/{dataset}/ \
./data/{dataset}/vamb
###Output
2021-12-12 20:34:55,670 INFO Found @Version:0.9.1
2021-12-12 20:34:55,670 INFO Found @SampleID:gsa
2021-12-12 20:34:55,671 INFO Found @Version:0.9.1
2021-12-12 20:34:55,794 INFO Found @SampleID:gsa
2021-12-12 20:34:55,794 INFO Loading gsa
2021-12-12 20:34:55,841 INFO Loading gsa
2021-12-12 20:34:56,072 INFO Loading Gold standard
2021-12-12 20:34:56,155 INFO Loading vamb
2021-12-12 20:34:56,246 INFO Creating output directories
2021-12-12 20:34:56,248 INFO Evaluating Gold standard (sample gsa, genome binning)
2021-12-12 20:34:57,289 INFO Evaluating vamb (sample gsa, genome binning)
2021-12-12 20:34:58,137 INFO Saving computed metrics
2021-12-12 20:34:58,235 INFO Creating genome binning plots
2021-12-12 20:35:09,729 INFO Creating HTML page
2021-12-12 20:35:11,221 INFO AMBER finished successfully. All results have been saved to /notebooks/modify_vamb/results/airways
###Markdown
Evaluation metrics* Detection metrics: precision, recall* Segmentation metric: MCC Loading annotation & creating fake "prediction" image
###Code
import os
%pylab inline
from skimage.io import imread
import numpy as np
# Load annotation:
directory = "D:/Adrien/dataset/GlaS"
idx = 12
test_anno = imread(f'{os.path.join(directory, "train")}/train_{idx}_anno.bmp')
plt.figure()
plt.imshow(test_anno)
plt.show()
###Output
_____no_output_____
###Markdown
Creating a fake "prediction" by removing some objects, adding some objects, and deforming everything a bit:
###Code
test_prediction = test_anno.copy()
# Remove some objects
to_remove = np.random.randint(1,test_anno.max()+1,size=(5,))
for idobj in to_remove:
test_prediction[test_anno==idobj] = 0
plt.figure()
plt.imshow(test_prediction)
plt.show()
# Add some objects
from skimage.morphology import disk
max_radius = 50
obj_params = np.random.random((3,3))
for param in obj_params:
obj = disk(int(param[0]*max_radius))
topleft = (int(param[1]*(test_anno.shape[0]-obj.shape[0])),int(param[2]*(test_anno.shape[1]-obj.shape[1])))
region = test_prediction[topleft[0]:topleft[0]+obj.shape[0], topleft[1]:topleft[1]+obj.shape[1]]
region[region==0] = obj[region==0]*(test_prediction.max()+1)
plt.figure()
plt.imshow(test_prediction)
plt.show()
# Random deformations
from skimage.morphology import opening,closing
for idobj in np.unique(test_prediction[test_prediction>0]):
obj = test_prediction==idobj
print(idobj, end='\r')
r = np.random.randint(-40, 40)
if r < 0:
obj = opening(obj, disk(-r))
else:
obj = closing(obj, disk(r))
test_prediction[obj] = idobj
plt.figure()
plt.imshow(test_prediction)
plt.show()
###Output
18
###Markdown
Implementing GlaS challenge detection metric(Adapted from Matlab code released by the challenge organizers: https://warwick.ac.uk/fac/cross_fac/tia/data/glascontest/evaluation/)Quoting the challenge website: "The ground truth for each segmented object is the object in the manual annotation that has maximum overlap with that segmented object.A segmented glandular object that intersects with at least 50% of its ground truth will be considered as true positive, otherwise it will be considered as false positive. A ground truth glandular object that has no corresponding segmented object or has less than 50% of its area overlapped by its corresponding segmented object will be considered as false negative."**To keep a bit more information, we make the function return the precision & recall rather than the F1-score.**
###Code
gt_labels = test_anno.copy()
pred_labels = test_prediction.copy()
# Get unique labels in prediction and ground truth:
trueLabels = np.unique(gt_labels)
trueLabels = trueLabels[trueLabels>0].astype('int')
predLabels = np.unique(pred_labels)
predLabels = predLabels[predLabels>0].astype('int')
print(trueLabels)
print(predLabels)
from scipy.stats import mode
# Find best matches for each segmented object:
best_matches = np.zeros((len(predLabels),3)) # predLabel, gtLabel, isValidMatch
best_matches[:,0] = predLabels
for i in range(len(predLabels)):
predObject = pred_labels==predLabels[i] # select predicted object
corrRegionInGT = gt_labels[predObject] # find region in gt image
if corrRegionInGT.max() > 0: # if it's only background, there's no match
bestMatch = mode(corrRegionInGT[corrRegionInGT>0])[0][0] # mode of the region = object with largest overlap
matchInGT = gt_labels==bestMatch # Select GT object
best_matches[i,1] = bestMatch
overlap = predObject*matchInGT # Select overlapping region
best_matches[i,2] = (overlap.sum()/matchInGT.sum())>0.5 # if #overlapping pixels > 50% GT object pixels : valid
print(best_matches)
# Let's visually check that it's ok:
check_image = np.zeros(gt_labels.shape+(3,))
for match in best_matches:
if( match[2]==1. ):
check_image[pred_labels==match[1]] = np.array([0,1.,0]) # green = TP
else:
check_image[pred_labels==match[0]] = np.array([1.,0,0]) # red = FP
plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
plt.imshow(gt_labels)
plt.subplot(1,2,2)
plt.imshow(check_image)
plt.show()
# Compute TP/FP/FN
TP = int(best_matches[:,2].sum())
FP = int((best_matches[:,2]==0).sum())
FN = int(len(trueLabels)-TP)
print(TP, FP, FN)
# Compute precision & recall
precision = TP/(TP+FP)
recall = TP/(TP+FN)
print(f'Precision:\t{precision:.3f}')
print(f'Recall:\t\t{recall:.3f}')
###Output
Precision: 0.846
Recall: 0.688
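###Markdown
As noted above, these steps can be returned as precision and recall from a single function. The sketch below simply repackages the cells just shown; it is not code from the original challenge evaluation.
###Code
import numpy as np
from scipy.stats import mode

def detection_precision_recall(gt_labels, pred_labels, overlap_threshold=0.5):
    # Object-level detection metric in the spirit of the GlaS challenge: a predicted
    # object is a true positive if it covers more than `overlap_threshold` of its
    # best-matching ground-truth object.
    true_ids = np.unique(gt_labels); true_ids = true_ids[true_ids > 0]
    pred_ids = np.unique(pred_labels); pred_ids = pred_ids[pred_ids > 0]
    tp = 0
    for pid in pred_ids:
        pred_obj = pred_labels == pid
        region = gt_labels[pred_obj]
        if region.max() > 0:
            best = mode(region[region > 0])[0][0]   # GT object with the largest overlap
            gt_obj = gt_labels == best
            if (pred_obj & gt_obj).sum() / gt_obj.sum() > overlap_threshold:
                tp += 1
    fp = len(pred_ids) - tp
    fn = len(true_ids) - tp
    return tp / (tp + fp), tp / (tp + fn)

# precision, recall = detection_precision_recall(test_anno, test_prediction)
###Output
_____no_output_____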
###Markdown
Implementing MCC segmentation metric
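The pixel-level confusion counts (TP, FP, FN, TN) of the binary masks are combined into $MCC = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}$, which ranges from $-1$ (complete disagreement) through $0$ (chance level) to $+1$ (perfect segmentation).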
###Code
# Compute segmentation masks
gt_mask = gt_labels>0
pred_mask = pred_labels>0
plt.figure()
plt.subplot(1,2,1)
plt.imshow(gt_mask)
plt.subplot(1,2,2)
plt.imshow(pred_mask)
plt.show()
# Pixel-level TP, FP, FN, TN
TP = float(((gt_mask==True)*(pred_mask==True)).sum())
FP = float(((gt_mask==False)*(pred_mask==True)).sum())
FN = float(((gt_mask==True)*(pred_mask==False)).sum())
TN = float(((gt_mask==False)*(pred_mask==False)).sum())
# MCC
MCC = ((TP*TN)-(FP*FN))/np.sqrt((TP+FP)*(TP+FN)*(TN+FP)*(TN+FN))
print(f'MCC:\t{MCC:.3f}')
###Output
MCC: 0.667
|
master/tutorial_1_imagens.ipynb | ###Markdown
Image processing using NumPy slicingAn introduction to how images are represented, read and displayed in Adessowiki can be found at: - `master:tutorial_img_ds Representação, Leitura e Visualização de Imagens no Adessowiki`.The NumPy slicing concept is one of the most important ones for image processing, both for its versatility and for its efficiency. This page gathers a set of image-processing examples that use almost exclusively slicing operations.To better understand how slicing works, see the didactic explanations of slicing:- `tutorial_numpy_1_2 Fatiamentos unidimensionais`- `tutorial_numpy_1_3 Fatiamentos bidimensionais` Overlaying a grid
###Code
f = adreadgray('beef.tif')
adshow(f, 'original')
f[::10,:] = 255 # horizontal lines
f[:,::10] = 255 # vertical lines
adshow(f, 'with grid overlay')
###Output
_____no_output_____
###Markdown
Overlaying a black frame on the image
###Code
f = adreadgray('leaf.tif')
adshow(f, 'original')
f[ :10, : ] = 0 # top border
f[-10: , : ] = 0 # bottom border
f[ : , :10] = 0 # left border
f[ : ,-10: ] = 0 # right border
adshow(f, 'with a 10-pixel-thick frame')
###Output
_____no_output_____
###Markdown
90-degree rotationA simple technique to rotate the matrix counterclockwise is to compute its transpose and then reflect it vertically:
###Code
f= adreadgray('cameraman.tif')[:,64:192]
adshow(f, 'original shape=%s' % (f.shape,))
g = f.transpose()
adshow(g, 'transposed shape=%s' % (g.shape,))
adshow(g[::-1,:], 'reflected vertically')
###Output
_____no_output_____
###Markdown
Subsampling f = adreadgray('cameraman.tif') adshow(f, 'shape=%s' % (f.shape,) ) g = f[::2,::2] adshow(g, 'shape=%s' % (g.shape,) ) Upsampling by replication import numpy as np f = adreadgray('gear.tif') adshow(f, 'original %s' % (f.shape,) ) H,W = f.shape g = np.zeros( (2*H,2*W), 'uint8') g[ ::2, ::2] = f g[1::2, ::2] = f g[1::2,1::2] = f g[ ::2,1::2] = f adshow(g, 'enlarged by replication %s' % (g.shape,) ) Separating interleaved even and odd fields
###Code
f = adreadgray('tvframe.pgm')
adshow(f, 'original with two fields')
g_even = np.zeros_like(f)
g_even[::2] = f[::2]
g_even[1::2] = f[::2]
adshow(g_even, 'even-line field')
g_odd = np.zeros_like(f)
g_odd[::2] = f[1::2]
g_odd[1::2] = f[1::2]
adshow(g_odd, 'odd-line field')
###Output
_____no_output_____
###Markdown
Combining two images: even lines from one and odd lines from the other
###Code
f1 = adreadgray('bloodcells.tif')
f2 = adreadgray('mribrain.tif')
adshow(f1, 'f1: bloodcells')
adshow(f2, 'f2: mribrain')
g = np.array(f1)
g[::2] = f2[::2]
adshow(g, 'odd lines from f1 and even lines from f2')
###Output
_____no_output_____
###Markdown
Montage with vertical and horizontal reflections
###Code
f = adreadgray('unilogo.tif')
adshow(f, 'original')
H,W = f.shape
g = np.zeros( (2*H,2*W), 'uint8')
g[:H,:W] = f # original in the top-left quadrant
g[H:,:W] = f[::-1,:] # vertical reflection in the bottom-left quadrant
g[:H,W:] = f[:,::-1] # horizontal reflection in the top-right quadrant
g[H:,W:] = f[::-1,::-1] # both reflections in the bottom-right quadrant
adshow(g, 'reflected copies')
###Output
_____no_output_____ |
pytorch/notebooks/Serving PyTorch Models with CMLE Custom Prediction Code.ipynb | ###Markdown
Run in Colab View on GitHub OverviewAI Platform Online Prediction now supports custom python code in to apply custom prediction routines, including custom (stateful) pre/post processing, and/or models not created by the standard supported frameworks (TensorFlow, Keras, Scikit-learn, XGBoost). DatasetWe use the [Iris dataset](https://archive.ics.uci.edu/ml/datasets/Iris) ObjectiveIn this notebook, we show how to deploy a model created by [PyTorch](https://pytorch.org/) using AI Platform Custom Prediction Code using Iris dataset for a multi-class classification problem. Costs This tutorial uses billable components of Google Cloud Platform (GCP):* Cloud AI Platform* Cloud StorageLearn about [Cloud AI Platformpricing](https://cloud.google.com/ml-engine/docs/pricing) and [Cloud Storagepricing](https://cloud.google.com/storage/pricing), and use the [PricingCalculator](https://cloud.google.com/products/calculator/)to generate a cost estimate based on your projected usage. Set up your local development environment**If you are using Colab or AI Platform Notebooks**, your environment already meetsall the requirements to run this notebook. You can skip this step. **Otherwise**, make sure your environment meets this notebook's requirements.You need the following:* The Google Cloud SDK* Git* Python 3* virtualenv* Jupyter notebook running in a virtual environment with Python 3The Google Cloud guide to [Setting up a Python developmentenvironment](https://cloud.google.com/python/setup) and the [Jupyterinstallation guide](https://jupyter.org/install) provide detailed instructionsfor meeting these requirements. The following steps provide a condensed set ofinstructions:1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)2. [Install Python 3.](https://cloud.google.com/python/setupinstalling_python)3. [Install virtualenv](https://cloud.google.com/python/setupinstalling_and_using_virtualenv) and create a virtual environment that uses Python 3.4. Activate that environment and run `pip install jupyter` in a shell to install Jupyter.5. Run `jupyter notebook` in a shell to launch Jupyter.6. Open this notebook in the Jupyter Notebook Dashboard. Set up your GCP project**The following steps are required, regardless of your notebook environment.**1. [Select or create a GCP project.](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)3. [Enable the AI Platform APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)4. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. Authenticate your GCP account**If you are using AI Platform Notebooks**, your environment is alreadyauthenticated. Skip this step. **If you are using Colab**, run the cell below and follow the instructionswhen prompted to authenticate your account via oAuth.**Otherwise**, follow these steps:1. In the GCP Console, go to the [**Create service account key** page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).2. From the **Service account** drop-down list, select **New service account**.3. 
In the **Service account name** field, enter a name.4. From the **Role** drop-down list, select **Machine Learning Engine > AI Platform Admin** and **Storage > Storage Object Admin**.5. Click *Create*. A JSON file that contains your key downloads to yourlocal environment.6. Enter the path to your service account key as the`GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
###Code
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
if 'google.colab' in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
PIP Install Packages and dependenciesBefore we start let's install pytorch and gcloud
###Code
!pip install torch --user
###Output
_____no_output_____
###Markdown
If you are running this notebook in Colab, run the following cell to authenticate your Google Cloud Platform user account
###Code
PROJECT = '' # TODO (Set to your GCP Project name)
BUCKET = '' # TODO (Set to your GCS Bucket name)
!gcloud config set project {PROJECT}
!gcloud config get-value project
###Output
_____no_output_____
###Markdown
3. Download iris dataIn this example, we want to build a classifier for the simple [iris dataset](https://archive.ics.uci.edu/ml/datasets/iris). So first, we download the data csv file locally.
###Code
!mkdir data
!mkdir models
LOCAL_DATA_DIR = "data/iris.csv"
from urllib.request import urlretrieve
urlretrieve("https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data", LOCAL_DATA_DIR)
###Output
_____no_output_____
###Markdown
Part 1: Build a PyTorch NN ClassifierMake sure that pytorch package is [installed](https://pytorch.org/get-started/locally/).
###Code
import torch
from torch.autograd import Variable
print('PyTorch Version: {}'.format(torch.__version__))
###Output
_____no_output_____
###Markdown
1. Load Data In this step, we are going to:1. Load the data to Pandas Dataframe.2. Convert the class feature (species) from string to a numeric indicator.3. Split the Dataframe into input feature (xtrain) and target feature (ytrain).
###Code
import pandas as pd
CLASS_VOCAB = ['setosa', 'versicolor', 'virginica']
datatrain = pd.read_csv(LOCAL_DATA_DIR, names=['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species'])
#change string value to numeric
datatrain.loc[datatrain['species']=='Iris-setosa', 'species']=0
datatrain.loc[datatrain['species']=='Iris-versicolor', 'species']=1
datatrain.loc[datatrain['species']=='Iris-virginica', 'species']=2
datatrain = datatrain.apply(pd.to_numeric)
#change dataframe to array
datatrain_array = datatrain.as_matrix()
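# Note: DataFrame.as_matrix() is deprecated in recent pandas releases; .to_numpy() is the modern equivalent.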
#split x and y (feature and target)
xtrain = datatrain_array[:,:4]
ytrain = datatrain_array[:,4]
input_features = xtrain.shape[1]
num_classes = len(CLASS_VOCAB)
print('Records loaded: {}'.format(len(xtrain)))
print('Number of input features: {}'.format(input_features))
print('Number of classes: {}'.format(num_classes))
###Output
_____no_output_____
###Markdown
2. Set model parametersYou can try different values for **HIDDEN_UNITS** or **LEARNING_RATE**.
###Code
HIDDEN_UNITS = 10
LEARNING_RATE = 0.1
###Output
_____no_output_____
###Markdown
3. Define the PyTorch NN modelHere, we build a a neural network with one hidden layer, and a Softmax output layer for classification.
###Code
model = torch.nn.Sequential(
torch.nn.Linear(input_features, HIDDEN_UNITS),
torch.nn.Sigmoid(),
torch.nn.Linear(HIDDEN_UNITS, num_classes),
torch.nn.Softmax()
)
loss_metric = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(),lr=LEARNING_RATE)
###Output
_____no_output_____
###Markdown
4. Train the modelWe are going to train the model for **NUM_EPOCHS** epochs.
###Code
NUM_EPOCHS = 10000
for epoch in range(NUM_EPOCHS):
x = Variable(torch.Tensor(xtrain).float())
y = Variable(torch.Tensor(ytrain).long())
optimizer.zero_grad()
y_pred = model(x)
loss = loss_metric(y_pred, y)
loss.backward()
optimizer.step()
if (epoch) % 1000 == 0:
print('Epoch [{}/{}] Loss: {}'.format(epoch+1, NUM_EPOCHS, round(loss.item(),3)))
print('Epoch [{}/{}] Loss: {}'.format(epoch+1, NUM_EPOCHS, round(loss.item(),3)))
###Output
_____no_output_____
###Markdown
5. Save and load the model
###Code
LOCAL_MODEL_DIR = "models/model.pt"
torch.save(model, LOCAL_MODEL_DIR)
iris_classifier = torch.load(LOCAL_MODEL_DIR)
###Output
_____no_output_____
###Markdown
6. Test the loaded model for predictions
###Code
def predict_class(instances):
instances = torch.Tensor(instances)
output = iris_classifier(instances)
_ , predicted = torch.max(output, 1)
return predicted
###Output
_____no_output_____
###Markdown
Get predictions for the first 5 instances in the dataset
###Code
predicted = predict_class(xtrain[0:5])
print([CLASS_VOCAB[class_index] for class_index in predicted])
###Output
_____no_output_____
###Markdown
Get the classification accuracy on the training data
###Code
import numpy as np
accuracy = round(sum(np.array(predict_class(xtrain)) == ytrain)/float(len(ytrain))*100,2)
print('Classification accuracy: {} %'.format(accuracy))
###Output
_____no_output_____
###Markdown
7. Upload trained model to Cloud Storage
###Code
GCS_MODEL_DIR='models/pytorch/iris_classifier/'
!gsutil -m cp -r {LOCAL_MODEL_DIR} gs://{BUCKET}/{GCS_MODEL_DIR}
!gsutil ls gs://{BUCKET}/{GCS_MODEL_DIR}
###Output
_____no_output_____
###Markdown
Part 2: Prepare the Custom Prediction Package1. Implement a model **custom class** for pre/post processing, as well as loading and using your model for prediction.2. Prepare yout **setup.py** file, to include all the modules and packages you need in your custome model class. 1. Create the custom model classIn the **from_path**, you load the pytorch model that you uploaded to GCS. Then in the **predict** method, you use it for prediction.
###Code
%%writefile model.py
import os
import pandas as pd
from google.cloud import storage
import torch
class PyTorchIrisClassifier(object):
def __init__(self, model):
self._model = model
self.class_vocab = ['setosa', 'versicolor', 'virginica']
@classmethod
def from_path(cls, model_dir):
model_file = os.path.join(model_dir,'model.pt')
model = torch.load(model_file)
return cls(model)
def predict(self, instances, **kwargs):
data = pd.DataFrame(instances).as_matrix()
inputs = torch.Tensor(data)
outputs = self._model(inputs)
_ , predicted = torch.max(outputs, 1)
return [self.class_vocab[class_index] for class_index in predicted]
###Output
_____no_output_____
###Markdown
2. Create a setup.py moduleDo not list **pytorch** as a required package: a compatible torch wheel is supplied separately when creating the model version below. The **model.py** file that contains your custom model class is included via `scripts`.
###Code
%%writefile setup.py
from setuptools import setup
REQUIRED_PACKAGES = []
setup(
name="iris-custom-model",
version="0.1",
scripts=["model.py"],
install_requires=REQUIRED_PACKAGES
)
###Output
_____no_output_____
###Markdown
3. Create the package This will create a .tar.gz package under /dist directory. The name of the package will be (name)-(version).tar.gz where (name) and (version) are the ones specified in the setup.py.
###Code
!python setup.py sdist
###Output
_____no_output_____
###Markdown
4. Upload the package to GCS
###Code
GCS_PACKAGE_URI='models/pytorch/packages/iris-custom-model-0.1.tar.gz'
!gsutil cp ./dist/iris-custom-model-0.1.tar.gz gs://{BUCKET}/{GCS_PACKAGE_URI}
!gsutil ls gs://{BUCKET}/{GCS_PACKAGE_URI}
###Output
_____no_output_____
###Markdown
Part 3: Deploy the Model to AI Platform for Online Predictions 1. Create AI Platform model
###Code
MODEL_NAME='torch_iris_classifier'
REGION = 'us-central1'
# You can uncomment to enable logging
!gcloud ai-platform models create {MODEL_NAME} --regions {REGION} #--enable-logging --enable-console-logging
!gcloud ai-platform models list | grep 'torch'
###Output
_____no_output_____
###Markdown
2. Create AI Platform model versionOnce you have your custom package ready, you can specify it as an argument when creating a version resource. Note that you need to provide the path to your package (as package-uris) and also the name of the class that contains your custom predict method (as prediction-class). PyTorch-compatible packages You need to use compiled packages compatible with Cloud AI Platform (package information here). This bucket contains compiled packages for PyTorch that are compatible with Cloud AI Platform prediction. The files are mirrored from the official builds at https://download.pytorch.org/whl/cpu/torch_stable.html. In order to deploy a PyTorch model on Cloud AI Platform Online Predictions, you must add one of these packages to the packageURIs field on the version you deploy. Pick the package matching your Python and PyTorch version. The package names follow this template: Package name = torch-{TORCH_VERSION_NUMBER}-{PYTHON_VERSION}-linux_x86_64.whl, where PYTHON_VERSION = cp35-cp35m for Python 3 with runtime versions = 1.15. Use cp27-cp27mu for Python 2. For example, if I were to deploy a PyTorch model based on PyTorch 1.1.0 and Python 3, my gcloud command would look like: gcloud beta ai-platform versions create {VERSION_NAME} --model {MODEL_NAME} \...--package-uris=gs://{MY_PACKAGE_BUCKET}/my_package-0.1.tar.gz,gs://cloud-ai-pytorch/torch-1.1.0-cp35-cp35m-linux_x86_64.whl
###Code
MODEL_VERSION='v3'
RUNTIME_VERSION='1.15'
MODEL_CLASS='model.PyTorchIrisClassifier'
!gcloud beta ai-platform versions create {MODEL_VERSION} --model={MODEL_NAME} \
--origin=gs://{BUCKET}/{GCS_MODEL_DIR} \
--python-version=3.7 \
--runtime-version={RUNTIME_VERSION} \
--machine-type=mls1-c4-m4 \
--package-uris=gs://{BUCKET}/{GCS_PACKAGE_URI},gs://cloud-ai-pytorch/torch-1.3.1+cpu-cp37-cp37m-linux_x86_64.whl \
--prediction-class={MODEL_CLASS}
!gcloud ai-platform versions list --model {MODEL_NAME}
###Output
_____no_output_____
###Markdown
Part 4: AI Platform Online Prediction
###Code
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1', credentials=credentials,
discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1_discovery.json')
def estimate(project, model_name, version, instances):
request_data = {'instances': instances}
model_url = 'projects/{}/models/{}/versions/{}'.format(project, model_name, version)
response = api.projects().predict(body=request_data, name=model_url).execute()
#print response
predictions = response["predictions"]
return predictions
instances = [
[6.8, 2.8, 4.8, 1.4],
[6. , 3.4, 4.5, 1.6]
]
predictions = estimate(instances=instances
,project=PROJECT
,model_name=MODEL_NAME
,version=MODEL_VERSION)
print(predictions)
###Output
_____no_output_____
###Markdown
Run in Colab View on GitHub OverviewAI Platform Online Prediction now supports custom python code in to apply custom prediction routines, including custom (stateful) pre/post processing, and/or models not created by the standard supported frameworks (TensorFlow, Keras, Scikit-learn, XGBoost). DatasetWe use the [Iris dataset](https://archive.ics.uci.edu/ml/datasets/Iris) ObjectiveIn this notebook, we show how to deploy a model created by [PyTorch](https://pytorch.org/) using AI Platform Custom Prediction Code using Iris dataset for a multi-class classification problem. Costs This tutorial uses billable components of Google Cloud Platform (GCP):* Cloud AI Platform* Cloud StorageLearn about [Cloud AI Platformpricing](https://cloud.google.com/ml-engine/docs/pricing) and [Cloud Storagepricing](https://cloud.google.com/storage/pricing), and use the [PricingCalculator](https://cloud.google.com/products/calculator/)to generate a cost estimate based on your projected usage. Set up your local development environment**If you are using Colab or AI Platform Notebooks**, your environment already meetsall the requirements to run this notebook. You can skip this step. **Otherwise**, make sure your environment meets this notebook's requirements.You need the following:* The Google Cloud SDK* Git* Python 3* virtualenv* Jupyter notebook running in a virtual environment with Python 3The Google Cloud guide to [Setting up a Python developmentenvironment](https://cloud.google.com/python/setup) and the [Jupyterinstallation guide](https://jupyter.org/install) provide detailed instructionsfor meeting these requirements. The following steps provide a condensed set ofinstructions:1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)2. [Install Python 3.](https://cloud.google.com/python/setupinstalling_python)3. [Install virtualenv](https://cloud.google.com/python/setupinstalling_and_using_virtualenv) and create a virtual environment that uses Python 3.4. Activate that environment and run `pip install jupyter` in a shell to install Jupyter.5. Run `jupyter notebook` in a shell to launch Jupyter.6. Open this notebook in the Jupyter Notebook Dashboard. Set up your GCP project**The following steps are required, regardless of your notebook environment.**1. [Select or create a GCP project.](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)3. [Enable the AI Platform APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)4. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. Authenticate your GCP account**If you are using AI Platform Notebooks**, your environment is alreadyauthenticated. Skip this step. **If you are using Colab**, run the cell below and follow the instructionswhen prompted to authenticate your account via oAuth.**Otherwise**, follow these steps:1. In the GCP Console, go to the [**Create service account key** page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).2. From the **Service account** drop-down list, select **New service account**.3. 
In the **Service account name** field, enter a name.4. From the **Role** drop-down list, select **Machine Learning Engine > AI Platform Admin** and **Storage > Storage Object Admin**.5. Click *Create*. A JSON file that contains your key downloads to yourlocal environment.6. Enter the path to your service account key as the`GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
###Code
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
if 'google.colab' in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
PIP Install Packages and dependenciesBefore we start let's install pytorch and gcloud
###Code
!pip install torch --user
###Output
_____no_output_____
###Markdown
Set your GCP project ID and Cloud Storage bucket name, then run the following cell to point the gcloud CLI at your project
###Code
PROJECT = '' # TODO (Set to your GCP Project name)
BUCKET = '' # TODO (Set to your GCS Bucket name)
!gcloud config set project {PROJECT}
!gcloud config get-value project
###Output
_____no_output_____
###Markdown
3. Download iris dataIn this example, we want to build a classifier for the simple [iris dataset](https://archive.ics.uci.edu/ml/datasets/iris). So first, we download the data csv file locally.
###Code
!mkdir data
!mkdir models
LOCAL_DATA_DIR = "data/iris.csv"
from urllib.request import urlretrieve
urlretrieve("https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data", LOCAL_DATA_DIR)
###Output
_____no_output_____
###Markdown
Part 1: Build a PyTorch NN ClassifierMake sure that pytorch package is [installed](https://pytorch.org/get-started/locally/).
###Code
import torch
from torch.autograd import Variable
print('PyTorch Version: {}'.format(torch.__version__))
###Output
_____no_output_____
###Markdown
1. Load Data In this step, we are going to:1. Load the data to Pandas Dataframe.2. Convert the class feature (species) from string to a numeric indicator.3. Split the Dataframe into input feature (xtrain) and target feature (ytrain).
###Code
import pandas as pd
CLASS_VOCAB = ['setosa', 'versicolor', 'virginica']
datatrain = pd.read_csv(LOCAL_DATA_DIR, names=['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species'])
#change string value to numeric
datatrain.loc[datatrain['species']=='Iris-setosa', 'species']=0
datatrain.loc[datatrain['species']=='Iris-versicolor', 'species']=1
datatrain.loc[datatrain['species']=='Iris-virginica', 'species']=2
datatrain = datatrain.apply(pd.to_numeric)
#change dataframe to array
datatrain_array = datatrain.as_matrix()
#split x and y (feature and target)
xtrain = datatrain_array[:,:4]
ytrain = datatrain_array[:,4]
input_features = xtrain.shape[1]
num_classes = len(CLASS_VOCAB)
print('Records loaded: {}'.format(len(xtrain)))
print('Number of input features: {}'.format(input_features))
print('Number of classes: {}'.format(num_classes))
###Output
_____no_output_____
###Markdown
2. Set model parametersYou can try different values for **HIDDEN_UNITS** or **LEARNING_RATE**.
###Code
HIDDEN_UNITS = 10
LEARNING_RATE = 0.1
###Output
_____no_output_____
###Markdown
3. Define the PyTorch NN modelHere, we build a neural network with one hidden layer and a Softmax output layer for classification.
###Code
model = torch.nn.Sequential(
torch.nn.Linear(input_features, HIDDEN_UNITS),
torch.nn.Sigmoid(),
torch.nn.Linear(HIDDEN_UNITS, num_classes),
torch.nn.Softmax()
)
loss_metric = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(),lr=LEARNING_RATE)
###Output
_____no_output_____
###Markdown
4. Train the modelWe are going to train the model for **NUM_EPOCHS** epochs.
###Code
NUM_EPOCHS = 10000
for epoch in range(NUM_EPOCHS):
x = Variable(torch.Tensor(xtrain).float())
y = Variable(torch.Tensor(ytrain).long())
optimizer.zero_grad()
y_pred = model(x)
loss = loss_metric(y_pred, y)
loss.backward()
optimizer.step()
if (epoch) % 1000 == 0:
print('Epoch [{}/{}] Loss: {}'.format(epoch+1, NUM_EPOCHS, round(loss.item(),3)))
print('Epoch [{}/{}] Loss: {}'.format(epoch+1, NUM_EPOCHS, round(loss.item(),3)))
###Output
_____no_output_____
###Markdown
5. Save and load the model
###Code
LOCAL_MODEL_DIR = "models/model.pt"
torch.save(model, LOCAL_MODEL_DIR)
iris_classifier = torch.load(LOCAL_MODEL_DIR)
###Output
_____no_output_____
###Markdown
6. Test the loaded model for predictions
###Code
def predict_class(instances):
instances = torch.Tensor(instances)
output = iris_classifier(instances)
_ , predicted = torch.max(output, 1)
return predicted
###Output
_____no_output_____
###Markdown
Get predictions for the first 5 instances in the dataset
###Code
predicted = predict_class(xtrain[0:5])
print([CLASS_VOCAB[class_index] for class_index in predicted])
###Output
_____no_output_____
###Markdown
Get the classification accuracy on the training data
###Code
import numpy as np
accuracy = round(sum(np.array(predict_class(xtrain)) == ytrain)/float(len(ytrain))*100,2)
print('Classification accuracy: {} %'.format(accuracy))
###Output
_____no_output_____
###Markdown
7. Upload trained model to Cloud Storage
###Code
GCS_MODEL_DIR='models/pytorch/iris_classifier/'
!gsutil -m cp -r {LOCAL_MODEL_DIR} gs://{BUCKET}/{GCS_MODEL_DIR}
!gsutil ls gs://{BUCKET}/{GCS_MODEL_DIR}
###Output
_____no_output_____
###Markdown
Part 2: Prepare the Custom Prediction Package1. Implement a model **custom class** for pre/post processing, as well as loading and using your model for prediction.2. Prepare your **setup.py** file, to include all the modules and packages you need in your custom model class. 1. Create the custom model classIn the **from_path** method, you load the pytorch model that you uploaded to GCS. Then in the **predict** method, you use it for prediction.
###Code
%%writefile model.py
import os
import pandas as pd
from google.cloud import storage
import torch
class PyTorchIrisClassifier(object):
def __init__(self, model):
self._model = model
self.class_vocab = ['setosa', 'versicolor', 'virginica']
@classmethod
def from_path(cls, model_dir):
model_file = os.path.join(model_dir,'model.pt')
model = torch.load(model_file)
return cls(model)
def predict(self, instances, **kwargs):
data = pd.DataFrame(instances).as_matrix()
inputs = torch.Tensor(data)
outputs = self._model(inputs)
_ , predicted = torch.max(outputs, 1)
return [self.class_vocab[class_index] for class_index in predicted]
###Output
_____no_output_____
###Markdown
2. Create a setup.py moduleDo not include **pytorch** as a required package; we will include it when creating the model version below. The **model.py** file that contains your custom model class is included via the `scripts` argument.
###Code
%%writefile setup.py
from setuptools import setup
REQUIRED_PACKAGES = []
setup(
name="iris-custom-model",
version="0.1",
scripts=["model.py"],
install_requires=REQUIRED_PACKAGES
)
###Output
_____no_output_____
###Markdown
3. Create the package This will create a .tar.gz package under /dist directory. The name of the package will be (name)-(version).tar.gz where (name) and (version) are the ones specified in the setup.py.
###Code
!python setup.py sdist
###Output
_____no_output_____
###Markdown
4. Upload the package to GCS
###Code
GCS_PACKAGE_URI='models/pytorch/packages/iris-custom-model-0.1.tar.gz'
!gsutil cp ./dist/iris-custom-model-0.1.tar.gz gs://{BUCKET}/{GCS_PACKAGE_URI}
!gsutil ls gs://{BUCKET}/{GCS_PACKAGE_URI}
###Output
_____no_output_____
###Markdown
Part 3: Deploy the Model to AI Platform for Online Predictions 1. Create AI Platform model
###Code
MODEL_NAME='torch_iris_classifier'
REGION = 'us-central1'
# You can uncomment to enable logging
!gcloud ai-platform models create {MODEL_NAME} --regions {REGION} #--enable-logging --enable-console-logging
!gcloud ai-platform models list | grep 'torch'
###Output
_____no_output_____
###Markdown
2. Create AI Platform model versionOnce you have your custom package ready, you can specify it as an argument when creating a version resource. Note that you need to provide the path to your package (as package-uris) and also the name of the class that contains your custom predict method (as prediction-class). PyTorch-compatible packages You need to use compiled packages compatible with Cloud AI Platform (package information here). This bucket contains compiled packages for PyTorch that are compatible with Cloud AI Platform prediction. The files are mirrored from the official builds at https://download.pytorch.org/whl/cpu/torch_stable.html. In order to deploy a PyTorch model on Cloud AI Platform Online Predictions, you must add one of these packages to the packageURIs field on the version you deploy. Pick the package matching your Python and PyTorch version. The package names follow this template: Package name = torch-{TORCH_VERSION_NUMBER}-{PYTHON_VERSION}-linux_x86_64.whl, where PYTHON_VERSION = cp35-cp35m for Python 3 with runtime versions = 1.15. Use cp27-cp27mu for Python 2. For example, if I were to deploy a PyTorch model based on PyTorch 1.1.0 and Python 3, my gcloud command would look like: gcloud beta ai-platform versions create {VERSION_NAME} --model {MODEL_NAME} \...--package-uris=gs://{MY_PACKAGE_BUCKET}/my_package-0.1.tar.gz,gs://cloud-ai-pytorch/torch-1.1.0-cp35-cp35m-linux_x86_64.whl
###Code
MODEL_VERSION='v3'
RUNTIME_VERSION='1.15'
MODEL_CLASS='model.PyTorchIrisClassifier'
!gcloud beta ai-platform versions create {MODEL_VERSION} --model={MODEL_NAME} \
--origin=gs://{BUCKET}/{GCS_MODEL_DIR} \
--python-version=3.7 \
--runtime-version={RUNTIME_VERSION} \
--machine-type=mls1-c4-m4 \
--package-uris=gs://{BUCKET}/{GCS_PACKAGE_URI},gs://cloud-ai-pytorch/torch-1.3.1+cpu-cp37-cp37m-linux_x86_64.whl \
--prediction-class={MODEL_CLASS}
!gcloud ai-platform versions list --model {MODEL_NAME}
###Output
_____no_output_____
###Markdown
Part 4: AI Platform Online Prediction
###Code
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1', credentials=credentials,
discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1_discovery.json')
def estimate(project, model_name, version, instances):
request_data = {'instances': instances}
model_url = 'projects/{}/models/{}/versions/{}'.format(project, model_name, version)
response = api.projects().predict(body=request_data, name=model_url).execute()
#print response
predictions = response["predictions"]
return predictions
instances = [
[6.8, 2.8, 4.8, 1.4],
[6. , 3.4, 4.5, 1.6]
]
predictions = estimate(instances=instances
,project=PROJECT
,model_name=MODEL_NAME
,version=MODEL_VERSION)
print(predictions)
###Output
_____no_output_____
###Markdown
Serving PyTorch Models with CMLE Custom Prediction CodeCloud ML Engine Online Prediction now supports custom python code in to apply custom prediction routines, including custom (stateful) pre/post processing, and/or models not created by the standard supported frameworks (TensorFlow, Keras, Scikit-learn, XGBoost).In this notebook, we show how to deploy a model created by [PyTorch](https://pytorch.org/) using CMLE Custom Prediction Code**Note**: You must be whitelisted to use the custom code feature. Please fill out [this google form](https://docs.google.com/forms/d/e/1FAIpQLSc6fxgXQIyA6BDLfCKOJPu5CyCuOB_M_rGTws0629od5mlznw/viewform) to get started. Setup 1. Preparing your GCP project* [Create a project on GCP](https://cloud.google.com/resource-manager/docs/creating-managing-projects)* [Create a Google Cloud Storage Bucket](https://cloud.google.com/storage/docs/quickstart-console)* [Enable Cloud Machine Learning Engine and Compute Engine APIs](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component&_ga=2.217405014.1312742076.1516128282-1417583630.1516128282) 2. Preparing your local environment* [Install Cloud SDK](https://cloud.google.com/sdk/downloads)Before we start let's install pytorch and gcloud
###Code
!pip install -U google-cloud
!pip install torch
###Output
_____no_output_____
###Markdown
If you are running this notebook in Colab, run the following cell to authenticate your Google Cloud Platform user account
###Code
from google.colab import auth
auth.authenticate_user()
###Output
_____no_output_____
###Markdown
Let's also define the project name, model name, the GCS bucket name that we'll refer to later. Replace ****, ****, and **** with your GCP project ID, your bucket name, and your region, respectively.
###Code
PROJECT='<YOUR_PROJECT_ID>'
BUCKET='<YOUR_BUCKET_NAME>'
REGION='<YOUR_REGION>'
!gcloud config set project {PROJECT}
!gcloud config get-value project
###Output
_____no_output_____
###Markdown
3. Download iris dataIn this example, we want to build a classifier for the simple [iris dataset](https://archive.ics.uci.edu/ml/datasets/iris). So first, we download the data csv file locally.
###Code
!mkdir data
!mkdir models
import urllib
LOCAL_DATA_DIR = "data/iris.csv"
url_opener = urllib.URLopener()
url_opener.retrieve("https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data", LOCAL_DATA_DIR)
###Output
_____no_output_____
###Markdown
Part 1: Build a PyTorch NN ClassifierMake sure that pytorch package is [installed](https://pytorch.org/get-started/locally/).
###Code
import torch
from torch.autograd import Variable
print 'PyTorch Version: {}'.format(torch.__version__)
###Output
_____no_output_____
###Markdown
1. Load Data In this step, we are going to:1. Load the data to Pandas Dataframe.2. Convert the class feature (species) from string to a numeric indicator.3. Split the Dataframe into input feature (xtrain) and target feature (ytrain).
###Code
import pandas as pd
CLASS_VOCAB = ['setosa', 'versicolor', 'virginica']
datatrain = pd.read_csv(LOCAL_DATA_DIR, names=['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species'])
#change string value to numeric
datatrain.loc[datatrain['species']=='Iris-setosa', 'species']=0
datatrain.loc[datatrain['species']=='Iris-versicolor', 'species']=1
datatrain.loc[datatrain['species']=='Iris-virginica', 'species']=2
datatrain = datatrain.apply(pd.to_numeric)
#change dataframe to array
datatrain_array = datatrain.as_matrix()
#split x and y (feature and target)
xtrain = datatrain_array[:,:4]
ytrain = datatrain_array[:,4]
input_features = xtrain.shape[1]
num_classes = len(CLASS_VOCAB)
print 'Records loaded: {}'.format(len(xtrain))
print 'Number of input features: {}'.format(input_features)
print 'Number of classes: {}'.format(num_classes)
###Output
_____no_output_____
###Markdown
2. Set model parametersYou can try different values for **hidden_units** or **learning_rate**.
###Code
hidden_units = 10
learning_rate = 0.1
###Output
_____no_output_____
###Markdown
3. Define the PyTorch NN modelHere, we build a neural network with one hidden layer and a Softmax output layer for classification.
###Code
model = torch.nn.Sequential(
torch.nn.Linear(input_features, hidden_units),
torch.nn.Sigmoid(),
torch.nn.Linear(hidden_units, num_classes),
torch.nn.Softmax()
)
loss_metric = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(),lr=learning_rate)
###Output
_____no_output_____
###Markdown
4. Train the modelWe are going to train the model for **num_epochs** epochs.
###Code
num_epochs = 10000
for epoch in range(num_epochs):
x = Variable(torch.Tensor(xtrain).float())
y = Variable(torch.Tensor(ytrain).long())
optimizer.zero_grad()
y_pred = model(x)
loss = loss_metric(y_pred, y)
loss.backward()
optimizer.step()
if (epoch) % 1000 == 0:
print 'Epoch [{}/{}] Loss: {}'.format(epoch+1, num_epochs, round(loss.item(),3))
print 'Epoch [{}/{}] Loss: {}'.format(epoch+1, num_epochs, round(loss.item(),3))
###Output
_____no_output_____
###Markdown
5. Save and load the model
###Code
LOCAL_MODEL_DIR = "models/model.pt"
torch.save(model, LOCAL_MODEL_DIR)
# delete the in-memory model after saving, then reload it from disk
del model
iris_classifier = torch.load(LOCAL_MODEL_DIR)
###Output
_____no_output_____
###Markdown
6. Test the loaded model for predictions
###Code
def predict_class(instances):
instances = torch.Tensor(instances)
output = iris_classifier(instances)
_ , predicted = torch.max(output, 1)
return predicted
###Output
_____no_output_____
###Markdown
Get predictions for the first 5 instances in the dataset
###Code
predicted = predict_class(xtrain[0:5])
print[CLASS_VOCAB[class_index] for class_index in predicted]
###Output
_____no_output_____
###Markdown
Get the classification accuracy on the training data
###Code
import numpy as np
accuracy = round(sum(np.array(predict_class(xtrain)) == ytrain)/float(len(ytrain))*100,2)
print 'Classification accuracy: {}%'.format(accuracy)
###Output
_____no_output_____
###Markdown
7. Upload trained model to Cloud Storage
###Code
GCS_MODEL_DIR='models/pytorch/iris_classifier/'
!gsutil -m cp -r {LOCAL_MODEL_DIR} gs://{BUCKET}/{GCS_MODEL_DIR}
!gsutil ls gs://{BUCKET}/{GCS_MODEL_DIR}
###Output
_____no_output_____
###Markdown
Part 2: Prepare the Custom Prediction Package1. Implement a model **custom class** for pre/post processing, as well as loading and using your model for prediction.2. Prepare your **setup.py** file, to include all the modules and packages you need in your custom model class. 1. Create the custom model classIn the **from_path** method, you load the pytorch model that you uploaded to GCS. Then in the **predict** method, you use it for prediction.
###Code
%%writefile model.py
import os
import pandas as pd
from google.cloud import storage
import torch
class PyTorchIrisClassifier(object):
def __init__(self, model):
self._model = model
self.class_vocab = ['setosa', 'versicolor', 'virginica']
@classmethod
def from_path(cls, model_dir):
model_file = os.path.join(model_dir,'model.pt')
model = torch.load(model_file)
return cls(model)
def predict(self, instances, **kwargs):
data = pd.DataFrame(instances).as_matrix()
inputs = torch.Tensor(data)
outputs = self._model(inputs)
_ , predicted = torch.max(outputs, 1)
return [self.class_vocab[class_index] for class_index in predicted]
###Output
_____no_output_____
###Markdown
2. Create a setup.py moduleInclude **pytorch** as a required package, as well as the **model.py** file that includes your custom model class.
###Code
%%writefile setup.py
from setuptools import setup
REQUIRED_PACKAGES = ['torch']
setup(
name="iris-custom-model",
version="0.1",
scripts=["model.py"],
install_requires=REQUIRED_PACKAGES
)
###Output
_____no_output_____
###Markdown
3. Create the package This will create a .tar.gz package under /dist directory. The name of the package will be (name)-(version).tar.gz where (name) and (version) are the ones specified in the setup.py.
###Code
!python setup.py sdist
###Output
_____no_output_____
###Markdown
4. Uploaded the package to GCS
###Code
GCS_PACKAGE_URI='models/pytorch/packages/iris-custom-model-0.1.tar.gz'
!gsutil cp ./dist/iris-custom-model-0.1.tar.gz gs://{BUCKET}/{GCS_PACKAGE_URI}
!gsutil ls gs://{BUCKET}/{GCS_PACKAGE_URI}
###Output
_____no_output_____
###Markdown
Part 3: Deploy the Model to CMLE for Online Predictions 1. Create CMLE model
###Code
MODEL_NAME='torch_iris_classifier'
!gcloud ml-engine models create {MODEL_NAME} --regions {REGION}
!echo ''
!gcloud ml-engine models list | grep 'torch'
###Output
_____no_output_____
###Markdown
2. Create CMLE model versionOnce you have your custom package ready, you can specify this as an argument when creating a version resource. Note that you need to provide the path to your package (as package-uris) and also the class name that contains your custom predict method (as model-class).
###Code
MODEL_VERSION='v1'
RUNTIME_VERSION='1.10'
MODEL_CLASS='model.PyTorchIrisClassifier'
!gcloud alpha ml-engine versions create {MODEL_VERSION} --model={MODEL_NAME} \
--origin=gs://{BUCKET}/{GCS_MODEL_DIR} \
--runtime-version={RUNTIME_VERSION} \
--framework='SCIKIT_LEARN' \
--python-version=2.7 \
--package-uris=gs://{BUCKET}/{GCS_PACKAGE_URI}\
--model-class={MODEL_CLASS}
!gcloud ml-engine versions list --model {MODEL_NAME}
###Output
_____no_output_____
###Markdown
Part 4: Cloud ML Engine Online Prediction
###Code
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1', credentials=credentials,
discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1_discovery.json')
def estimate(project, model_name, version, instances):
request_data = {'instances': instances}
model_url = 'projects/{}/models/{}/versions/{}'.format(project, model_name, version)
response = api.projects().predict(body=request_data, name=model_url).execute()
#print response
predictions = response["predictions"]
return predictions
instances = [
[6.8, 2.8, 4.8, 1.4],
[6. , 3.4, 4.5, 1.6]
]
predictions = estimate(instances=instances
,project=PROJECT
,model_name=MODEL_NAME
,version=MODEL_VERSION)
print(predictions)
###Output
_____no_output_____
###Markdown
Serving PyTorch Models with AI Platform Custom Prediction CodeAI Platform Online Prediction now supports custom python code in to apply custom prediction routines, including custom (stateful) pre/post processing, and/or models not created by the standard supported frameworks (TensorFlow, Keras, Scikit-learn, XGBoost).In this notebook, we show how to deploy a model created by [PyTorch](https://pytorch.org/) using AI Platform Custom Prediction Code**Note**: You must be whitelisted to use the custom code feature. Please fill out [this google form](https://docs.google.com/forms/d/e/1FAIpQLSc6fxgXQIyA6BDLfCKOJPu5CyCuOB_M_rGTws0629od5mlznw/viewform) to get started. Setup 1. Preparing your GCP project* [Create a project on GCP](https://cloud.google.com/resource-manager/docs/creating-managing-projects)* [Create a Google Cloud Storage Bucket](https://cloud.google.com/storage/docs/quickstart-console)* [Enable AI Platform Training and Prediction and Compute Engine APIs](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component&_ga=2.217405014.1312742076.1516128282-1417583630.1516128282) 2. Preparing your local environment* [Install Cloud SDK](https://cloud.google.com/sdk/downloads)Before we start let's install pytorch and gcloud
###Code
!pip install -U google-cloud
!pip install torch
###Output
_____no_output_____
###Markdown
If you are running this notebook in Colab, run the following cell to authenticate your Google Cloud Platform user account
###Code
from google.colab import auth
auth.authenticate_user()
###Output
_____no_output_____
###Markdown
Let's also define the project name, model name, the GCS bucket name that we'll refer to later. Replace ****, ****, and **** with your GCP project ID, your bucket name, and your region, respectively.
###Code
PROJECT='<YOUR_PROJECT_ID>'
BUCKET='<YOUR_BUCKET_NAME>'
REGION='<YOUR_REGION>'
!gcloud config set project {PROJECT}
!gcloud config get-value project
###Output
_____no_output_____
###Markdown
3. Download iris dataIn this example, we want to build a classifier for the simple [iris dataset](https://archive.ics.uci.edu/ml/datasets/iris). So first, we download the data csv file locally.
###Code
!mkdir data
!mkdir models
import urllib
LOCAL_DATA_DIR = "data/iris.csv"
url_opener = urllib.URLopener()
url_opener.retrieve("https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data", LOCAL_DATA_DIR)
###Output
_____no_output_____
###Markdown
Part 1: Build a PyTorch NN ClassifierMake sure that pytorch package is [installed](https://pytorch.org/get-started/locally/).
###Code
import torch
from torch.autograd import Variable
print 'PyTorch Version: {}'.format(torch.__version__)
###Output
_____no_output_____
###Markdown
1. Load Data In this step, we are going to:1. Load the data to Pandas Dataframe.2. Convert the class feature (species) from string to a numeric indicator.3. Split the Dataframe into input feature (xtrain) and target feature (ytrain).
###Code
import pandas as pd
CLASS_VOCAB = ['setosa', 'versicolor', 'virginica']
datatrain = pd.read_csv(LOCAL_DATA_DIR, names=['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species'])
#change string value to numeric
datatrain.loc[datatrain['species']=='Iris-setosa', 'species']=0
datatrain.loc[datatrain['species']=='Iris-versicolor', 'species']=1
datatrain.loc[datatrain['species']=='Iris-virginica', 'species']=2
datatrain = datatrain.apply(pd.to_numeric)
#change dataframe to array
datatrain_array = datatrain.as_matrix()
#split x and y (feature and target)
xtrain = datatrain_array[:,:4]
ytrain = datatrain_array[:,4]
input_features = xtrain.shape[1]
num_classes = len(CLASS_VOCAB)
print 'Records loaded: {}'.format(len(xtrain))
print 'Number of input features: {}'.format(input_features)
print 'Number of classes: {}'.format(num_classes)
###Output
_____no_output_____
###Markdown
2. Set model parametersYou can try different values for **hidden_units** or **learning_rate**.
###Code
hidden_units = 10
learning_rate = 0.1
###Output
_____no_output_____
###Markdown
3. Define the PyTorch NN modelHere, we build a neural network with one hidden layer and a Softmax output layer for classification.
###Code
model = torch.nn.Sequential(
torch.nn.Linear(input_features, hidden_units),
torch.nn.Sigmoid(),
torch.nn.Linear(hidden_units, num_classes),
torch.nn.Softmax()
)
loss_metric = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(),lr=learning_rate)
###Output
_____no_output_____
###Markdown
4. Train the modelWe are going to train the model for **num_epochs** epochs.
###Code
num_epochs = 10000
for epoch in range(num_epochs):
x = Variable(torch.Tensor(xtrain).float())
y = Variable(torch.Tensor(ytrain).long())
optimizer.zero_grad()
y_pred = model(x)
loss = loss_metric(y_pred, y)
loss.backward()
optimizer.step()
if (epoch) % 1000 == 0:
print 'Epoch [{}/{}] Loss: {}'.format(epoch+1, num_epochs, round(loss.item(),3))
print 'Epoch [{}/{}] Loss: {}'.format(epoch+1, num_epochs, round(loss.item(),3))
###Output
_____no_output_____
###Markdown
5. Save and load the model
###Code
LOCAL_MODEL_DIR = "models/model.pt"
torch.save(model, LOCAL_MODEL_DIR)
# delete the in-memory model after saving, then reload it from disk
del model
iris_classifier = torch.load(LOCAL_MODEL_DIR)
###Output
_____no_output_____
###Markdown
6. Test the loaded model for predictions
###Code
def predict_class(instances):
instances = torch.Tensor(instances)
output = iris_classifier(instances)
_ , predicted = torch.max(output, 1)
return predicted
###Output
_____no_output_____
###Markdown
Get predictions for the first 5 instances in the dataset
###Code
predicted = predict_class(xtrain[0:5])
print[CLASS_VOCAB[class_index] for class_index in predicted]
###Output
_____no_output_____
###Markdown
Get the classification accuracy on the training data
###Code
import numpy as np
accuracy = round(sum(np.array(predict_class(xtrain)) == ytrain)/float(len(ytrain))*100,2)
print 'Classification accuracy: {}%'.format(accuracy)
###Output
_____no_output_____
###Markdown
7. Upload trained model to Cloud Storage
###Code
GCS_MODEL_DIR='models/pytorch/iris_classifier/'
!gsutil -m cp -r {LOCAL_MODEL_DIR} gs://{BUCKET}/{GCS_MODEL_DIR}
!gsutil ls gs://{BUCKET}/{GCS_MODEL_DIR}
###Output
_____no_output_____
###Markdown
Part 2: Prepare the Custom Prediction Package1. Implement a model **custom class** for pre/post processing, as well as loading and using your model for prediction.2. Prepare your **setup.py** file, to include all the modules and packages you need in your custom model class. 1. Create the custom model classIn the **from_path** method, you load the pytorch model that you uploaded to GCS. Then in the **predict** method, you use it for prediction.
###Code
%%writefile model.py
import os
import pandas as pd
from google.cloud import storage
import torch
class PyTorchIrisClassifier(object):
def __init__(self, model):
self._model = model
self.class_vocab = ['setosa', 'versicolor', 'virginica']
@classmethod
def from_path(cls, model_dir):
model_file = os.path.join(model_dir,'model.pt')
model = torch.load(model_file)
return cls(model)
def predict(self, instances, **kwargs):
data = pd.DataFrame(instances).as_matrix()
inputs = torch.Tensor(data)
outputs = self._model(inputs)
_ , predicted = torch.max(outputs, 1)
return [self.class_vocab[class_index] for class_index in predicted]
###Output
_____no_output_____
###Markdown
2. Create a setup.py moduleInclude **pytorch** as a required package, as well as the **model.py** file that includes your custom model class.
###Code
%%writefile setup.py
from setuptools import setup
REQUIRED_PACKAGES = ['torch']
setup(
name="iris-custom-model",
version="0.1",
scripts=["model.py"],
install_requires=REQUIRED_PACKAGES
)
###Output
_____no_output_____
###Markdown
3. Create the package This will create a .tar.gz package under /dist directory. The name of the package will be (name)-(version).tar.gz where (name) and (version) are the ones specified in the setup.py.
###Code
!python setup.py sdist
###Output
_____no_output_____
###Markdown
4. Uploaded the package to GCS
###Code
GCS_PACKAGE_URI='models/pytorch/packages/iris-custom-model-0.1.tar.gz'
!gsutil cp ./dist/iris-custom-model-0.1.tar.gz gs://{BUCKET}/{GCS_PACKAGE_URI}
!gsutil ls gs://{BUCKET}/{GCS_PACKAGE_URI}
###Output
_____no_output_____
###Markdown
Part 3: Deploy the Model to AI Platform for Online Predictions 1. Create AI Platform model
###Code
MODEL_NAME='torch_iris_classifier'
!gcloud ml-engine models create {MODEL_NAME} --regions {REGION}
!echo ''
!gcloud ml-engine models list | grep 'torch'
###Output
_____no_output_____
###Markdown
2. Create AI Platform model versionOnce you have your custom package ready, you can specify this as an argument when creating a version resource. Note that you need to provide the path to your package (as package-uris) and also the class name that contains your custom predict method (as model-class).
###Code
MODEL_VERSION='v1'
RUNTIME_VERSION='1.10'
MODEL_CLASS='model.PyTorchIrisClassifier'
!gcloud alpha ml-engine versions create {MODEL_VERSION} --model={MODEL_NAME} \
--origin=gs://{BUCKET}/{GCS_MODEL_DIR} \
--runtime-version={RUNTIME_VERSION} \
--framework='SCIKIT_LEARN' \
--python-version=2.7 \
--package-uris=gs://{BUCKET}/{GCS_PACKAGE_URI}\
--model-class={MODEL_CLASS}
!gcloud ml-engine versions list --model {MODEL_NAME}
###Output
_____no_output_____
###Markdown
Part 4: AI Platform Online Prediction
###Code
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1', credentials=credentials,
discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1_discovery.json')
def estimate(project, model_name, version, instances):
request_data = {'instances': instances}
model_url = 'projects/{}/models/{}/versions/{}'.format(project, model_name, version)
response = api.projects().predict(body=request_data, name=model_url).execute()
#print response
predictions = response["predictions"]
return predictions
instances = [
[6.8, 2.8, 4.8, 1.4],
[6. , 3.4, 4.5, 1.6]
]
predictions = estimate(instances=instances
,project=PROJECT
,model_name=MODEL_NAME
,version=MODEL_VERSION)
print(predictions)
###Output
_____no_output_____ |
first_ml_model_template.ipynb | ###Markdown
My First Machine Learning ModelThis template will help you create your first machine learning model in 5 minutes. 0. SetupWe provide the initial setup of the notebook. In this section we import the necessary libraries so you can build your model.
###Code
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
###Output
_____no_output_____
###Markdown
1. Load the dataThe first step is to load the necessary data. Use the command read_csv from pandas library to load the Iris dataset. After loading the data into a dataframe, show the top of the dataset. The dataset file URL is https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data.
###Code
# load the data
###Output
_____no_output_____
###Markdown
2. Explore and visualize the data 3. Preprocess the data 4. Select an algorithm and train the model 5. Save the model for later use
###Code
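# a rough commented sketch of steps 2-5 (not part of the original template; assumes
# scikit-learn is available and that the data was loaded into a dataframe called `iris`):
# from sklearn.model_selection import train_test_split
# from sklearn.linear_model import LogisticRegression
# import joblib
# X = iris.drop(columns=["species"])
# y = iris["species"]
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# model = LogisticRegression(max_iter=200).fit(X_train, y_train)
# print(model.score(X_test, y_test))      # quick accuracy check on held-out data
# joblib.dump(model, "iris_model.joblib") # save the model for later use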
###Output
_____no_output_____ |
examples/notebooks/01_base_pipeline.ipynb | ###Markdown
Baseline HuggingFace PipelineAs an initial baseline, we should see how capable the text summarization Seq2Seq models in HuggingFace are. Let's load sentences from Framenet and use a pre-trained `t5` model for summarization. We will then compare the model's summaries with the frame definitions associated with each sentence.
###Code
import json
import nltk
from transformers import pipeline
from nltk.corpus.reader import framenet
!ls ../
def load_fundraising_example():
with open("../data/a_guide_to_seed_fundraising.json", "r") as f:
sample = json.load(f)
return sample
datapath = "/home/ygx/dat/fndata-1.7/"
fn = framenet.FramenetCorpusReader(datapath, fileids=None)
###Output
_____no_output_____
###Markdown
Framenet SentencesFramenet contains sentences along with their associated frames. These sentence lengths would be on the order of the length of search queries a user would make on a natural language search engine. We can use this to compare our summarization models.
###Code
sentences = fn.sents()
# Sample sentence and associated frame
for idx, sent in enumerate(sentences):
print(f"\nSentence {idx}:\n\n\t{sent.text}")
print(f"\nFrame:\n\n\t{sent.frame.name}")
print(f"\nFrame definition:\n\n\t{sent.frame.definition}")
if idx == 0:
break
# Simplest and most opaque interface for summarization
summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base")
sample_sentences = [sentences[idx].text for idx in range(10)]
sample_frames = [sentences[idx].frame.name for idx in range(10)]
sample_summaries = []
for sample in sample_sentences:
summary = summarizer(sample, min_length=5, max_length=20)
sample_summaries.append(summary)
sample_summaries
sample_frames
###Output
_____no_output_____
###Markdown
Fundraising ExampleUsing the same `t5` model, we can test the summary on the fundraiser sample data.
###Code
sample = load_fundraising_example()
summary = summarizer(sample["text"], min_length=5, max_length=20)
summary
sample["text"]
###Output
_____no_output_____ |
STEM/Klimaat/0400_Morteratschgletsjer.ipynb | ###Markdown
MELTING GLACIERS: THE MORTERATSCH GLACIER In this notebook you visualize data on how the size of a glacier evolves: you make a scatter plot of the data from a csv file and you look at the retreat of the glacier. Since the industrial revolution, the concentration of greenhouse gases in the atmosphere has steadily increased. Since 1880 the average global temperature has risen by about 0.85 °C. This warming goes hand in hand with a warming of the oceans, a sea-level rise of 20 cm, more frequent extreme weather events and a 40 % decrease of the Arctic sea ice. Glacier ice is melting as well, almost everywhere in the world. The meltwater coming from mountain glaciers will to a large extent determine how much the sea level will rise in the future. Possible scenarios mention a rise of up to 30 cm due to the melting of the mountain glaciers. Moreover, glaciers have an impact on local water supplies and are important for tourism. The rate at which the volume of a glacier decreases under the influence of the global temperature rise differs from glacier to glacier. Local factors play a role here: e.g. the orientation of the glacier, the extent to which the glacier lies in the shade ... [1]. The 6 km long Morteratsch glacier is located in Switzerland and lies in the shade of the surrounding peaks for a large part of the year [1]. Photo: Morteratsch 2018, © Lander Van Tricht. The photo clearly shows how wide the glacier used to extend. Lander Van Tricht (VUB) provided us with data from his research on how the Morteratsch glacier evolves. Since 1880 it has been recorded how many metres the glacier retreats each year [2]. Assignment- The goal is to draw a scatter plot that shows the retreat of the Morteratsch glacier as a function of the year.- Choose the appropriate variables and create the necessary lists.- Then make a line chart of the total retreat.- Finally, represent the evolution of the length in a graph. Importing the required modules
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
1. Reading in the data
###Code
morteratsch = pd.read_csv("data/morteratsch.csv")
morteratsch.head()
morteratsch.tail()
###Output
_____no_output_____
###Markdown
Choose suitable variables and create the necessary lists.
###Code
# table with 131 rows and 3 columns
# the first column contains the name of the glacier
# the second column corresponds to the year of the measurement, the third to the retreat of the glacier in metres
# you work with the second and third columns of the table
...
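# a possible solution, kept as comments so the exercise stays open; it uses the column
# positions described above (column 1 = year of measurement, column 2 = retreat in metres):
# x = morteratsch.iloc[:, 1].values   # year of each measurement
# y = morteratsch.iloc[:, 2].values   # retreat of the glacier in metres for that year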
###Output
_____no_output_____
###Markdown
2. Displaying the data in a scatter plot Show the retreat (in metres) as a function of the year. Use a scatter plot. Enter the appropriate code and run it. 3. Assignment: total retreat Operations that you can perform with NumPy arrays can be found in the notebook 'Enkele bewerkingen bij NumPy-lijsten'. - How much did the glacier retreat in the year 1900? Answer: - Use a Python script to calculate by how many metres the Morteratsch glacier has already decreased in length since 1880. Answer: - Use a Python script to calculate by how many metres the Morteratsch glacier has already decreased in length since 2000 (the year 2000 included). Answer: - Make a graph of the total retreat of the Morteratsch glacier. For every year you should be able to read off on the y-axis how much the glacier has already retreated since 1880. Complete the code below and run it.
###Code
# create a new NumPy array z that is as long as the NumPy array y but contains only zeros
# then fill z with the cumulative values of the retreat
z = np.zeros(len(y))
print(z)
z[0] = y[0]
for i in range(1, len(z)):
z[i] = ...
print(z)
# graph
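# a possible solution sketch, kept as comments (assumes x holds the years and z the cumulative retreat computed above):
# plt.figure()
# plt.plot(x, z)
# plt.xlabel("year")
# plt.ylabel("total retreat since 1880 (m)")
# plt.show()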
###Output
_____no_output_____ |
numpy-cheat-sheet.ipynb | ###Markdown
NumpyThe goal of this notebook is to collect all frequently (or not that frequently) used capabilities of *numpy*. Arrays and indexing Creation From python list One can create a numpy array from usual python list
###Code
import numpy as np

arr = [1,2,3,4,5]
np.array(arr) # now it's a numpy array
###Output
_____no_output_____
###Markdown
Same works for matrices
###Code
matrix = [[1,2,3], [4,5,6], [7,8,9]]
np.array(matrix) # now it's a two-dimensional numpy array
###Output
_____no_output_____
###Markdown
And even for higher dimensions
###Code
tensor = [
[[1,2,3], [4,5,6], [7,8,9]],
[[10,11,12], [13,14,15], [16,17,18]]
]
np.array(tensor) # now it's a three-dimensional numpy array
###Output
_____no_output_____
###Markdown
Data types It's possible to specify the type of array's elements via *dtype* argument
###Code
arr = [1,2,3,4,5]
np.array(arr, dtype=int)
arr = [1,2,3,4,5]
np.array(arr, dtype=float)
arr = [1,2,3,4,5]
np.array(arr, dtype=complex)
arr = [0,1,2,3,4,5]
np.array(arr, dtype=bool)
###Output
_____no_output_____
###Markdown
If the type is not given, it will be determined as the minimum type required to hold the objects in the sequence.
###Code
arr = [1,2,3,4,5,6.]
np.array(arr)
print(np.array(arr).dtype)
###Output
float64
###Markdown
Minimum number of dimensions One can specify minimum number of dimensions with optional *ndmin* argument. Ones will be pre-pended to the shape as needed to meet this requirement.
###Code
np.array([1,2,3], ndmin=2)
###Output
_____no_output_____
###Markdown
Numpy methods arange *np.arange* generates an evenly spaced sequence of numbers within a given interval
###Code
np.arange(0, 10)
###Output
_____no_output_____
###Markdown
Start can be omitted
###Code
np.arange(10)
###Output
_____no_output_____
###Markdown
One may specify a step and/or a dtype
###Code
np.arange(0,10,2)
np.arange(0, 11, 2)
np.arange(0, 10, 2.5, dtype='complex')
###Output
_____no_output_____
###Markdown
linspace *np.linspace(start, end, n)* returns $n$ evenly spaced points between $start$ and $end$. Might be a good idea to use when you know number of points beforehead. Might be easier than specifiying the step for *np.arange*
###Code
np.linspace(0, 5, num=10)
###Output
_____no_output_____
###Markdown
Endpoint can be excluded
###Code
np.linspace(0, 5, num=10, endpoint=False)
###Output
_____no_output_____
###Markdown
*retstep* parameter indicates if the step should be returned
###Code
np.linspace(0, 5, num=10, endpoint=False, retstep=True)
###Output
_____no_output_____
###Markdown
zeros *np.zeros* generates a vector/matrix/tensor filled with zeros
###Code
np.zeros(5)
np.zeros((5,5))
np.zeros((2,5,5), dtype=int)
###Output
_____no_output_____
###Markdown
It's possible to specify how to store the matrix in memory: either row-wise (C-style) or column-wise (Fortran-style)
###Code
np.zeros((2,2), order='C')
np.zeros((2,2), order='F')
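# the memory layout can be verified via the array's flags (C_CONTIGUOUS vs F_CONTIGUOUS)
np.zeros((2,2), order='F').flags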
###Output
_____no_output_____
###Markdown
ones *np.ones* generates a vector/matrix/tensor filled with ones
###Code
np.ones(5)
np.ones((2,3))
np.ones((2,3,4), dtype=int)
###Output
_____no_output_____
###Markdown
eye Creates an identity matrix
###Code
np.eye(4)
np.eye(4,3)
np.eye(4,5)
###Output
_____no_output_____
###Markdown
It's possible to "shift" the main diagonal with *k* parameter. Positive *k* shifts the diagonal to the top, negative -- to the bottom.
###Code
np.eye(4, k=1)
np.eye(4, k=-1)
# Not very elegant way to construct a matrix containing ones only
result = np.zeros((4,4))
for k in range(-4, 5):
result += np.eye(4, k=k)
result
###Output
_____no_output_____
###Markdown
*np.identity* method does almost the same as *np.eye* but it's not that flexible
###Code
np.identity(5)
###Output
_____no_output_____
###Markdown
Random seed One should fix a seed to obtain a reproducible result
###Code
np.random.seed(42)
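# re-seeding with the same value makes subsequent draws reproducible, e.g.
# np.random.seed(42); np.random.rand(3) returns the same array every time it is re-run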
###Output
_____no_output_____
###Markdown
rand Generates a random array of given shape. Numbers are uniformly distributed on $[0,1]$ interval. Just get a random number between $0$ and $1$:
###Code
np.random.rand()
###Output
_____no_output_____
###Markdown
###Code
np.random.rand(5)
###Output
_____no_output_____
###Markdown
You should pass shape **not** as a tuple, but as positional arguments
###Code
np.random.rand(2, 3)
np.random.rand(2,2,2)
###Output
_____no_output_____
###Markdown
Do this if you want to get a random number uniformly distributed on $[a, b]$, $length = b - a$
###Code
length = 5
a = 2
# random number on [2, 7]
length * np.random.rand() + a
length = 2
a = -1
# random array on [-1, 1]
length * np.random.rand(5) + a
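# equivalently, numpy provides a direct helper for a uniform distribution on [a, b)
np.random.uniform(-1, 1, 5)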
###Output
_____no_output_____
###Markdown
randn Use *np.random.randn* if you want to generate random numbers with Gaussian (standard normal) distribution $\mathcal{N}(0, 1)$
###Code
np.random.randn()
np.random.randn(2,3)
###Output
_____no_output_____
###Markdown
To generate numbers from $\mathcal{N}(\mu, \sigma^2)$ use this
###Code
mu = 3
sigma = 2
# random number from Gaussian distribution with mean=3 and variance=2
sigma * np.random.randn() + mu
mu = 3
sigma = 2
# random array from Gaussian distribution with mean=3 and variance=2
sigma * np.random.randn(5) + mu
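# equivalently, using numpy's direct helper for a Gaussian with given mean and standard deviation
np.random.normal(mu, sigma, 5)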
###Output
_____no_output_____
###Markdown
randint *np.random.randint(low, high, size)* generates *size* random integers drawn uniformly from *low* (inclusive) to *high* (exclusive)
###Code
np.random.randint(1, 100, 10)
###Output
_____no_output_____ |
guide/06-working-with-big-data/making-your-data-accessible-to-the-gis.ipynb | ###Markdown
Making your data accessible to the GISBig data is popularly characterized by 4 v's - - high **volume**: large quantity of data that cannot be analyzed in a traditional manner using the memory available on a single machine, - high **velocity**: data that is not just static but can also arrive from streaming sources, - large **variety**: formats that are tabular, non tabular, spatial, non spatial from a variety of sources - unknown **veracity**: data that is not pre-processed or screened and of unknown quality. Big data file sharesGiven the enormity and uncertainty in such kinds of data, the GeoAnalytics server allows you to register your big datasets in a format called a big data file share. Big data file shares can reference data in the following data sources - file share - a directory of datasets - HDFS - a Hadoop Distributed File System directory of datasets - Hive - metastore databasesStoring your data in a big data file share datastore has the following benefits - the GeoAnalytics tools read your data only when they are executed. This allows you to keep updating or adding new data to these locations. - you can partition your data, say using file system folders, yet treat them as a single dataset - big data file shares are flexible in how time and geometry are defined. This allows you to have data in multiple formats even in a single dataset. Preparing your dataTo register a file share or an HDFS, you need to format your datasets as sub folders within a single parent folder and register that folder. This parent folder you register becomes a `datastore` and each of the sub folders becomes a `dataset`. For instance, to register 2 datastores representing earthquakes and hurricanes, your folder hierarchy would look like below:```|---FileShareFolder |---Earthquakes <-- register as a datastore |---1960 <-- dataset 1 |---01_1960.csv |---02_1960.csv |---1961 <-- dataset 2 |---01_1961.csv |---02_1961.csv |---Hurricanes <-- register as a datastore |---atlantic_hur.shp |---pacific_hur.shp```To learn more about preparing your data for use with the GeoAnalytics server, refer to this [server documentation](http://server.arcgis.com/en/server/latest/get-started/windows/what-is-a-big-data-file-share.htm). Searching for big data file sharesThe `get_datastores()` method of the `geoanalytics` module returns a `DatastoreManager` object that lets you search for and manage `Datastore` objects on your GeoAnalytics server.
###Code
# Connect to enterprise GIS
from arcgis.gis import GIS
import arcgis.geoanalytics
portal_gis = GIS("portal url", "username", "password")
bigdata_datastore_manager = arcgis.geoanalytics.get_datastores()
bigdata_datastore_manager
###Output
_____no_output_____
###Markdown
Use the `search()` method on a `DatastoreManager` object to search for `Datastore`s
###Code
bigdata_fileshares = bigdata_datastore_manager.search()
bigdata_fileshares
###Output
_____no_output_____
###Markdown
Get datasets from a big data file share datastoreUse the `datasets` property on a `Datastore` object to get a dictionary representation of the datasets.
###Code
Chicago_accidents = bigdata_fileshares[0]
len(Chicago_accidents.datasets)
# let us view the first dataset for a sample
Chicago_accidents.datasets[0]
###Output
_____no_output_____
###Markdown
Registering big data file sharesYou can register your data as a big data file share using the `add_bigdata()` method on a `DatastoreManager` object. Ensure the datasets are stored in a format compatible with the GeoAnalytics server as seen earlier in this guide.
###Code
NYC_data_item = bigdata_datastore_manager.add_bigdata("NYCdata2",
r"\\teton\atma_shared\datasets\NYC_taxi")
NYC_data_item
###Output
_____no_output_____
###Markdown
Once a big data file share is created, the GeoAnalytics server processes all the valid file types to discern the schema of the data. This process can take a few minutes depending on the size of your data. Once processed, querying the `manifest` property returns the schema.
###Code
NYC_data_item.manifest
###Output
_____no_output_____ |
docs/walkthrough/working-with-a-bucket.ipynb | ###Markdown
Quilt allows you to create, read, and write packages both on your local filesystem and on S3 buckets configured to work with Quilt3. For convenience, we provide a simple API for working with S3 buckets that serves as an alternative to [boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html). Connecting to a bucketTo connect to an S3 `Bucket`:
###Code
import quilt3
b = quilt3.Bucket("s3://quilt-example")
###Output
_____no_output_____
###Markdown
This requires that the bucket is configured to work with Quilt 3. Unless this bucket is public, you will also first need to log into the catalog that controls this bucket:```python only need to run this once ie quilt3.config('https://your-catalog-homepage/')quilt3.config('https://open.quiltdata.com/') follow the instructions to finish loginquilt3.login()``` Introspecting a bucketTo see the contents of a `Bucket`, use `keys`:
###Code
# returns a list of objects in the bucket
b.keys()
###Output
_____no_output_____
###Markdown
Reading from a bucketTo download a file or folder from a bucket use `fetch`:
###Code
# b.fetch("path/to/directory", "path/to/local")
b.fetch("aleksey/hurdat/", "./aleksey/")
b.fetch("README.md", "./read.md")
###Output
100%|██████████| 4.07M/4.07M [00:13<00:00, 304kB/s]
100%|██████████| 1.55k/1.55k [00:01<00:00, 972B/s]
###Markdown
Writing to a bucketYou can write data to a bucket. ```python put a file to a bucketb.put_file("read.md", "./read.md") or put everything in a directory at onceb.put_dir("stuff", "./aleksey")``` Note that `set` operations on a `Package` are `put` operations on a `Bucket`. Deleting objects in a bucket ```python always be careful when deleting delete a fleb.delete("read.md") delete a directoryb.delete_dir("stuff/")``` Searching in a bucketYou can search for individual objects using `search`.Note that this feature is currently only supported for buckets backed by a Quilt catalog instance. Before performing a search you must first configure a connection to that instance using `quilt3.config`.
###Code
# for example
quilt3.config(navigator_url="https://open.quiltdata.com")
###Output
_____no_output_____
###Markdown
Quilt supports unstructured search:
###Code
# returns all files containing the word "thor"
b.search("thor")
###Output
_____no_output_____
###Markdown
As well as structured search on metadata (note that this feature is experimental):
###Code
# returns all files annotated {'name': 'thor'}
b.search("user_meta.name:'thor'")
###Output
_____no_output_____ |
TeamSafetyRecommenders/DEMO.ipynb | ###Markdown
Demonstration:
Here we will demonstrate our model, study its predictions, and discuss how to further improve it.
###Code
import os
import requests
import datetime
import pickle
import pandas as pd
import numpy as np
from sklearn.metrics import classification_report
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import Normalizer
from sklearn.preprocessing import LabelEncoder
import ipywidgets as widgets
from sklearn import preprocessing
from sklearn.preprocessing import StandardScaler
#Panda settings
#Pandas will not display all columns in our data when using the head() function without this
pd.set_option('max_columns',50)
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
###Output
_____no_output_____
###Markdown
Building a Glossary
This is a very important step. When making individual predictions, we need to encode and scale our data before running the chosen model. If we encode data from a new instance independently, the model will still predict without errors, but the encoding will not match the encoding that was applied to the dataframe fed to the model. To deal with this we need to map every single raw value to its corresponding encoded value by building a glossary:
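As a minimal sketch of the idea (my own illustration, not the notebook's code; the notebook builds the same mapping with its `decoder()` function below), a glossary is just a lookup table from raw values to the codes assigned at training time:

```python
# Minimal sketch, assuming a LabelEncoder-style encoding was used originally.
from sklearn.preprocessing import LabelEncoder

raw_streets = ["ridge road se", "georgia avenue nw", "ridge road se"]
le = LabelEncoder()
encoded = le.fit_transform(raw_streets)             # codes assigned at "training" time
street_glossary = dict(zip(raw_streets, encoded))   # raw value -> encoded value
print(street_glossary)  # {'ridge road se': 1, 'georgia avenue nw': 0}
```

Any new instance can then be encoded through the same glossary, so its codes match what the model saw during training.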
###Code
with open('x_y_z.pickle', 'rb') as data:
Data = pickle.load(data)
X = Data[0]
y = Data[1]
z = Data[2]
X_raw = Data[3]
X = X[["hour","street", "month", "day", "LATITUDE", "LONGITUDE", "Temperature"]]
X_raw = X_raw[["hour","street", "month", "day", "LATITUDE", "LONGITUDE", "Temperature"]]
###Output
_____no_output_____
###Markdown
Identifying minimum and maximum values of the numerical features
###Code
print(X_raw['LATITUDE'].min())
print(X_raw['LATITUDE'].max())
print(X_raw['LONGITUDE'].min())
print(X_raw['LONGITUDE'].max())
print(X_raw['day'].min())
print(X_raw['day'].max())
print(X_raw['Temperature'].min())
print(X_raw['Temperature'].max())
print(X_raw['hour'].min())
print(X_raw['hour'].max())
X_raw.info()
###Output
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 277795 entries, 2010-06-11 04:00:00 to 2018-06-11 03:00:00
Data columns (total 7 columns):
hour 277795 non-null float64
street 277795 non-null object
month 277795 non-null float64
day 277795 non-null float64
LATITUDE 277795 non-null float64
LONGITUDE 277795 non-null float64
Temperature 277795 non-null float64
dtypes: float64(6), object(1)
memory usage: 17.0+ MB
###Markdown
Creating the encoded - unencoded translator function:
###Code
def decoder(encoded, unencoded):
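    # Build a lookup table mapping each raw (unencoded) value to the encoded value
    # it was assigned when the training dataframe was prepared.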
Glossary = {}
for i, j in zip(unencoded, encoded):
Glossary[i] = j
return (Glossary)
gloss = decoder(X['LATITUDE'], X_raw['LATITUDE'])
print(gloss)
###Output
{38.897: 77, 38.942: 122, 38.886: 66, 38.889: 69, 38.906: 86, 38.918: 98, 38.898: 78, 38.926: 106, 38.95: 130, 38.915: 95, 38.894: 74, 38.863: 43, 38.875: 55, 38.957: 137, 38.871: 51, 38.885: 65, 38.844: 24, 38.904: 84, 38.908: 88, 38.89: 70, 38.967: 147, 38.907: 87, 38.954: 134, 38.9: 80, 38.88: 60, 38.905: 85, 38.925: 105, 38.928: 108, 38.931: 111, 38.948: 128, 38.934: 114, 38.903: 83, 38.956: 136, 38.833: 13, 38.909: 89, 38.944: 124, 38.919: 99, 38.91: 90, 38.883: 63, 38.929: 109, 38.881: 61, 38.895: 75, 38.945: 125, 38.941: 121, 38.899: 79, 38.892: 72, 38.872: 52, 38.901: 81, 38.936: 116, 38.847: 27, 38.94: 120, 38.927: 107, 38.92: 100, 38.829: 9, 38.854: 34, 38.916: 96, 38.825: 5, 38.923: 103, 38.842: 22, 38.914: 94, 38.868: 48, 38.873: 53, 38.939: 119, 38.911: 91, 38.937: 117, 38.958: 138, 38.888: 68, 38.922: 102, 38.822: 2, 38.946: 126, 38.952: 132, 38.893: 73, 38.884: 64, 38.869: 49, 38.848: 28, 38.857: 37, 38.93: 110, 38.983: 163, 38.935: 115, 38.949: 129, 38.832: 12, 38.932: 112, 38.874: 54, 38.891: 71, 38.96: 140, 38.972: 152, 38.84: 20, 38.924: 104, 38.902: 82, 38.856: 36, 38.845: 25, 38.921: 101, 38.953: 133, 38.933: 113, 38.912: 92, 38.87: 50, 38.896: 76, 38.917: 97, 38.851: 31, 38.913: 93, 38.961: 141, 38.943: 123, 38.858: 38, 38.878: 58, 38.876: 56, 38.882: 62, 38.951: 131, 38.859: 39, 38.963: 143, 38.955: 135, 38.836: 16, 38.828: 8, 38.959: 139, 38.969: 149, 38.849: 29, 38.861: 41, 38.985: 165, 38.865: 45, 38.862: 42, 38.887: 67, 38.879: 59, 38.965: 145, 38.938: 118, 38.877: 57, 38.864: 44, 38.855: 35, 38.973: 153, 38.852: 32, 38.947: 127, 38.823: 3, 38.986: 166, 38.83: 10, 38.85: 30, 38.991: 171, 38.826: 6, 38.987: 167, 38.974: 154, 38.835: 15, 38.839: 19, 38.975: 155, 38.86: 40, 38.971: 151, 38.964: 144, 38.977: 157, 38.838: 18, 38.962: 142, 38.966: 146, 38.834: 14, 38.831: 11, 38.846: 26, 38.827: 7, 38.98: 160, 38.843: 23, 38.981: 161, 38.821: 1, 38.866: 46, 38.976: 156, 38.989: 169, 38.824: 4, 38.97: 150, 38.853: 33, 38.994: 174, 38.867: 47, 38.988: 168, 38.968: 148, 38.837: 17, 38.979: 159, 38.982: 162, 38.978: 158, 38.841: 21, 38.984: 164, 38.99: 170, 38.992: 172, 38.82: 0, 38.993: 173, 38.995: 175}
###Markdown
Dine Out, Order_in: Importing the estimators:
###Code
with open('offensegroup_upsampled_model.pickle', 'rb') as model:
estimators = pickle.load(model)
# print(estimators) # 0: knn, 1: RandomForest, 2: SGD, 3: BaggingClassifier, 4: PassiveAgressiveClassifier
###Output
_____no_output_____
###Markdown
Creating the user interface:
###Code
hour = widgets.BoundedFloatText(value = 5, min=0, max=23, description='Hour:')
street = widgets.Text(value = 'ridge road se', description='Street Name:')
month = widgets.BoundedFloatText(value = 6, min = 1, max = 12, description='Month:')
day = widgets.BoundedFloatText(value = 5, min = 1, max = 31, description='Day:')
lat = widgets.FloatText(value = 38.886, min = 38.820, max = 38.995 , step=0.001, description='LATITUDE:')
lon = widgets.FloatText(value = -76.949, min = -77.114, max = -76.910, step=0.001, description='LONGITUDE:')
temp = widgets.BoundedFloatText(value = 62, min = -10, max = 130, description='Temperature:')
def funct1(hour, street, month, day, lat, lon, temp):
dictionary = {'hour': [],'street': [],
'month': [], 'day': [], 'LATITUDE': [],
'LONGITUDE': [], 'Temperature': []}
Hour = decoder(X['hour'], X_raw['hour'])
Street = decoder(X['street'], X_raw['street'])
Month = decoder(X['month'], X_raw['month'])
Day = decoder(X['day'], X_raw['day'])
Lat = decoder(X['LATITUDE'], X_raw['LATITUDE'])
Lon = decoder(X['LONGITUDE'], X_raw['LONGITUDE'])
Temp = decoder(X['Temperature'], X_raw['Temperature'])
dictionary['hour'] = [Hour[hour.value]]
dictionary['street'] = [Street[street.value]]
dictionary['month'] = [Month[month.value]]
dictionary['day'] = [Day[day.value]]
dictionary['LATITUDE'] = [Lat[lat.value]]
dictionary['LONGITUDE'] = [Lon[lon.value]]
dictionary['Temperature'] = [Temp[temp.value]]
return dictionary
#return Hour[hour.value], Street[street.value], Month[month.value], Day[day.value], Lat[lat.value], Lon[lon.value], Temp[temp.value]
def funct2(b):
try:
scaler = StandardScaler()
data = funct1(hour, street, month, day, lat, lon, temp)
data = pd.DataFrame(data)
#data = np.array(data)
#data = data.reshape(-1,1)
data = scaler.fit(data).transform(data)
results = estimators[1].predict(data)
final_results = []
for i in results:
if i == 0:
i = 'Dine out'
else:
i = 'Order In'
final_results.append(i)
print (final_results)
    except KeyError:
        print("Value is not in the database, we cannot predict crime safety for this particular place and time. Please try again")
button = widgets.Button(description="Predict!")
display(hour,street,month,day,lat,lon,temp,button)
button.on_click(funct2)
38.8980464,-77.0354491
###Output
_____no_output_____
###Markdown
Testing on the whole dataset:
###Code
38.8658045,-76.9913079
scaler = StandardScaler()
test = scaler.fit(X).transform(X)
t = estimators[3].predict(X)
t = pd.Series(t)
t.value_counts()
###Output
_____no_output_____
###Markdown
It is very interesting that here the only model that produced consistent predictions was the bagging classifier. The "Order In"/"Dine Out" proportion is very close to that of the real "offensegroup" variable. KNN was the most accurate model, but it only predicted 0 ('Dine Out') values.

Dine out/metro, Dine out, Order In
###Code
with open('ucrrank_upsampled_model.pickle', 'rb') as model:
estimators_ucr = pickle.load(model)
def funct3(b):
#try:
scaler = StandardScaler()
data = funct1(hour, street, month, day, lat, lon, temp)
data = pd.DataFrame(data)
data = scaler.fit(data).transform(data)
results_ucr = estimators_ucr[3].predict(data)
final_results_ucr = []
for i in results_ucr:
if i == 3:
i = 'Order In'
elif i == 2:
i = 'Dine Out/Metro'
else:
i = 'Dine Out'
final_results_ucr.append(i)
print(final_results_ucr)
#except:
# print("Value is not in the database, we cannot predict crime safety for this particular place and time, Please try again")
hour = widgets.BoundedFloatText(value = 5, min=0, max=23, description='Hour:')
street = widgets.Text(value = 'ridge road se', description='Street Name:')
month = widgets.BoundedFloatText(value = 6, min = 1, max = 12, description='Month:')
day = widgets.BoundedFloatText(value = 5, min = 1, max = 31, description='Day:')
lat = widgets.FloatText(value = 38.886, min = 38.820, max = 38.995 , step=0.001, description='LATITUDE:')
lon = widgets.FloatText(value = -76.949, min = -77.114, max = -76.910, step=0.001, description='LONGITUDE:')
temp = widgets.BoundedFloatText(value = 62, min = -10, max = 130, description='Temperature:')
button = widgets.Button(description="Predict!")
display(hour,street,month,day,lat,lon,temp,button)
button.on_click(funct3)
scaler = StandardScaler()
test = scaler.fit(X).transform(X)
r = estimators_ucr[3].predict(X)
r = pd.Series(r)
#r.head(1000)
r.value_counts()
###Output
_____no_output_____
###Markdown
Classification reports: (from the last Kfold)
###Code
reports = [ 'offensegroup_upsampled_report.pickle', 'offensegroup_downsampled_report.pickle',
'ucrrank_upsampled_report.pickle', 'ucrrank_downsampled_report.pickle']
classification_matrix = []
with open('offensegroup_upsampled_report.pickle', 'rb') as upsampled_model_offensegroup:
estimator_upsampled_offensegroup = pickle.load(upsampled_model_offensegroup)
with open('ucrrank_upsampled_report.pickle', 'rb') as upsampled_model_ucrrank:
estimator_upsampled_ucrrank = pickle.load(upsampled_model_ucrrank)
###Output
_____no_output_____
###Markdown
list order: [predicted_knn, predicted_rnf, predicted_sgd, predicted_baggin, predicted_pas, report_knn, report_rnf, report_sgd, report_baggin, report_pas]
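To make the magic indices used in the cells below easier to read, here is a small helper mapping (my own addition, inferred from the list order above; the dictionary and its key names are hypothetical):

```python
# Positions of the classification report strings in the pickled list
# (slots 0-4 hold the prediction arrays, slots 5-9 the report strings).
report_index = {'knn': 5, 'random_forest': 6, 'sgd': 7, 'bagging': 8, 'passive_aggressive': 9}
# e.g. estimator_upsampled_offensegroup[report_index['bagging']] is the BaggingClassifier report
```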
###Code
print("Classification Report for Upsampled Knn on offensegroup: \n\n"+estimator_upsampled_offensegroup[5])
print("==============================================================================")
print("Classification Report for Upsampled RandomForest on offensegroup: \n\n"+estimator_upsampled_offensegroup[6])
print("==============================================================================")
# 0 Nonviolent, 1 is violent.
print("Classification Report for Upsampled BaggingClassifier on offensegroup: \n\n"+estimator_upsampled_offensegroup[8])
###Output
Classification Report for Upsampled Knn on offensegroup:
precision recall f1-score support
0 0.68 0.63 0.65 19218
1 0.66 0.70 0.68 19218
avg / total 0.67 0.67 0.67 38436
==============================================================================
Classification Report for Upsampled RandomForest on offensegroup:
precision recall f1-score support
0 0.64 0.94 0.76 19218
1 0.89 0.48 0.62 19218
avg / total 0.77 0.71 0.69 38436
==============================================================================
Classification Report for Upsampled BaggingClassifier on offensegroup:
precision recall f1-score support
0 0.63 0.64 0.63 19218
1 0.63 0.62 0.63 19218
avg / total 0.63 0.63 0.63 38436
###Markdown
50 Restaurant locations:
###Code
locations = pd.read_csv('locations.csv')
###Output
_____no_output_____
###Markdown
Add current year, month, day and hour to the locations dataframe
###Code
now = datetime.datetime.now()
print("Current date and time using str method of datetime object:")
print(str(now))
def label_locations_month (row):
month = int(now.month)
return month
locations['month'] = locations.apply (lambda row: label_locations_month (row),axis=1)
def label_locations_day (row):
day = int(now.day)
return day
locations['day'] = locations.apply (lambda row: label_locations_day (row),axis=1)
def label_locations_hour (row):
hour = int(now.hour)
return hour
locations['hour'] = locations.apply (lambda row: label_locations_hour (row),axis=1)
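
# Note: a simpler, hypothetical alternative (not part of the original notebook) --
# pandas broadcasts scalars, so the three apply() calls above could be replaced by:
#   locations['month'] = int(now.month)
#   locations['day'] = int(now.day)
#   locations['hour'] = int(now.hour)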
###Output
_____no_output_____
###Markdown
Add current temperature to locations dataframe
###Code
URL = "http://api.openweathermap.org/data/2.5/weather?q=Washington,us&units=imperial&appid=c267487c712e3fa110ff1d1b9eccc88b"
def fetch_data(fname="weather"):
"""
    Helper method to retrieve the current Washington, DC weather data and save it to a local file.
"""
response = requests.get(URL)
outpath = os.path.abspath(fname)
with open(outpath, 'wb') as f:
f.write(response.content)
return outpath
DATA = fetch_data()
import json
with open('weather') as json_data:
current_weather = json.load(json_data)
print(current_weather)
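
# Note: a hypothetical simplification (not used above) -- requests can parse JSON
# directly, which avoids writing the temporary 'weather' file:
#   current_weather = requests.get(URL).json()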
def label_locations_temp (row):
temperature = current_weather['main']['temp']
return temperature
locations['temperature'] = locations.apply (lambda row: label_locations_temp (row),axis=1)
temperature = locations['temperature'][0]
temperature
locations['temperature'] = locations['temperature'].round(decimals = 0)
locations.head(5)
###Output
_____no_output_____
###Markdown
Prepare dataframes for demo
###Code
locations_business = locations.drop(['id', 'categories', 'category01', 'city', 'coordinates', 'country', 'image_url', 'location_id', 'phone', 'rating', 'review_count', 'state', 'street_name', 'street_number', 'url', 'zip_code'], axis=1)
locations_ML = locations.drop(['business_name'], axis=1)
locations_ML = locations_ML[['longitude', 'latitude', 'month', 'day', 'hour', 'temperature', 'street_name']]
locations_ML = locations_ML.rename(columns={"longitude": "LONGITUDE", "latitude": "LATITUDE", "temperature": "Temperature",
"hour": "hour", "day": "day", "street_name": "street"})
locations_ML = locations_ML.dropna()
locations_ML['LATITUDE'] = locations_ML['LATITUDE'].round(decimals = 3)
locations_ML['LONGITUDE'] =locations_ML['LONGITUDE'].round(decimals = 3)
locations_ML['street']= locations_ML['street'].replace("st", "St")
lis = []
for row in locations_ML['street']:
row = row.replace("st", "St")
lis.append(row)
locations_ML['street'] = lis
locations_ML['street'] = locations_ML['street'].replace(["pennsylvania ave nw", "wisconsin ave nw","sherman ave nw",
"connecticut ave nw", "new hampshire ave nw","massachusetts ave nw",
"georgia ave nw", "rhode island ave nw", "pennsylvania ave se"],
["pennsylvania avenue nw", "wisconsin avenue nw","sherman avenue nw",
"connecticut avenue nw", "new hampshire avenue nw", "massachusetts avenue nw",
"georgia avenue nw", "rhode island avenue nw", "pennsylvania avenue se"])
locations_ML = locations_ML.drop(locations_ML.index[16])
locations_ML = locations_ML.drop(locations_ML.index[31])
#locations_ML = locations_ML.drop(locations_ML.index[31])
locations_ML
def proper_encode(hour, street, month, day, lat, lon, temp):
dictionary = {'hour': [],'street': [],
'month': [], 'day': [], 'LATITUDE': [],
'LONGITUDE': [], 'Temperature': []}
Hour = decoder(X['hour'], X_raw['hour'])
Street = decoder(X['street'], X_raw['street'])
Month = decoder(X['month'], X_raw['month'])
Day = decoder(X['day'], X_raw['day'])
Lat = decoder(X['LATITUDE'], X_raw['LATITUDE'])
Lon = decoder(X['LONGITUDE'], X_raw['LONGITUDE'])
Temp = decoder(X['Temperature'], X_raw['Temperature'])
dictionary['hour'] = [Hour[hour.value]]
dictionary['street'] = [Street[street.value]]
dictionary['month'] = [Month[month.value]]
dictionary['day'] = [Day[day.value]]
dictionary['LATITUDE'] = [Lat[lat.value]]
dictionary['LONGITUDE'] = [Lon[lon.value]]
dictionary['Temperature'] = [Temp[temp.value]]
return dictionary
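
# NOTE: the proper_encode draft immediately below is incomplete (its loop body is
# unfinished and it returns nothing); it is superseded by the full definition further down.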
def proper_encode(df):
dictionary = {'hour': [],'street': [],
'month': [], 'day': [], 'LATITUDE': [],
'LONGITUDE': [], 'Temperature': []}
Hour = decoder(X['hour'], X_raw['hour'])
Street = decoder(X['street'], X_raw['street'])
Month = decoder(X['month'], X_raw['month'])
Day = decoder(X['day'], X_raw['day'])
Lat = decoder(X['LATITUDE'], X_raw['LATITUDE'])
Lon = decoder(X['LONGITUDE'], X_raw['LONGITUDE'])
Temp = decoder(X['Temperature'], X_raw['Temperature'])
for colname, col in df.iteritems():
dictionary['hour'] = [Hour[col]]
def proper_encode(hour, street, month, day, lat, lon, temp):
dictionary = {'hour': [],'street': [],
'month': [], 'day': [], 'LATITUDE': [],
'LONGITUDE': [], 'Temperature': []}
Hour = decoder(X['hour'], X_raw['hour'])
Street = decoder(X['street'], X_raw['street'])
Month = decoder(X['month'], X_raw['month'])
Day = decoder(X['day'], X_raw['day'])
Lat = decoder(X['LATITUDE'], X_raw['LATITUDE'])
Lon = decoder(X['LONGITUDE'], X_raw['LONGITUDE'])
Temp = decoder(X['Temperature'], X_raw['Temperature'])
for i,j,k,l,m,n, o in zip(hour, street, month, day, lat, lon, temp):
dictionary['hour'].append(Hour[i])
dictionary['street'].append(Street[j])
dictionary['month'].append(Month[k])
dictionary['day'].append(Day[l])
dictionary['LATITUDE'].append(Lat[m])
dictionary['LONGITUDE'].append(Lon[n])
dictionary['Temperature'].append(Temp[o])
return dictionary
locations_ML = proper_encode(locations_ML['hour'],locations_ML['street'],locations_ML['month'],
locations_ML['day'], locations_ML['LATITUDE'],
locations_ML['LONGITUDE'], locations_ML['Temperature'] )
locations_ML
locations_ML = pd.DataFrame(locations_ML)
locations_ML
scaler = StandardScaler()
test = scaler.fit(locations_ML).transform(locations_ML)
t = estimators[3].predict(locations_ML)
t = pd.Series(t)
t
###Output
_____no_output_____ |
examples/WatsonOpenScaleAndAzureMLengineExampleOutput.ipynb | ###Markdown
Preview table content.
###Code
subscription.feedback_logging.show_table()
###Output
_____no_output_____
###Markdown
Describe table (calculate basic statistics).
###Code
subscription.feedback_logging.describe_table()
###Output
AGE
count 11.0
mean 27.0
std 0.0
min 27.0
25% 27.0
50% 27.0
75% 27.0
max 27.0
###Markdown
Get table content.
###Code
feedback_pd = subscription.feedback_logging.get_table_content(format='pandas')
###Output
_____no_output_____
###Markdown
6.3 Quality metrics table
###Code
subscription.quality_monitoring.print_table_schema()
subscription.quality_monitoring.show_table()
###Output
_____no_output_____
###Markdown
6.4 Performance metrics table
###Code
subscription.performance_monitoring.print_table_schema()
subscription.performance_monitoring.show_table()
###Output
_____no_output_____
###Markdown
6.5 Data Mart measurement facts table
###Code
client.data_mart.get_deployment_metrics()
###Output
_____no_output_____
###Markdown
Working with Azure Machine Learning Studio engine

This notebook shows how to log the payload for a model deployed on the Microsoft Azure serving engine using the Watson OpenScale Python SDK.

Contents
- [1. Setup](#setup)
- [2. Binding machine learning engine](#binding)
- [3. Subscriptions](#subscription)
- [4. Scoring and payload logging](#scoring)
- [5. Feedback logging](#feedback)
- [6. Data Mart](#datamart)

1. Setup

1.0 Sample model creation using [Azure Machine Learning Studio](https://studio.azureml.net)
- Download training data set from [here](https://github.com/pmservice/wml-sample-models/raw/master/spark/product-line-prediction/data/GoSales_Tx.csv)
- [Create an experiment in Azure ML Studio](https://docs.microsoft.com/en-us/azure/machine-learning/studio/create-experiment) using the diagram below. (You can search for each module in the palette by name.)
- When you get to the `Train Model` module, select the `Product Line` column as the label.
- Run the experiment to train the model.
- [Create (deploy) web service](https://docs.microsoft.com/en-us/azure/machine-learning/studio/publish-a-machine-learning-web-service) (Choose the `new` NOT `classic`)

**NOTE:** Classic web services are not supported.

1.1 Installation and authentication
###Code
!pip install ibm-ai-openscale==1.0.429 --no-cache | tail -n 1
###Output
Successfully installed ibm-ai-openscale-1.0.456
###Markdown
Import and initiate.
###Code
from ibm_ai_openscale import APIClient
from ibm_ai_openscale.supporting_classes import PayloadRecord
from ibm_ai_openscale.supporting_classes.enums import InputDataType, ProblemType
from ibm_ai_openscale.engines import *
from ibm_ai_openscale.utils import *
###Output
_____no_output_____
###Markdown
ACTION: Get Watson OpenScale `instance_guid` and `apikey`

[Install IBM Cloud (bluemix) console](https://console.bluemix.net/docs/cli/reference/ibmcloud/download_cli.html#install_use)

Use the IBM Cloud CLI to get an api key:

```bash
ibmcloud login --sso
ibmcloud iam api-key-create 'my_key'
```

Get your Watson OpenScale instance GUID:

> if your resource group is different than `default`, switch to the resource group containing the Watson OpenScale instance

```bash
ibmcloud target -g
```

Get details of the instance:

```bash
ibmcloud resource service-instance "Watson-OpenScale-instance_name"
```

Let's define some constants required to set up the data mart:
- WATSON_OS_CREDENTIALS
- POSTGRES_CREDENTIALS
- SCHEMA_NAME
###Code
WATSON_OS_CREDENTIALS = {
"url": "https://api.aiopenscale.cloud.ibm.com",
"instance_guid": "****",
"apikey": "****"
}
POSTGRES_CREDENTIALS = {
"db_type": "postgresql",
"uri_cli_1": "xxx",
"maps": [],
"instance_administration_api": {
"instance_id": "xxx",
"root": "xxx",
"deployment_id": "xxx"
},
"name": "xxx",
"uri_cli": "xxx",
"uri_direct_1": "xxx",
"ca_certificate_base64": "xxx",
"deployment_id": "xxx",
"uri": "xxx"
}
SCHEMA_NAME = 'data_mart_for_azure'
###Output
_____no_output_____
###Markdown
Create schema for data mart.
###Code
create_postgres_schema(postgres_credentials=POSTGRES_CREDENTIALS, schema_name=SCHEMA_NAME)
client = APIClient(WATSON_OS_CREDENTIALS)
client.version
###Output
_____no_output_____
###Markdown
1.2 DataMart setup
###Code
client.data_mart.setup(db_credentials=POSTGRES_CREDENTIALS, schema=SCHEMA_NAME)
data_mart_details = client.data_mart.get_details()
###Output
_____no_output_____
###Markdown
2. Bind machine learning engines

2.1 Bind `Azure` machine learning engine
Provide credentials using the following fields:
- `client_id`
- `client_secret`
- `subscription_id`
- `tenant`
###Code
AZURE_ENGINE_CREDENTIALS = {
"client_id": "***",
"client_secret": "***",
"subscription_id": "***",
"tenant": "***"
}
binding_uid = client.data_mart.bindings.add('My Azure ML Studio engine', AzureMachineLearningInstance(AZURE_ENGINE_CREDENTIALS))
bindings_details = client.data_mart.bindings.get_details()
client.data_mart.bindings.list()
###Output
_____no_output_____
###Markdown
3. Subscriptions

3.1 Add subscriptions

List available deployments.
**Note:** Depending on the number of assets it may take some time.
###Code
client.data_mart.bindings.list_assets()
###Output
_____no_output_____
###Markdown
**Action:** Assign your source_uid to `source_uid` variable below.
###Code
source_uid = '986fd3e779b52d0e23a2bde5b6da996c'
subscription = client.data_mart.subscriptions.add(
AzureMachineLearningAsset(source_uid=source_uid,
binding_uid=binding_uid,
input_data_type=InputDataType.STRUCTURED,
problem_type=ProblemType.MULTICLASS_CLASSIFICATION,
label_column='PRODUCT_LINE',
prediction_column='Scored Labels'))
###Output
_____no_output_____
###Markdown
Get subscriptions list
###Code
subscriptions = client.data_mart.subscriptions.get_details()
subscriptions_uids = client.data_mart.subscriptions.get_uids()
print(subscriptions_uids)
###Output
['986fd3e779b52d0e23a2bde5b6da996c']
###Markdown
List subscriptions
###Code
client.data_mart.subscriptions.list()
###Output
_____no_output_____
###Markdown
4. Scoring and payload logging 4.1 Score the product line model and measure response time
###Code
import requests
import time
import json
subscription_details = subscription.get_details()
scoring_url = subscription_details['entity']['deployments'][0]['scoring_endpoint']['url']
data = {
"Inputs": {
"input1":
[
{
'GENDER': "F",
'AGE': 27,
'MARITAL_STATUS': "Single",
'PROFESSION': "Professional",
'PRODUCT_LINE': "Personal Accessories",
}
],
},
"GlobalParameters": {
}
}
body = str.encode(json.dumps(data))
token = subscription_details['entity']['deployments'][0]['scoring_endpoint']['credentials']['token']
headers = subscription_details['entity']['deployments'][0]['scoring_endpoint']['request_headers']
headers['Authorization'] = ('Bearer ' + token)
start_time = time.time()
response = requests.post(url=scoring_url, data=body, headers=headers)
response_time = int((time.time() - start_time) * 1000)  # response time in milliseconds
result = response.json()
print(json.dumps(result, indent=2))
###Output
{
"Results": {
"output1": [
{
"GENDER": "F",
"AGE": "27",
"MARITAL_STATUS": "Single",
"PROFESSION": "Professional",
"PRODUCT_LINE": "Personal Accessories",
"Scored Probabilities for Class \"Camping Equipment\"": "0",
"Scored Probabilities for Class \"Golf Equipment\"": "0",
"Scored Probabilities for Class \"Mountaineering Equipment\"": "0.0570687164231906",
"Scored Probabilities for Class \"Outdoor Protection\"": "0",
"Scored Probabilities for Class \"Personal Accessories\"": "0.942931283576809",
"Scored Labels": "Personal Accessories"
}
]
}
}
###Markdown
4.2 Store the request and response in payload logging table Transform the model's input and output to the format compatible with Watson OpenScale standard.
###Code
request_data = {'fields': list(data['Inputs']['input1'][0]),
'values': [list(x.values()) for x in data['Inputs']['input1']]}
response_data = {'fields': list(result['Results']['output1'][0]),
'values': [list(x.values()) for x in result['Results']['output1']]}
###Output
_____no_output_____
###Markdown
Store the payload using Python SDK **Hint:** You can embed payload logging code into your custom deployment so it is logged automatically each time you score the model.
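As a rough sketch of that hint (my own illustration; `scoring_fn` is a stand-in for whatever call your deployment uses to score), a wrapper can time the request and store a `PayloadRecord` exactly as the next cell does:

```python
# Sketch only: wrap a scoring call so every request/response pair is logged.
# `scoring_fn` is hypothetical -- substitute your deployment's scoring call.
import time

def score_and_log(subscription, scoring_fn, request_data):
    start = time.time()
    response_data = scoring_fn(request_data)
    elapsed_ms = int((time.time() - start) * 1000)
    record = PayloadRecord(request=request_data,
                           response=response_data,
                           response_time=elapsed_ms)
    subscription.payload_logging.store(records=[record])
    return response_data
```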
###Code
records_list = [PayloadRecord(request=request_data, response=response_data, response_time=response_time),
PayloadRecord(request=request_data, response=response_data, response_time=response_time)]
for i in range(1, 10):
records_list.append(PayloadRecord(request=request_data, response=response_data, response_time=response_time))
subscription.payload_logging.store(records=records_list)
###Output
_____no_output_____
###Markdown
Store the payload using REST API Get the token first.
###Code
token_endpoint = "https://iam.bluemix.net/identity/token"
headers = {
"Content-Type": "application/x-www-form-urlencoded",
"Accept": "application/json"
}
data = {
"grant_type":"urn:ibm:params:oauth:grant-type:apikey",
"apikey":WATSON_OS_CREDENTIALS["apikey"]
}
req = requests.post(token_endpoint, data=data, headers=headers)
token = req.json()['access_token']
###Output
_____no_output_____
###Markdown
Store the payload.
###Code
import requests, uuid
PAYLOAD_STORING_HREF_PATTERN = '{}/v1/data_marts/{}/scoring_payloads'
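# NOTE: WATSON_OS_CREDENTIALS as defined earlier in this notebook has no 'data_mart_id'
# key; add one (assumed here to be the data mart GUID) before running this cell.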
endpoint = PAYLOAD_STORING_HREF_PATTERN.format(WATSON_OS_CREDENTIALS['url'], WATSON_OS_CREDENTIALS['data_mart_id'])
payload = [{
'binding_id': binding_uid,
'deployment_id': subscription.get_details()['entity']['deployments'][0]['deployment_id'],
'subscription_id': subscription.uid,
'scoring_id': str(uuid.uuid4()),
'response': response_data,
'request': request_data
}]
headers = {"Authorization": "Bearer " + token}
req_response = requests.post(endpoint, json=payload, headers = headers)
print("Request OK: " + str(req_response.ok))
###Output
Request OK: True
###Markdown
5. Feedback logging & quality (accuracy) monitoring

Enable quality monitoring

You need to provide the monitoring `threshold` and `min_records` (minimal number of feedback records).
###Code
subscription.quality_monitoring.enable(threshold=0.7, min_records=10)
###Output
_____no_output_____
###Markdown
Feedback records logging

Feedback records are used to evaluate your model. The predicted values are compared to real values (feedback records). You can check the schema of the feedback table using the method below.
###Code
subscription.feedback_logging.print_table_schema()
###Output
_____no_output_____
###Markdown
The feedback records can be sent to the feedback table using the code below.
###Code
fields = ["GENDER", "AGE", "MARITAL_STATUS", "PROFESSION", "PRODUCT_LINE"]
records = [
["F", "27", "Single", "Professional", "Personal Accessories"],
["M", "27", "Single", "Professional", "Personal Accessories"]]
for i in range(1,10):
records.append(["F", "27", "Single", "Professional", "Personal Accessories"])
subscription.feedback_logging.store(feedback_data=records, fields=fields)
###Output
_____no_output_____
###Markdown
Run quality monitoring on demand

By default, quality monitoring runs on an hourly schedule. You can also trigger it on demand using the code below.
###Code
run_details = subscription.quality_monitoring.run()
###Output
_____no_output_____
###Markdown
Since the monitoring runs in the background, you can use the method below to check the status of the job.
###Code
status = run_details['status']
id = run_details['id']
print("Run status: {}".format(status))
start_time = time.time()
elapsed_time = 0
while status != 'completed' and elapsed_time < 60:
time.sleep(10)
run_details = subscription.quality_monitoring.get_run_details(run_uid=id)
status = run_details['status']
elapsed_time = time.time() - start_time
print("Run status: {}".format(status))
###Output
Run status: initializing
Run status: completed
###Markdown
Show the quality metrics
###Code
subscription.quality_monitoring.show_table()
###Output
_____no_output_____
###Markdown
Get all calculated metrics.
###Code
deployment_uids = subscription.get_deployment_uids()
subscription.quality_monitoring.get_metrics(deployment_uid=deployment_uids[0])
###Output
_____no_output_____
###Markdown
6. Get the logged data

6.1 Payload logging

Print schema of payload_logging table
###Code
subscription.payload_logging.print_table_schema()
###Output
_____no_output_____
###Markdown
Describe the table (calculate basic statistics).
###Code
subscription.payload_logging.describe_table()
###Output
AGE
count 8.0
mean 27.0
std 0.0
min 27.0
25% 27.0
50% 27.0
75% 27.0
max 27.0
###Markdown
Return the table content as pandas dataframe
###Code
pandas_df = subscription.payload_logging.get_table_content(format='pandas')
###Output
_____no_output_____
###Markdown
6.2 Feedback logging

Check the schema of the table.
###Code
subscription.feedback_logging.print_table_schema()
###Output
_____no_output_____ |
examples/AmmoniaLevelPopulation.ipynb | ###Markdown
Notes on the Ammonia model Exact equation from the [source code](https://github.com/pyspeckit/pyspeckit/blob/master/pyspeckit/spectrum/models/ammonia.pyL210): population_upperstate = lin_ntot * orthoparafrac * partition/(Z.sum()) tau_dict[linename] = (population_upperstate / (1. + np.exp(-h*frq/(kb*tkin) ))*ccms**2 / (8*np.pi*frq**2) * aval * (1-np.exp(-h*frq/(kb*tex))) / (width/ckms*frq*np.sqrt(2*np.pi)) ) \begin{equation} \tau = N_{tot} g_{opr} Z_{upper} \frac{A_{ij} c^2}{8\pi\nu^2} \left(1-\exp{ \frac{-h \nu}{k_B T_{ex}} } \right) \left(1+\exp{\frac{-h \nu}{k_B T_K}}\right) \left((2\pi)^{1/2} \nu \sigma_\nu / c\right)^{-1}\end{equation} Equation 16 from Rosolowsky et al 2008:$$N(1,1) = \frac{8 \pi k \nu_0^2}{h c^3} \frac{1}{A_{1,1}} \sqrt{2\pi}\sigma_\nu (T_{ex}-T_{bg})\tau$$ Rearranges to:$$\tau = N(1,1) \frac{h c^3}{8\pi k \nu_0^2} A_{1,1} \frac{1}{\sqrt{2 \pi} \sigma_\nu} \left(T_{ex}-T_{bg}\right)^{-1}$$ Equation A4 of Friesen et al 2009:$$N(1,1) = \frac{8\pi\nu^2}{c^2} \frac{g_1}{g_2} \frac{1}{A_{1,1}} \frac{1+\exp\left(-h\nu_0/k_B T_{ex}\right)}{1-\exp\left(-h \nu_0/k_B T_{ex}\right)} \int \tau(\nu) d\nu$$ Equation 98 of Mangum & Shirley 2015:$$N_{tot} = \frac{3 h}{8 \pi \mu^2 R_i} \frac{J_u(J_u+1)}{K^2} \frac{Q_{rot}}{g_J g_K g_I} \frac{\exp{E_u/k_B T_{ex}}}{\exp{h \nu/k_B T_{ex}} - 1} \left[\frac{\int T_R dv}{f\left(J_\nu(T_{ex})-J_\nu{T_B}\right) }\right]$$ From Scratch $$\tau_\nu = \int \alpha_\nu ds$$$$\alpha_\nu = \frac{c^2}{8\pi\nu_0^2} \frac{g_u}{g_l} n_l A_{ul} \left(1-\frac{g_l n_u}{g_u n_l}\right) \phi_\nu$$ Excitation temperature:$$T_{ex} \equiv \frac{h\nu_0/k_b}{\ln \frac{n_l g_u}{n_u g_l} } $$$\nu_0$ = rest frequency of the lineRearranges to:$$ \frac{n_l g_u}{n_u g_l} = \exp\left(\frac{h \nu_0}{k_B T_{ex}}\right)$$ Boltzman distribution:$$ \frac{n_u}{n_l} = \frac{g_u}{g_l} \exp\left(\frac{-h \nu_0}{k_B T}\right)$$where T is a thermal equilibrium temperature Rearranges to:$$ 1-\frac{n_u g_l}{n_l g_u} = 1-\exp\left(\frac{-h \nu_0}{k_B T}\right)$$ Column Density $$N_u \equiv \int n_u ds$$$$N_l \equiv \int n_l ds$$ Starting to substitute previous equations into each other:$$\tau_\nu d\nu= \alpha_\nu d\nu = \frac{c^2}{8\pi\nu_0^2} \frac{g_u}{g_l} n_l A_{ul} \left(1-\frac{g_l n_u}{g_u n_l}\right) \phi_\nu d\nu$$$$\frac{g_u}{g_l}N_l = N_u\exp\left(\frac{h \nu_0}{k_B T_{ex}}\right)$$ First substitution is the Boltzmann distribution, with $T_{ex}$ for T$$\int \tau_\nu d\nu = \int \frac{c^2}{8\pi\nu_0^2} \frac{g_u}{g_l} n_l A_{ul} \left[ 1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right) \right] \phi_\nu d\nu $$ Second is the $N_l$ - $N_u$ relation:$$\int \tau_\nu d\nu = \frac{c^2}{8\pi\nu_0^2} A_{ul} N_u\left[\exp\left(\frac{h \nu_0}{k_B T_{ex}}\right)\right] \left[ 1-\exp\left(\frac{-h \nu_0}{k_B T}\right) \right] \int \phi_\nu d\nu $$ Then some simplification:$$\int \tau_\nu d\nu = \frac{c^2}{8\pi\nu_0^2} A_{ul} N_u \left[ \exp\left(\frac{h \nu_0}{k_B T}\right) - 1 \right] \int \phi_\nu d\nu $$ $$A_{ul} = \frac{64\pi^4\nu_0^3}{3 h c^3} \left|\mu_{lu}\right|^2$$ Becomes, via some manipulation, equation 29 of Mangum & Shirley 2015:$$N_u = \frac{3 h c}{8\pi^3 \nu \left|\mu_{lu}\right|^2} \left[\exp\left(\frac{h\nu}{k_B T_{ex}}\right) -1\right]^{-1} \int \tau_\nu d\nu$$where I have used $T_{ex}$ instead of $T$ here because that is one of the substitutions invoked (quietly) in their derivation. 
There is some sleight-of-hand regarding assuming $N_l = n_l$ that essentially assumes $T_{ex}$ is constant along the line of sight, but that is fine.(Equation 30 is the same as this one, but with $dv$ instead of $d\nu$ units) Solve for tau again (because that's what's implemented in the code):$$\mathrm{"tau"} = \int \tau_\nu d\nu = N_u \frac{c^2 A_{ul}}{8\pi\nu_0^2} \left[\exp\left(\frac{h\nu}{k_B T_{ex}}\right) -1\right] $$ The key difference from Erik's derivation is that this is $N_u$, but he has defined $N_{(1,1)}= N_u + N_l$. So, we get $N_l$ the same way as above:$$N_l = \frac{8\pi\nu_0^2}{c^2} \frac{g_l}{g_u} A_{ul}^{-1} \left[ 1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right) \right]^{-1} \int \tau d\nu$$ $$N_l = \frac{3 h c}{8 \pi^3 \nu \left|\mu_{lu}\right|^2} \frac{g_l}{g_u} \left[ 1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right) \right]^{-1} \int \tau d\nu$$ Added together:$$N_u + N_l = \frac{3 h c}{8 \pi^3 \nu \left|\mu_{lu}\right|^2} \frac{\frac{g_l}{g_u} +\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)}{1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} \int \tau d\nu$$ We can solve that back for tau, which is what Erik has done:$$\int \tau d\nu = (N_u + N_l) \frac{8 \pi^3 \nu \left|\mu_{lu}\right|^2}{3 h c} \frac{1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} {\frac{g_l}{g_u} +\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} $$$$=(N_u + N_l) \frac{g_u}{g_l}\frac{8 \pi^3 \nu \left|\mu_{lu}\right|^2}{3 h c} \frac{1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} {1 +\frac{g_u}{g_l}\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} $$ $$=(N_u + N_l) \frac{g_u}{g_l}\frac{A_{ul}c^2}{8\pi\nu_0^2} \frac{1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} {1 +\frac{g_u}{g_l}\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} $$ now identical to Erik's equation. This is actually a problem, because $N_u$ is related to $N_{tot}$ via the partition function, but there is some double-counting going on if we try to relate $N_{(1,1)}$ to $N_{tot}$ with the same equation. So, to reformulate the equations in pyspeckit using the appropriate values, we want to use both the partition function (calculated using $T_{kin}$) and $N_u$. Eqn 31: $$N_u = N_{tot} \frac{g_u}{Q_{rot}} \exp\left(\frac{-E_u}{k_B T_{kin}}\right)$$ is implemented correctly in pyspeckit: population_upperstate = lin_ntot * orthoparafrac * partition/(Z.sum())where ``partition`` is $$Z_i(\mathrm{para}) = (2J + 1) \exp\left[ \frac{ -h (B_0 J (J+1) + (C_0-B_0)J^2)}{k_B T_{kin}}\right]$$$$Z_i(\mathrm{ortho}) = 2(2J + 1) \exp\left[ \frac{ -h (B_0 J (J+1) + (C_0-B_0)J^2)}{k_B T_{kin}}\right]$$...so I'm assuming (haven't checked) that $E_u = h (B_0 J (J+1) + (C_0-B_0)J^2)$ Note that the leading "2" above cancels out in the Z/sum(Z), so it doesn't matter if it's right or not. I suspect, though, that the 2 belongs in front of both the para and ortho states, but it should be excluded for the J=0 case. An aside by Erik Rosolowsky (Note May 16, 2018: I believe this was incorporated into the above analysis)EWR: The above equation is problematic because it relates the total column density to the $(J,J)$ state which is the equivalent of the $N_{(1,1)}$ term. 
In the notation above $N_{(1,1)} = N_u + N_l$, so to get this right, you need to consider the inversion transition splitting on top of the total energy of the state so that $$ E_u = h (B_0 J (J+1) + (C_0-B_0)J^2) + \Delta E_{\mathrm{inv}}, g_u = 1 $$ and $$ E_l = h (B_0 J (J+1) + (C_0-B_0)J^2) - \Delta E_{\mathrm{inv}}, g_l = 1 $$ or, since the splitting is small compared to the rotational energy (1 K compared to > 20 K), then$$Z_J \approx 2 (2J + 1) \exp\left[ \frac{ -h (B_0 J (J+1) + (C_0-B_0)J^2)}{k_B T_{\mathrm{rot}}}\right]$$where the leading 2 accounts for the internal inversion states. Since this 2 appears in all the terms, it cancels out in the sum. Note that I have also changed the $T_{\mathrm{kin}}$ to $T_{\mathrm{rot}}$ since these two aren't the same and it is the latter which establishes the level populations.Returning to the above, I would then suggest $$N_{(J,J)} = N_{tot} \frac{Z_J}{\sum_j Z_j} $$ Is the treatment of optical depth correct? May 16, 2018: https://github.com/pyspeckit/pyspeckit/blob/725746f517e9bdcc22b83f4f9d6c9b8666e0a99e/pyspeckit/spectrum/models/ammonia.pyIn this version, we [compute the optical depth](https://github.com/pyspeckit/pyspeckit/blob/725746f517e9bdcc22b83f4f9d6c9b8666e0a99e/pyspeckit/spectrum/models/ammonia.pyL318) with the code:``` for kk,nuo in enumerate(nuoff): tauprof_ = (tau_dict[linename] * tau_wts[kk] * np.exp(-(xarr.value+nuo-lines[kk])**2 / (2.0*nuwidth[kk]**2))) if return_components: components.append(tauprof_) tauprof += tauprof_``` The total tau is normalized such that $\Sigma(\tau_{hf})_{hf} = \tau_{tot}$ for each line, i.e., the hyperfine $\tau$s sum to the tau value specified for the line.The question Nico raised is, should we be computing the synthetic spectrum as $1-e^{\Sigma(\tau_{hf,\nu})}$ or $\Sigma(1-e^{\tau_{hf,\nu}})$?The former is correct: we only have one optical depth per frequency bin. It doesn't matter what line the optical depth comes from.
###Code
# This is a test to show what happens if you add lines vs. computing a single optical depth per channel
from pyspeckit.spectrum.models.ammonia_constants import (line_names, freq_dict, aval_dict, ortho_dict,
voff_lines_dict, tau_wts_dict)
from astropy import constants
from astropy import units as u
import pylab as pl
import numpy as np  # used below (np.linspace, np.exp, ...) but missing from the original imports
linename = 'oneone'
xarr_v = (np.linspace(-25,25,1000)*u.km/u.s)
xarr = xarr_v.to(u.GHz, u.doppler_radio(freq_dict['oneone']*u.Hz))
tauprof = np.zeros(xarr.size)
true_prof = np.zeros(xarr.size)
width = 0.1
xoff_v = 0
ckms = constants.c.to(u.km/u.s).value
pl.figure(figsize=(12,12))
pl.clf()
for ii,tau_tot in enumerate((0.001, 0.1, 1, 10,)):
tau_dict = {'oneone':tau_tot}
voff_lines = np.array(voff_lines_dict[linename])
tau_wts = np.array(tau_wts_dict[linename])
lines = (1-voff_lines/ckms)*freq_dict[linename]/1e9
tau_wts = tau_wts / (tau_wts).sum()
nuwidth = np.abs(width/ckms*lines)
nuoff = xoff_v/ckms*lines
# tau array
tauprof = np.zeros(len(xarr))
for kk,nuo in enumerate(nuoff):
tauprof_ = (tau_dict[linename] * tau_wts[kk] *
np.exp(-(xarr.value+nuo-lines[kk])**2 /
(2.0*nuwidth[kk]**2)))
tauprof += tauprof_
true_prof += (1-np.exp(-tauprof_))
ax = pl.subplot(4,1,ii+1)
ax.plot(xarr_v, 1 - np.exp(-tauprof), label=str(tau_tot), zorder=20, linewidth=1)
ax.plot(xarr_v, true_prof, label=str(tau_tot), alpha=0.7, linewidth=2)
ax.plot(xarr_v, true_prof-(1-np.exp(-tauprof)) - tau_tot/20, linewidth=1)
pl.title(str(tau_tot))
###Output
_____no_output_____
###Markdown
Below are numerical checks for accuracy Some numerical checks: How bad was the use of Tkin instead of Tex in the $\tau$ equation? $$(N_u + N_l) \frac{g_u}{g_l}\frac{A_{ul}c^2}{8\pi\nu_0^2} \frac{1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} {1 +\frac{g_u}{g_l}\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} $$
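Before looking at the error ratio below, here is a one-off numerical evaluation of the full expression (my own illustration; the column, $T_{ex}$, and Einstein A values are assumptions, and $g_u = g_l$ for the inversion doublet):

```python
# Illustrative evaluation of the integrated-tau expression above; all input
# values are assumed, representative NH3 (1,1) numbers, not fitted quantities.
import numpy as np
from astropy import units as u
from astropy import constants as const

N11 = 1e13 * u.cm**-2            # assumed N_u + N_l of the (1,1) doublet
nu0 = 23.6944955 * u.GHz         # NH3 (1,1) rest frequency
Aul = 1.7e-7 / u.s               # approximate (1,1) Einstein A coefficient
Tex = 7 * u.K
x = (const.h * nu0 / (const.k_B * Tex)).decompose()
tau_int = (N11 * Aul * const.c**2 / (8 * np.pi * nu0**2)
           * (1 - np.exp(-x)) / (1 + np.exp(-x)))   # g_u/g_l = 1
print(tau_int.to(u.Hz))          # integrated optical depth (~1e4 Hz for these inputs)
```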
###Code
from astropy import units as u
from astropy import constants
freq = 23*u.GHz
def tau_wrong(tkin, tex):
return (1-np.exp(-constants.h * freq/(constants.k_B*tkin)))/(1+np.exp(-constants.h * freq/(constants.k_B*tex)))
def tau_right(tex):
return (1-np.exp(-constants.h * freq/(constants.k_B*tex)))/(1+np.exp(-constants.h * freq/(constants.k_B*tex)))
tkin = np.linspace(5,40,101)*u.K
tex = np.linspace(5,40,100)*u.K
grid = np.array([[tau_wrong(tk,tx)/tau_right(tx) for tx in tex] for tk in tkin])
%matplotlib inline
import pylab as pl
pl.imshow(grid, cmap='hot', extent=[5,40,5,40])
pl.xlabel("Tex")
pl.ylabel("Tkin")
pl.colorbar()
pl.contour(tex, tkin, grid, levels=[0.75,1,1/0.75], colors=['w','w','k'])
###Output
_____no_output_____
###Markdown
So the error could be 50%-700% over a somewhat reasonable range. That's bad, and it affects the temperature estimates. However, the effect on temperature estimates should be pretty small, since each line will be affected in the same way. The biggest effect will be on the column density. But, is this error at all balanced by the double-counting problem?Because we were using the partition function directly, it's not obvious. I was assuming that we were using the equation with $N_u$ as the leader, but we were using $N_u+N_l$. i.e., I was using this equation:$$\int \tau d\nu =(N_u + N_l) \frac{g_u}{g_l}\frac{A_{ul}c^2}{8\pi\nu_0^2} \frac{1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} {1 +\frac{g_u}{g_l}\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} $$but with $N_u$ in place of $N_u + N_l$. The magnitude of the error can therefore be estimated by computing $(N_u+N_l)/N_u = 1 + \frac{N_l}{N_u}$. We can use the Boltzmann distribution to compute this error, then:$$ \frac{n_u}{n_l} = \frac{g_u}{g_l}\exp\left(\frac{-h \nu_0}{k_B T}\right)$$
###Code
def nunlnu_error(Tkin):
return 1+np.exp(-constants.h * freq / (constants.k_B * Tkin))
pl.plot(tkin.value, nunlnu_error(tkin))
###Output
_____no_output_____
###Markdown
So we were always off by a factor very close to 2. The *relative* values of $\tau$ should never have been affected by this issue. It will be more work to determine exactly how much the T_K and column estimates were affected. New work in May 2016: T_{rot} Comparing Trot and Tkin. If we start with the equation that governs level populations,$$N_u = N_{tot} \frac{g_u}{Q_{rot}} \exp\left(\frac{-E_u}{k_B T_{kin}}\right)$$we get $$N_u / N_l = \frac{g_u}{g_l} \exp\left(\frac{-E_u}{k_B T_{kin}} + \frac{E_l}{k_B T_{kin}}\right)$$where we really mean $T_{rot}$ instead of $T_{kin}$ here as long as we're talking about just two levels. This gives us a definition $$T_{rot} = \left(\frac{E_l-E_u}{k_B}\right)\left[\ln\left(\frac{N_u g_l}{N_l g_u}\right)\right]^{-1}$$which is the rotational temperature for a two-level system... which is just a $T_{ex}$, but governing non-radiatively-coupled levels. So, for example, if we want to know $T_{rot}$ for the 2-2 and 1-1 lines at $n=10^4$ and $T_{kin}=20$ K:
###Code
from pyradex import Radex
from astropy import constants, units as u
R = Radex(species='p-nh3', column=1e13, collider_densities={'pH2':1e4}, temperature=20)
tbl = R(collider_densities={'ph2': 1e4}, temperature=20, column=1e13)
tbl[8:10]
# we're comparing the upper states since these are the ones that are emitting photons
trot = (u.Quantity(tbl['upperstateenergy'][8]-tbl['upperstateenergy'][9], u.K) *
np.log((tbl['upperlevelpop'][9] * R.upperlevel_statisticalweight[8]) /
(tbl['upperlevelpop'][8] * R.upperlevel_statisticalweight[9]))**-1
)
trot
tbl['Tex'][8:10].mean()
###Output
_____no_output_____
###Markdown
Pause here $T_{rot} = 60$ K for $T_{kin}=25$ K? That doesn't seem right. Is it possible RADEX is doing something funny with level populations? ERIK I SOLVED IT I had left out the $^{-1}$ in the code. Oops!
###Code
dT_oneone = -(constants.h * u.Quantity(tbl['frequency'][8], u.GHz)/constants.k_B).to(u.K)
print("delta-T for 1-1_upper - 1-1_lower: {0}".format(dT_oneone))
tex = (dT_oneone *
np.log((tbl['upperlevelpop'][8] * R.upperlevel_statisticalweight[8]) /
(tbl['lowerlevelpop'][8] * R.upperlevel_statisticalweight[8]))**-1
)
print("Excitation temperature computed is {0} and should be {1}".format(tex.to(u.K), tbl['Tex'][8]))
###Output
delta-T for 1-1_upper - 1-1_lower: -1.1371568105216585 K
Excitation temperature computed is 6.789343356695167 K and should be 6.789360524825584
###Markdown
Moving on: comparison to Swift et al 2005 Swift et al 2005 eqn A6$$T_R = T_K \left[ 1 + \frac{T_K}{T_0} \ln \left[1+0.6\exp\left( -15.7/T_K \right)\right] \right]^{-1}$$where $T_0=41.18$ K
###Code
T0=tbl['upperstateenergy'][9]-tbl['upperstateenergy'][8]
T0
def tr_swift(tk, T0=T0):
return tk*(1+tk/T0 * np.log(1+0.6*np.exp(-15.7/tk)))**-1
###Output
_____no_output_____
###Markdown
Note that the approximation "works" - gets something near 20 - for positive or negative values of T0 (but see below)
###Code
tr_swift(20, T0=-41.18)
tr_swift(20, T0=41.18)
tr_swift(20, T0=41.5)
def trot_radex(column=1e13, density=1e4, tkin=20):
tbl = R(collider_densities={'ph2': density}, temperature=tkin, column=column)
trot = (u.Quantity(tbl['upperstateenergy'][8]-tbl['upperstateenergy'][9], u.K) *
np.log((tbl['upperlevelpop'][9] * R.upperlevel_statisticalweight[8]) /
(tbl['upperlevelpop'][8] * R.upperlevel_statisticalweight[9]))**-1
)
return trot
###Output
_____no_output_____
###Markdown
RADEX suggests that the *positive* T0 value is the correct one (the negative one appeared correct when incorrectly indexed statistical weights were being used)
###Code
trot_radex(tkin=20)
def tex_radex(column=1e13, density=1e4, tkin=20, lineno=8):
""" used in tests below """
tbl = R(collider_densities={'ph2': density}, temperature=tkin, column=column)
return tbl[lineno]['Tex']
%matplotlib inline
import pylab as pl
cols = np.logspace(12,15)
trots = [trot_radex(column=c).to(u.K).value for c in cols]
pl.semilogx(cols, trots)
pl.hlines(tr_swift(20), cols.min(), cols.max(), color='k')
pl.xlabel("Column")
pl.ylabel("$T_{rot} (2-2)/(1-1)$")
densities = np.logspace(3,9)
trots = [trot_radex(density=n).to(u.K).value for n in densities]
pl.semilogx(densities, trots)
pl.hlines(tr_swift(20), densities.min(), densities.max(), color='k')
pl.xlabel("Volume Density")
pl.ylabel("$T_{rot} (2-2)/(1-1)$")
###Output
invalid value encountered in true_divide
###Markdown
This is the plot that really convinces me that the positive (red curve) value of T0 is the appropriate value to use for this approximation
###Code
temperatures = np.linspace(5,40)
trots = [trot_radex(tkin=t).to(u.K).value for t in temperatures]
pl.plot(temperatures, trots)
# wrong pl.plot(temperatures, tr_swift(temperatures, T0=-41.18), color='k')
pl.plot(temperatures, tr_swift(temperatures, T0=41.18), color='r')
pl.xlabel("Temperatures")
pl.ylabel("$T_{rot} (2-2)/(1-1)$")
temperatures = np.linspace(5,40,50)
trots = [trot_radex(tkin=t).to(u.K).value for t in temperatures]
pl.plot(temperatures, np.abs(trots-tr_swift(temperatures, T0=41.18))/trots)
pl.xlabel("Temperatures")
pl.ylabel("$(T_{rot}(\mathrm{RADEX}) - T_{rot}(\mathrm{Swift}))/T_{rot}(\mathrm{RADEX})$")
###Output
invalid value encountered in true_divide
###Markdown
Tests of cold_ammonia reproducing pyspeckit ammonia spectra
###Code
from pyspeckit.spectrum.models.tests import test_ammonia
from pyspeckit.spectrum.models import ammonia
###Output
_____no_output_____
###Markdown
Test 1: Use a constant excitation temperature for all lines
###Code
tkin = 20*u.K
trot = trot_radex(tkin=tkin)
print(trot)
spc = test_ammonia.make_synthspec(lte=False, tkin=None, tex=6.66, trot=trot.value, lines=['oneone','twotwo'])
spc.specfit.Registry.add_fitter('cold_ammonia',ammonia.cold_ammonia_model(),6)
spc.specfit(fittype='cold_ammonia', guesses=[23, 5, 13.1, 1, 0.5, 0],
fixed=[False,False,False,False,False,True])
print("For Tkin={1} -> Trot={2}, pyspeckit's cold_ammonia fitter got:\n{0}".format(spc.specfit.parinfo, tkin, trot))
spc.specfit(fittype='cold_ammonia', guesses=[22.80, 6.6, 13.1, 1, 0.5, 0],
fixed=[False,False,False,False,False,True])
bestfit_coldammonia_temperature = spc.specfit.parinfo[0]
print("The best fit cold ammonia temperature is {0} for an input T_rot={1}".format(bestfit_coldammonia_temperature, trot))
###Output
INFO: Left region selection unchanged. xminpix, xmaxpix: 0,502 [pyspeckit.spectrum.interactive]
The best fit cold ammonia temperature is Param #0 tkin0 = 19.8365 +/- 5.72747e-05 Range:[2.7315,inf) for an input T_rot=17.776914063182385 K
###Markdown
Test 2: Use a different (& appropriate) tex for each level in the input model spectrum If we use the exact tex for each line in the input model, in principle, the resulting fitted temperature should be more accurate. However, at present, it looks dramatically incorrect
###Code
tex11 = tex_radex(tkin=tkin, lineno=8)
tex22 = tex_radex(tkin=tkin, lineno=9)
print("tex11={0}, tex22={1} for tkin={2}, trot={3}".format(tex11,tex22,tkin,trot))
spc = test_ammonia.make_synthspec(lte=False, tkin=None,
tex={'oneone':tex11, 'twotwo':tex22},
trot=trot.value,
lines=['oneone','twotwo'])
spc.specfit.Registry.add_fitter('cold_ammonia',ammonia.cold_ammonia_model(),6)
spc.specfit(fittype='cold_ammonia', guesses=[23, 5, 13.1, 1, 0.5, 0],
fixed=[False,False,False,False,False,True])
print("For Tkin={1} -> Trot={2}, pyspeckit's cold_ammonia fitter got:\n{0}"
.format(spc.specfit.parinfo, tkin, trot))
print("The best fit cold ammonia temperature is {0} for an input T_rot={1}"
.format(bestfit_coldammonia_temperature, trot))
###Output
INFO: Creating spectra [pyspeckit.spectrum.classes]
INFO: Concatenating data [pyspeckit.spectrum.classes]
INFO: Left region selection unchanged. xminpix, xmaxpix: 0,502 [pyspeckit.spectrum.interactive]
###Markdown
Test 3: compare cold_ammonia to "normal" ammonia model to see why they differ In a previous iteration of the ammonia model, there was a big (and incorrect) difference between the synthetic spectra from ammonia and cold_ammonia. This is now something of a regression test for that error, which turned out to be from yet another incorrect indexing of the degeneracy.
###Code
tkin = 20*u.K
trot = trot_radex(tkin=tkin)
dT0=41.18
print(tkin * (1 + (tkin.value/dT0)*np.log(1 + 0.6*np.exp(-15.7/tkin.value)))**-1)
print("tkin={0} trot={1} tex11={2} tex22={3}".format(tkin, trot, tex11, tex22))
spc = test_ammonia.make_synthspec(lte=False, tkin=None,
tex={'oneone':tex11, 'twotwo':tex22},
trot=trot.value,
lines=['oneone','twotwo'])
spc_666 = test_ammonia.make_synthspec(lte=False, tkin=None,
tex=6.66,
trot=trot.value,
lines=['oneone','twotwo'])
# this one is guaranteed different because tex = trot
spc_cold = test_ammonia.make_synthspec_cold(tkin=tkin.value,
lines=['oneone','twotwo'])
spc[0].plotter(linewidth=3, alpha=0.5)
spc_666[0].plotter(axis=spc[0].plotter.axis, clear=False, color='r', linewidth=1, alpha=0.7)
spc_cold[0].plotter(axis=spc[0].plotter.axis, clear=False, color='b', linewidth=1, alpha=0.7)
###Output
_____no_output_____
###Markdown
The red and black look too different to me; they should differ only by a factor of (tex11-6.66)/6.66 or so. Instead, they differ by a factor of 5-6.
###Code
spc[0].data.max(), spc_666[0].data.max()
spc[1].plotter()
spc_666[1].plotter(axis=spc[1].plotter.axis, clear=False, color='r')
spc_cold[1].plotter(axis=spc[1].plotter.axis, clear=False, color='b')
###Output
_____no_output_____
###Markdown
RADEX analysis: T_rot vs T_kin vs T_ex
###Code
temperatures = np.linspace(5,40)
trots = [trot_radex(tkin=t).to(u.K).value for t in temperatures]
tex11s = np.array([tex_radex(tkin=t, lineno=8) for t in temperatures])
tex22s = np.array([tex_radex(tkin=t, lineno=9) for t in temperatures])
pl.plot(trots, tex11s)
pl.plot(trots, tex22s)
#pl.plot(tr_swift(temperatures), color='k')
pl.ylabel("$T_{ex}$")
pl.xlabel("$T_{rot} (2-2)/(1-1)$")
###Output
invalid value encountered in true_divide
###Markdown
Apparently there are some discreteness problems but the ratio changes very little.
###Code
temperatures = np.linspace(5,40)
trots = [trot_radex(tkin=t).to(u.K).value for t in temperatures]
tex11s = np.array([tex_radex(tkin=t, lineno=8) for t in temperatures])
tex22s = np.array([tex_radex(tkin=t, lineno=9) for t in temperatures])
pl.plot(trots, tex11s/tex22s)
#pl.plot(tr_swift(temperatures), color='k')
pl.ylabel("$T_{ex} (2-2)/(1-1)$")
pl.xlabel("$T_{rot} (2-2)/(1-1)$")
###Output
invalid value encountered in true_divide
###Markdown
run pyspeckit tests
###Code
from pyspeckit.spectrum.models.tests import test_ammonia
test_ammonia.test_ammonia_parlimits()
test_ammonia.test_ammonia_parlimits_fails()
test_ammonia.test_cold_ammonia()
test_ammonia.test_self_fit()
###Output
WARNING: No header given. Creating an empty one.
###Markdown
More extensive (& expensive) tests: recovered Tkin 1. Check the recovered temperature as a function of input temperature using RADEX to simulate "real" data
###Code
temperatures = np.array((10,15,20,25,30,35,40))
recovered_tkin = {}
recovered_column = {}
for tkin in temperatures:
tbl = R(collider_densities={'ph2': 1e4}, temperature=tkin, column=1e13)
tex11 = tbl['Tex'][8]
tex22 = tbl['Tex'][9]
trot = (u.Quantity(tbl['upperstateenergy'][8]-tbl['upperstateenergy'][9], u.K) *
np.log((tbl['upperlevelpop'][9] * R.upperlevel_statisticalweight[8]) /
(tbl['upperlevelpop'][8] * R.upperlevel_statisticalweight[9]))**-1
)
spc = test_ammonia.make_synthspec(lte=False, tkin=None,
tex={'oneone':tex11, 'twotwo':tex22},
trot=trot.value,
lines=['oneone','twotwo'])
spc.specfit.Registry.add_fitter('cold_ammonia',ammonia.cold_ammonia_model(),6)
spc.specfit(fittype='cold_ammonia', guesses=[23, 5, 13.1, 1, 0.5, 0],
fixed=[False,False,False,False,False,True])
recovered_tkin[tkin] = spc.specfit.parinfo['tkin0'].value
recovered_column[tkin] = spc.specfit.parinfo['ntot0'].value
pl.xlabel("$T_K$")
pl.ylabel("Fitted $T_K$ from cold_ammonia")
pl.plot(recovered_tkin.keys(), recovered_tkin.values(), 'o')
pl.plot(temperatures, temperatures)
pl.xlabel("$T_K$")
pl.ylabel("$|T_K-T_{fit}|/T_K$")
inp = np.array(list(recovered_tkin.keys()), dtype='float')
rslt = np.array(list(recovered_tkin.values()), dtype='float')
pl.plot(inp, np.abs(rslt-inp)/rslt, 'o')
###Output
_____no_output_____
###Markdown
2. Check the recovery as a function of column density
###Code
pl.xlabel("$N(NH_3)$")
pl.ylabel("Fitted $N(NH_3)$ from cold_ammonia")
pl.plot(recovered_column.keys(), recovered_column.values(), 'o')
pl.plot(temperatures, temperatures*0+13)
###Output
_____no_output_____
###Code
from astropy import units as u
from astropy import constants
freq = 23*u.GHz
def tau_wrong(tkin, tex):
return (1-np.exp(-constants.h * freq/(constants.k_B*tkin)))/(1+np.exp(-constants.h * freq/(constants.k_B*tex)))
def tau_right(tex):
return (1-np.exp(-constants.h * freq/(constants.k_B*tex)))/(1+np.exp(-constants.h * freq/(constants.k_B*tex)))
tkin = np.linspace(5,40,101)*u.K
tex = np.linspace(5,40,100)*u.K
grid = np.array([[tau_wrong(tk,tx)/tau_right(tx) for tx in tex] for tk in tkin])
%matplotlib inline
import pylab as pl
pl.imshow(grid, cmap='hot', extent=[5,40,5,40])
pl.xlabel("Tex")
pl.ylabel("Tkin")
pl.colorbar()
pl.contour(tex, tkin, grid, levels=[0.75,1,1/0.75], colors=['w','w','k'])
###Output
_____no_output_____
###Markdown
So the error could be 50%-700% over a somewhat reasonable range. That's bad, and it affects the temperature estimates. However, the effect on temperature estimates should be pretty small, since each line will be affected in the same way. The biggest effect will be on the column density. But, is this error at all balanced by the double-counting problem?Because we were using the partition function directly, it's not obvious. I was assuming that we were using the equation with $N_u$ as the leader, but we were using $N_u+N_l$. i.e., I was using this equation:$$\int \tau d\nu =(N_u + N_l) \frac{g_u}{g_l}\frac{A_{ul}c^2}{8\pi\nu_0^2} \frac{1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} {1 +\frac{g_u}{g_l}\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} $$but with $N_u$ in place of $N_u + N_l$. The magnitude of the error can therefore be estimated by computing $(N_u+N_l)/N_u = 1 + \frac{N_l}{N_u}$. We can use the Boltzmann distribution to compute this error, then:$$ \frac{n_u}{n_l} = \frac{g_u}{g_l}\exp\left(\frac{-h \nu_0}{k_B T}\right)$$
###Code
def nunlnu_error(Tkin):
return 1+np.exp(-constants.h * freq / (constants.k_B * Tkin))
pl.plot(tkin.value, nunlnu_error(tkin))
###Output
_____no_output_____
###Markdown
Notes on the Ammonia model Exact equation from the [source code](https://github.com/pyspeckit/pyspeckit/blob/master/pyspeckit/spectrum/models/ammonia.pyL210): population_upperstate = lin_ntot * orthoparafrac * partition/(Z.sum()) tau_dict[linename] = (population_upperstate / (1. + np.exp(-h*frq/(kb*tkin) ))*ccms**2 / (8*np.pi*frq**2) * aval * (1-np.exp(-h*frq/(kb*tex))) / (width/ckms*frq*np.sqrt(2*np.pi)) ) \begin{equation} \tau = N_{tot} g_{opr} Z_{upper} \frac{A_{ij} c^2}{8\pi\nu^2} \left(1-\exp{ \frac{-h \nu}{k_B T_{ex}} } \right) \left(1+\exp{\frac{-h \nu}{k_B T_K}}\right) \left((2\pi)^{1/2} \nu \sigma_\nu / c\right)^{-1}\end{equation} Equation 16 from Rosolowsky et al 2008:$$N(1,1) = \frac{8 \pi k \nu_0^2}{h c^3} \frac{1}{A_{1,1}} \sqrt{2\pi}\sigma_\nu (T_{ex}-T_{bg})\tau$$ Rearranges to:$$\tau = N(1,1) \frac{h c^3}{8\pi k \nu_0^2} A_{1,1} \frac{1}{\sqrt{2 \pi} \sigma_\nu} \left(T_{ex}-T_{bg}\right)^{-1}$$ Equation A4 of Friesen et al 2009:$$N(1,1) = \frac{8\pi\nu^2}{c^2} \frac{g_1}{g_2} \frac{1}{A_{1,1}} \frac{1+\exp\left(-h\nu_0/k_B T_{ex}\right)}{1-\exp\left(-h \nu_0/k_B T_{ex}\right)} \int \tau(\nu) d\nu$$ Equation 98 of Mangum & Shirley 2015:$$N_{tot} = \frac{3 h}{8 \pi \mu^2 R_i} \frac{J_u(J_u+1)}{K^2} \frac{Q_{rot}}{g_J g_K g_I} \frac{\exp{E_u/k_B T_{ex}}}{\exp{h \nu/k_B T_{ex}} - 1} \left[\frac{\int T_R dv}{f\left(J_\nu(T_{ex})-J_\nu{T_B}\right) }\right]$$ From Scratch $$\tau_\nu = \int \alpha_\nu ds$$$$\alpha_\nu = \frac{c^2}{8\pi\nu_0^2} \frac{g_u}{g_l} n_l A_{ul} \left(1-\frac{g_l n_u}{g_u n_l}\right) \phi_\nu$$ Excitation temperature:$$T_{ex} \equiv \frac{h\nu_0/k_b}{\ln \frac{n_l g_u}{n_u g_l} } $$$\nu_0$ = rest frequency of the lineRearranges to:$$ \frac{n_l g_u}{n_u g_l} = \exp\left(\frac{h \nu_0}{k_B T_{ex}}\right)$$ Boltzman distribution:$$ \frac{n_u}{n_l} = \frac{g_u}{g_l} \exp\left(\frac{-h \nu_0}{k_B T}\right)$$where T is a thermal equilibrium temperature Rearranges to:$$ 1-\frac{n_u g_l}{n_l g_u} = 1-\exp\left(\frac{-h \nu_0}{k_B T}\right)$$ Column Density $$N_u \equiv \int n_u ds$$$$N_l \equiv \int n_l ds$$ Starting to substitute previous equations into each other:$$\tau_\nu d\nu= \alpha_\nu d\nu = \frac{c^2}{8\pi\nu_0^2} \frac{g_u}{g_l} n_l A_{ul} \left(1-\frac{g_l n_u}{g_u n_l}\right) \phi_\nu d\nu$$$$\frac{g_u}{g_l}N_l = N_u\exp\left(\frac{h \nu_0}{k_B T_{ex}}\right)$$ First substitution is the Boltzmann distribution, with $T_{ex}$ for T$$\int \tau_\nu d\nu = \int \frac{c^2}{8\pi\nu_0^2} \frac{g_u}{g_l} n_l A_{ul} \left[ 1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right) \right] \phi_\nu d\nu $$ Second is the $N_l$ - $N_u$ relation:$$\int \tau_\nu d\nu = \frac{c^2}{8\pi\nu_0^2} A_{ul} N_u\left[\exp\left(\frac{h \nu_0}{k_B T_{ex}}\right)\right] \left[ 1-\exp\left(\frac{-h \nu_0}{k_B T}\right) \right] \int \phi_\nu d\nu $$ Then some simplification:$$\int \tau_\nu d\nu = \frac{c^2}{8\pi\nu_0^2} A_{ul} N_u \left[ \exp\left(\frac{h \nu_0}{k_B T}\right) - 1 \right] \int \phi_\nu d\nu $$ $$A_{ul} = \frac{64\pi^4\nu_0^3}{3 h c^3} \left|\mu_{lu}\right|^2$$ Becomes, via some manipulation, equation 29 of Mangum & Shirley 2015:$$N_u = \frac{3 h c}{8\pi^3 \nu \left|\mu_{lu}\right|^2} \left[\exp\left(\frac{h\nu}{k_B T_{ex}}\right) -1\right]^{-1} \int \tau_\nu d\nu$$where I have used $T_{ex}$ instead of $T$ here because that is one of the substitutions invoked (quietly) in their derivation. 
There is some sleight-of-hand regarding assuming $N_l = n_l$ that essentially assumes $T_{ex}$ is constant along the line of sight, but that is fine.(Equation 30 is the same as this one, but with $dv$ instead of $d\nu$ units) Solve for tau again (because that's what's implemented in the code):$$\mathrm{"tau"} = \int \tau_\nu d\nu = N_u \frac{c^2 A_{ul}}{8\pi\nu_0^2} \left[\exp\left(\frac{h\nu}{k_B T_{ex}}\right) -1\right] $$ The key difference from Erik's derivation is that this is $N_u$, but he has defined $N_{(1,1)}= N_u + N_l$. So, we get $N_l$ the same way as above:$$N_l = \frac{8\pi\nu_0^2}{c^2} \frac{g_l}{g_u} A_{ul}^{-1} \left[ 1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right) \right]^{-1} \int \tau d\nu$$ $$N_l = \frac{3 h c}{8 \pi^3 \nu \left|\mu_{lu}\right|^2} \frac{g_l}{g_u} \left[ 1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right) \right]^{-1} \int \tau d\nu$$ Added together:$$N_u + N_l = \frac{3 h c}{8 \pi^3 \nu \left|\mu_{lu}\right|^2} \frac{\frac{g_l}{g_u} +\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)}{1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} \int \tau d\nu$$ We can solve that back for tau, which is what Erik has done:$$\int \tau d\nu = (N_u + N_l) \frac{8 \pi^3 \nu \left|\mu_{lu}\right|^2}{3 h c} \frac{1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} {\frac{g_l}{g_u} +\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} $$$$=(N_u + N_l) \frac{g_u}{g_l}\frac{8 \pi^3 \nu \left|\mu_{lu}\right|^2}{3 h c} \frac{1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} {1 +\frac{g_u}{g_l}\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} $$ $$=(N_u + N_l) \frac{g_u}{g_l}\frac{A_{ul}c^2}{8\pi\nu_0^2} \frac{1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} {1 +\frac{g_u}{g_l}\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} $$ now identical to Erik's equation. This is actually a problem, because $N_u$ is related to $N_{tot}$ via the partition function, but there is some double-counting going on if we try to relate $N_{(1,1)}$ to $N_{tot}$ with the same equation. So, to reformulate the equations in pyspeckit using the appropriate values, we want to use both the partition function (calculated using $T_{kin}$) and $N_u$. Eqn 31: $$N_u = N_{tot} \frac{g_u}{Q_{rot}} \exp\left(\frac{-E_u}{k_B T_{kin}}\right)$$ is implemented correctly in pyspeckit: population_upperstate = lin_ntot * orthoparafrac * partition/(Z.sum())where ``partition`` is $$Z_i(\mathrm{para}) = (2J + 1) \exp\left[ \frac{ -h (B_0 J (J+1) + (C_0-B_0)J^2)}{k_B T_{kin}}\right]$$$$Z_i(\mathrm{ortho}) = 2(2J + 1) \exp\left[ \frac{ -h (B_0 J (J+1) + (C_0-B_0)J^2)}{k_B T_{kin}}\right]$$...so I'm assuming (haven't checked) that $E_u = h (B_0 J (J+1) + (C_0-B_0)J^2)$ Note that the leading "2" above cancels out in the Z/sum(Z), so it doesn't matter if it's right or not. I suspect, though, that the 2 belongs in front of both the para and ortho states, but it should be excluded for the J=0 case. An aside by Erik Rosolowsky (Note May 16, 2018: I believe this was incorporated into the above analysis)EWR: The above equation is problematic because it relates the total column density to the $(J,J)$ state which is the equivalent of the $N_{(1,1)}$ term. 
In the notation above $N_{(1,1)} = N_u + N_l$, so to get this right, you need to consider the inversion transition splitting on top of the total energy of the state so that $$ E_u = h (B_0 J (J+1) + (C_0-B_0)J^2) + \Delta E_{\mathrm{inv}}, g_u = 1 $$ and $$ E_l = h (B_0 J (J+1) + (C_0-B_0)J^2) - \Delta E_{\mathrm{inv}}, g_l = 1 $$ or, since the splitting is small compared to the rotational energy (1 K compared to > 20 K), then$$Z_J \approx 2 (2J + 1) \exp\left[ \frac{ -h (B_0 J (J+1) + (C_0-B_0)J^2)}{k_B T_{\mathrm{rot}}}\right]$$where the leading 2 accounts for the internal inversion states. Since this 2 appears in all the terms, it cancels out in the sum. Note that I have also changed the $T_{\mathrm{kin}}$ to $T_{\mathrm{rot}}$ since these two aren't the same and it is the latter which establishes the level populations.Returning to the above, I would then suggest $$N_{(J,J)} = N_{tot} \frac{Z_J}{\sum_j Z_j} $$ Is the treatment of optical depth correct? May 16, 2018: https://github.com/pyspeckit/pyspeckit/blob/725746f517e9bdcc22b83f4f9d6c9b8666e0a99e/pyspeckit/spectrum/models/ammonia.pyIn this version, we [compute the optical depth](https://github.com/pyspeckit/pyspeckit/blob/725746f517e9bdcc22b83f4f9d6c9b8666e0a99e/pyspeckit/spectrum/models/ammonia.pyL318) with the code:``` for kk,nuo in enumerate(nuoff): tauprof_ = (tau_dict[linename] * tau_wts[kk] * np.exp(-(xarr.value+nuo-lines[kk])**2 / (2.0*nuwidth[kk]**2))) if return_components: components.append(tauprof_) tauprof += tauprof_``` The total tau is normalized such that $\Sigma(\tau_{hf})_{hf} = \tau_{tot}$ for each line, i.e., the hyperfine $\tau$s sum to the tau value specified for the line.The question Nico raised is, should we be computing the synthetic spectrum as $1-e^{\Sigma(\tau_{hf,\nu})}$ or $\Sigma(1-e^{\tau_{hf,\nu}})$?The former is correct: we only have one optical depth per frequency bin. It doesn't matter what line the optical depth comes from.
###Code
# This is a test to show what happens if you add lines vs. computing a single optical depth per channel
from pyspeckit.spectrum.models.ammonia_constants import (line_names, freq_dict, aval_dict, ortho_dict,
voff_lines_dict, tau_wts_dict)
from astropy import constants
from astropy import units as u
import pylab as pl
linename = 'oneone'
xarr_v = (np.linspace(-25,25,1000)*u.km/u.s)
xarr = xarr_v.to(u.GHz, u.doppler_radio(freq_dict['oneone']*u.Hz))
tauprof = np.zeros(xarr.size)
true_prof = np.zeros(xarr.size)
width = 0.1
xoff_v = 0
ckms = constants.c.to(u.km/u.s).value
pl.figure(figsize=(12,12))
pl.clf()
for ii,tau_tot in enumerate((0.001, 0.1, 1, 10,)):
tau_dict = {'oneone':tau_tot}
voff_lines = np.array(voff_lines_dict[linename])
tau_wts = np.array(tau_wts_dict[linename])
lines = (1-voff_lines/ckms)*freq_dict[linename]/1e9
tau_wts = tau_wts / (tau_wts).sum()
nuwidth = np.abs(width/ckms*lines)
nuoff = xoff_v/ckms*lines
# tau array
tauprof = np.zeros(len(xarr))
for kk,nuo in enumerate(nuoff):
tauprof_ = (tau_dict[linename] * tau_wts[kk] *
np.exp(-(xarr.value+nuo-lines[kk])**2 /
(2.0*nuwidth[kk]**2)))
tauprof += tauprof_
true_prof += (1-np.exp(-tauprof_))
ax = pl.subplot(4,1,ii+1)
ax.plot(xarr_v, 1 - np.exp(-tauprof), label=str(tau_tot), zorder=20, linewidth=1)
ax.plot(xarr_v, true_prof, label=str(tau_tot), alpha=0.7, linewidth=2)
ax.plot(xarr_v, true_prof-(1-np.exp(-tauprof)) - tau_tot/20, linewidth=1)
pl.title(str(tau_tot))
###Output
_____no_output_____
###Markdown
Below are numerical checks for accuracy Some numerical checks: How bad was the use of Tkin instead of Tex in the $\tau$ equation? $$(N_u + N_l) \frac{g_u}{g_l}\frac{A_{ul}c^2}{8\pi\nu_0^2} \frac{1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} {1 +\frac{g_u}{g_l}\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} $$
###Code
from astropy import units as u
from astropy import constants
freq = 23*u.GHz
def tau_wrong(tkin, tex):
return (1-np.exp(-constants.h * freq/(constants.k_B*tkin)))/(1+np.exp(-constants.h * freq/(constants.k_B*tex)))
def tau_right(tex):
return (1-np.exp(-constants.h * freq/(constants.k_B*tex)))/(1+np.exp(-constants.h * freq/(constants.k_B*tex)))
tkin = np.linspace(5,40,101)*u.K
tex = np.linspace(5,40,100)*u.K
grid = np.array([[tau_wrong(tk,tx)/tau_right(tx) for tx in tex] for tk in tkin])
%matplotlib inline
import pylab as pl
pl.imshow(grid, cmap='hot', extent=[5,40,5,40])
pl.xlabel("Tex")
pl.ylabel("Tkin")
pl.colorbar()
pl.contour(tex, tkin, grid, levels=[0.75,1,1/0.75], colors=['w','w','k'])
###Output
_____no_output_____
###Markdown
So the error could be 50%-700% over a somewhat reasonable range. That's bad, and it affects the temperature estimates. However, the effect on temperature estimates should be pretty small, since each line will be affected in the same way. The biggest effect will be on the column density. But, is this error at all balanced by the double-counting problem?Because we were using the partition function directly, it's not obvious. I was assuming that we were using the equation with $N_u$ as the leader, but we were using $N_u+N_l$. i.e., I was using this equation:$$\int \tau d\nu =(N_u + N_l) \frac{g_u}{g_l}\frac{A_{ul}c^2}{8\pi\nu_0^2} \frac{1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} {1 +\frac{g_u}{g_l}\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} $$but with $N_u$ in place of $N_u + N_l$. The magnitude of the error can therefore be estimated by computing $(N_u+N_l)/N_u = 1 + \frac{N_l}{N_u}$. We can use the Boltzmann distribution to compute this error, then:$$ \frac{n_u}{n_l} = \frac{g_u}{g_l}\exp\left(\frac{-h \nu_0}{k_B T}\right)$$
###Code
def nunlnu_error(Tkin):
return 1+np.exp(-constants.h * freq / (constants.k_B * Tkin))
pl.plot(tkin.value, nunlnu_error(tkin))
###Output
_____no_output_____
###Markdown
So we were always off by a factor very close to 2. The *relative* values of $\tau$ should never have been affected by this issue. It will be more work to determine exactly how much the T_K and column estimates were affected. New work in May 2016: T_{rot} Comparing Trot and Tkin. If we start with the equation that governs level populations,$$N_u = N_{tot} \frac{g_u}{Q_{rot}} \exp\left(\frac{-E_u}{k_B T_{kin}}\right)$$we get $$N_u / N_l = \frac{g_u}{g_l} \exp\left(\frac{-E_u}{k_B T_{kin}} + \frac{E_l}{k_B T_{kin}}\right)$$where we really mean $T_{rot}$ instead of $T_{kin}$ here as long as we're talking about just two levels. This gives us a definition $$T_{rot} = \left(\frac{E_l-E_u}{k_B}\right)\left[\ln\left(\frac{N_u g_l}{N_l g_u}\right)\right]^{-1}$$which is the rotational temperature for a two-level system... which is just a $T_{ex}$, but governing non-radiatively-coupled levels. So, for example, if we want to know $T_{rot}$ for the 2-2 and 1-1 lines at $n=10^4$ and $T_{kin}=20$ K:
###Code
from pyradex import Radex
from astropy import constants, units as u
R = Radex(species='p-nh3', column=1e13, collider_densities={'pH2':1e4}, temperature=20)
tbl = R(collider_densities={'ph2': 1e4}, temperature=20, column=1e13)
tbl[8:10]
# we're comparing the upper states since these are the ones that are emitting photons
trot = (u.Quantity(tbl['upperstateenergy'][8]-tbl['upperstateenergy'][9], u.K) *
np.log((tbl['upperlevelpop'][9] * R.upperlevel_statisticalweight[8]) /
(tbl['upperlevelpop'][8] * R.upperlevel_statisticalweight[9]))**-1
)
trot
tbl['Tex'][8:10].mean()
###Output
_____no_output_____
###Markdown
Pause here $T_{rot} = 60$ K for $T_{kin}=25$ K? That doesn't seem right. Is it possible RADEX is doing something funny with level populations? ERIK I SOLVED IT I had left out the $^{-1}$ in the code. Oops!
###Code
dT_oneone = -(constants.h * u.Quantity(tbl['frequency'][8], u.GHz)/constants.k_B).to(u.K)
print("delta-T for 1-1_upper - 1-1_lower: {0}".format(dT_oneone))
tex = (dT_oneone *
np.log((tbl['upperlevelpop'][8] * R.upperlevel_statisticalweight[8]) /
(tbl['lowerlevelpop'][8] * R.upperlevel_statisticalweight[8]))**-1
)
print("Excitation temperature computed is {0} and should be {1}".format(tex.to(u.K), tbl['Tex'][8]))
###Output
delta-T for 1-1_upper - 1-1_lower: -1.1371564340528206 K
Excitation temperature computed is 6.789341109004981 K and should be 6.789360524825584
###Markdown
Moving on: comparison to Swift et al 2005 Swift et al 2005 eqn A6$$T_R = T_K \left[ 1 + \frac{T_K}{T_0} \ln \left[1+0.6\exp\left( -15.7/T_K \right)\right] \right]^{-1}$$where $T_0=41.18$ K
###Code
T0=tbl['upperstateenergy'][9]-tbl['upperstateenergy'][8]
T0
def tr_swift(tk, T0=T0):
return tk*(1+tk/T0 * np.log(1+0.6*np.exp(-15.7/tk)))**-1
###Output
_____no_output_____
###Markdown
Note that the approximation "works" - gets something near 20 - for positive or negative values of T0 (but see below)
###Code
tr_swift(20, T0=-41.18)
tr_swift(20, T0=41.18)
tr_swift(20, T0=41.5)
def trot_radex(column=1e13, density=1e4, tkin=20):
tbl = R(collider_densities={'ph2': density}, temperature=tkin, column=column)
trot = (u.Quantity(tbl['upperstateenergy'][8]-tbl['upperstateenergy'][9], u.K) *
np.log((tbl['upperlevelpop'][9] * R.upperlevel_statisticalweight[8]) /
(tbl['upperlevelpop'][8] * R.upperlevel_statisticalweight[9]))**-1
)
return trot
###Output
_____no_output_____
###Markdown
RADEX suggests that the *positive* T0 value is the correct one (the negative one appeared correct when incorrectly indexed statistical weights were being used)
###Code
trot_radex(tkin=20)
def tex_radex(column=1e13, density=1e4, tkin=20, lineno=8):
""" used in tests below """
tbl = R(collider_densities={'ph2': density}, temperature=tkin, column=column)
return tbl[lineno]['Tex']
%matplotlib inline
import pylab as pl
cols = np.logspace(12,15)
trots = [trot_radex(column=c).to(u.K).value for c in cols]
pl.semilogx(cols, trots)
pl.hlines(tr_swift(20), cols.min(), cols.max(), color='k')
pl.xlabel("Column")
pl.ylabel("$T_{rot} (2-2)/(1-1)$")
densities = np.logspace(3,9)
trots = [trot_radex(density=n).to(u.K).value for n in densities]
pl.semilogx(densities, trots)
pl.hlines(tr_swift(20), densities.min(), densities.max(), color='k')
pl.xlabel("Volume Density")
pl.ylabel("$T_{rot} (2-2)/(1-1)$")
###Output
_____no_output_____
###Markdown
This is the plot that really convinces me that the positive value of T0 (the red curve) is the appropriate value to use for this approximation; the negative-T0 curve is left commented out below, since it only appeared correct when the incorrectly indexed statistical weights were being used
###Code
temperatures = np.linspace(5,40)
trots = [trot_radex(tkin=t).to(u.K).value for t in temperatures]
pl.plot(temperatures, trots)
# wrong pl.plot(temperatures, tr_swift(temperatures, T0=-41.18), color='k')
pl.plot(temperatures, tr_swift(temperatures, T0=41.18), color='r')
pl.xlabel("Temperatures")
pl.ylabel("$T_{rot} (2-2)/(1-1)$")
temperatures = np.linspace(5,40,50)
trots = [trot_radex(tkin=t).to(u.K).value for t in temperatures]
pl.plot(temperatures, np.abs(trots-tr_swift(temperatures, T0=41.18))/trots)
pl.xlabel("Temperatures")
pl.ylabel("$(T_{rot}(\mathrm{RADEX}) - T_{rot}(\mathrm{Swift}))/T_{rot}(\mathrm{RADEX})$")
###Output
_____no_output_____
###Markdown
Tests of cold_ammonia reproducing pyspeckit ammonia spectra
###Code
from pyspeckit.spectrum.models.tests import test_ammonia
from pyspeckit.spectrum.models import ammonia
###Output
_____no_output_____
###Markdown
Test 1: Use a constant excitation temperature for all lines
###Code
tkin = 20*u.K
trot = trot_radex(tkin=tkin)
print(trot)
spc = test_ammonia.make_synthspec(lte=False, tkin=None, tex=6.66, trot=trot.value, lines=['oneone','twotwo'])
spc.specfit.Registry.add_fitter('cold_ammonia',ammonia.cold_ammonia_model(),6)
spc.specfit(fittype='cold_ammonia', guesses=[23, 5, 13.1, 1, 0.5, 0],
fixed=[False,False,False,False,False,True])
print("For Tkin={1} -> Trot={2}, pyspeckit's cold_ammonia fitter got:\n{0}".format(spc.specfit.parinfo, tkin, trot))
spc.specfit(fittype='cold_ammonia', guesses=[22.80, 6.6, 13.1, 1, 0.5, 0],
fixed=[False,False,False,False,False,True])
bestfit_coldammonia_temperature = spc.specfit.parinfo[0]
print("The best fit cold ammonia temperature is {0} for an input T_rot={1}".format(bestfit_coldammonia_temperature, trot))
###Output
INFO: Left region selection unchanged. xminpix, xmaxpix: 0,502 [pyspeckit.spectrum.interactive]
The best fit cold ammonia temperature is Param #0 tkin0 = 19.8365 +/- 8.39537e-05 Range:[2.7315,inf) for an input T_rot=17.776914063182385 K
###Markdown
Test 2: Use a different (& appropriate) tex for each level in the input model spectrum If we use the exact tex for each line in the input model, in principle, the resulting fitted temperature should be more accurate. However, at present, it looks dramatically incorrect
###Code
tex11 = tex_radex(tkin=tkin, lineno=8)
tex22 = tex_radex(tkin=tkin, lineno=9)
print("tex11={0}, tex22={1} for tkin={2}, trot={3}".format(tex11,tex22,tkin,trot))
spc = test_ammonia.make_synthspec(lte=False, tkin=None,
tex={'oneone':tex11, 'twotwo':tex22},
trot=trot.value,
lines=['oneone','twotwo'])
spc.specfit.Registry.add_fitter('cold_ammonia',ammonia.cold_ammonia_model(),6)
spc.specfit(fittype='cold_ammonia', guesses=[23, 5, 13.1, 1, 0.5, 0],
fixed=[False,False,False,False,False,True])
print("For Tkin={1} -> Trot={2}, pyspeckit's cold_ammonia fitter got:\n{0}"
.format(spc.specfit.parinfo, tkin, trot))
print("The best fit cold ammonia temperature is {0} for an input T_rot={1}"
.format(bestfit_coldammonia_temperature, trot))
###Output
INFO: Creating spectra [pyspeckit.spectrum.classes]
INFO: Concatenating data [pyspeckit.spectrum.classes]
INFO: Left region selection unchanged. xminpix, xmaxpix: 0,502 [pyspeckit.spectrum.interactive]
###Markdown
Test 3: compare cold_ammonia to "normal" ammonia model to see why they differ In a previous iteration of the ammonia model, there was a big (and incorrect) difference between the synthetic spectra from ammonia and cold_ammonia. This is now something of a regression test for that error, which turned out to be from yet another incorrect indexing of the degeneracy.
###Code
tkin = 20*u.K
trot = trot_radex(tkin=tkin)
dT0=41.18
print(tkin * (1 + (tkin.value/dT0)*np.log(1 + 0.6*np.exp(-15.7/tkin.value)))**-1)
print("tkin={0} trot={1} tex11={2} tex22={3}".format(tkin, trot, tex11, tex22))
spc = test_ammonia.make_synthspec(lte=False, tkin=None,
tex={'oneone':tex11, 'twotwo':tex22},
trot=trot.value,
lines=['oneone','twotwo'])
spc_666 = test_ammonia.make_synthspec(lte=False, tkin=None,
tex=6.66,
trot=trot.value,
lines=['oneone','twotwo'])
# this one is guaranteed different because tex = trot
spc_cold = test_ammonia.make_synthspec_cold(tkin=tkin.value,
lines=['oneone','twotwo'])
spc[0].plotter(linewidth=3, alpha=0.5)
spc_666[0].plotter(axis=spc[0].plotter.axis, clear=False, color='r', linewidth=1, alpha=0.7)
spc_cold[0].plotter(axis=spc[0].plotter.axis, clear=False, color='b', linewidth=1, alpha=0.7)
###Output
Passing the drawstyle with the linestyle as a single string is deprecated since Matplotlib 3.1 and support will be removed in 3.3; please pass the drawstyle separately using the drawstyle keyword argument to Line2D or set_drawstyle() method (or ds/set_ds()).
###Markdown
The red and black look too different to me; they should differ only by a factor of (tex11-6.66)/6.66 or so. Instead, they differ by a factor of 5-6.
###Code
spc[0].data.max(), spc_666[0].data.max()
spc[1].plotter()
spc_666[1].plotter(axis=spc[1].plotter.axis, clear=False, color='r')
spc_cold[1].plotter(axis=spc[1].plotter.axis, clear=False, color='b')
###Output
_____no_output_____
###Markdown
RADEX analysis: T_rot vs T_kin vs T_ex
###Code
temperatures = np.linspace(5,40)
trots = [trot_radex(tkin=t).to(u.K).value for t in temperatures]
tex11s = np.array([tex_radex(tkin=t, lineno=8) for t in temperatures])
tex22s = np.array([tex_radex(tkin=t, lineno=9) for t in temperatures])
pl.plot(trots, tex11s)
pl.plot(trots, tex22s)
#pl.plot(tr_swift(temperatures), color='k')
pl.ylabel("$T_{ex}$")
pl.xlabel("$T_{rot} (2-2)/(1-1)$")
###Output
invalid value encountered in true_divide
overflow encountered in exp
The inputs to `brightness_temperature` have changed. Frequency is now the first input, and angular area is the second, optional input.
###Markdown
Apparently there are some discreteness problems but the ratio changes very little.
###Code
temperatures = np.linspace(5,40)
trots = [trot_radex(tkin=t).to(u.K).value for t in temperatures]
tex11s = np.array([tex_radex(tkin=t, lineno=8) for t in temperatures])
tex22s = np.array([tex_radex(tkin=t, lineno=9) for t in temperatures])
pl.plot(trots, tex11s/tex22s)
#pl.plot(tr_swift(temperatures), color='k')
pl.ylabel("$T_{ex} (2-2)/(1-1)$")
pl.xlabel("$T_{rot} (2-2)/(1-1)$")
###Output
_____no_output_____
###Markdown
run pyspeckit tests
###Code
from pyspeckit.spectrum.models.tests import test_ammonia
test_ammonia.test_ammonia_parlimits()
test_ammonia.test_ammonia_parlimits_fails()
test_ammonia.test_cold_ammonia()
test_ammonia.test_self_fit()
###Output
WARNING: No header given. Creating an empty one.
###Markdown
More extensive (& expensive) tests: recovered Tkin 1. Check the recovered temperature as a function of input temperature using RADEX to simulate "real" data
###Code
temperatures = np.array((10,15,20,25,30,35,40))
recovered_tkin = {}
recovered_column = {}
for tkin in temperatures:
tbl = R(collider_densities={'ph2': 1e4}, temperature=tkin, column=1e13)
tex11 = tbl['Tex'][8]
tex22 = tbl['Tex'][9]
trot = (u.Quantity(tbl['upperstateenergy'][8]-tbl['upperstateenergy'][9], u.K) *
np.log((tbl['upperlevelpop'][9] * R.upperlevel_statisticalweight[8]) /
(tbl['upperlevelpop'][8] * R.upperlevel_statisticalweight[9]))**-1
)
spc = test_ammonia.make_synthspec(lte=False, tkin=None,
tex={'oneone':tex11, 'twotwo':tex22},
trot=trot.value,
lines=['oneone','twotwo'])
spc.specfit.Registry.add_fitter('cold_ammonia',ammonia.cold_ammonia_model(),6)
spc.specfit(fittype='cold_ammonia', guesses=[23, 5, 13.1, 1, 0.5, 0],
fixed=[False,False,False,False,False,True])
recovered_tkin[tkin] = spc.specfit.parinfo['tkin0'].value
recovered_column[tkin] = spc.specfit.parinfo['ntot0'].value
pl.xlabel("$T_K$")
pl.ylabel("Fitted $T_K$ from cold_ammonia")
pl.plot(list(recovered_tkin.keys()), list(recovered_tkin.values()), 'o')  # list() so the dict views plot correctly in Python 3
pl.plot(temperatures, temperatures)
pl.xlabel("$T_K$")
pl.ylabel("$|T_K-T_{fit}|/T_K$")
inp = np.array(list(recovered_tkin.keys()), dtype='float')
rslt = np.array(list(recovered_tkin.values()), dtype='float')
pl.plot(inp, np.abs(rslt-inp)/rslt, 'o')
###Output
_____no_output_____
###Markdown
2. Check the recovered column density (the input column was fixed at $10^{13}$ in the runs above, so the fitted value should stay near 13 at every temperature)
###Code
pl.xlabel("$N(NH_3)$")
pl.ylabel("Fitted $N(NH_3)$ from cold_ammonia")
pl.plot(list(recovered_column.keys()), list(recovered_column.values()), 'o')  # list() so the dict views plot correctly in Python 3
pl.plot(temperatures, temperatures*0+13)
###Output
_____no_output_____ |
4_Optimization/Genetic_Algorithm/Simple Genetic Algorithm.ipynb | ###Markdown
Exploratory Computing with Python*Developed by David B. Steffelbauer and Mark Bakker* Notebook xx: Optimization with Genetic Algorithms IntroductionIn this notebook we will learn how to find minima in arbitrary functions. Finding minima is a field in mathematics that is called [Mathematical Optimization](https://en.wikipedia.org/wiki/Mathematical_optimization). Optimization of functions is of high importance in various fields ranging from finance to engineering. For example, airline companies have to schedule flights and airplanes in an optimal way to minimise costs, delivery companies have to find the shortest path between their customers and investors seek to minimise their risk while optimising their profit. All these problems are optimisation problems.Not only engineers, but also nature itself optimises. The principle behind the evolution of species is that individuals that are best adapted to the environment are more likely to survive. This is called the survival of the fittest. Increasing the adaptation of species to their environment can be seen as optimising the fitness of a species. All of the information about an individual is encoded in its genes, which can be seen as input parameters of functions. Better adaptation leads to a higher chance of breeding with other individuals of the same species and passing their fit genes to the next generation. We will learn throughout this notebook how the rules of evolution can be translated into computer code and how the underlying mechanisms can be used to find minima in functions.Note: Make sure that all graphs that you produce include labels along the horizontal and vertical axes, a title and, if you are plotting multiple things in one graph, a legend.
###Code
import numpy as np
from matplotlib import pyplot as plt
###Output
_____no_output_____
###Markdown
The Rastrigin function We will try to find an optimum in a very nasty function, the [Rastrigin Function](https://en.wikipedia.org/wiki/Rastrigin_function). The Rastrigin function is widely used as a test problem for optimization algorithms. The function is a multi-modal function which means that it possesses many local optima and has one global optimum. It is defined in $D$ dimensions as follows\begin{align}f(\mathbf{x}) \ = \ a \cdot D + \sum_{i=1}^{D} \left(x_{i}^{2} - a \cdot \cos (2 \pi x_i) \right)\end{align}and has, besides its many local minima, a global minimum $\mathbf{x}^\ast$ at \begin{align}\mathbf{x}^\ast = \mathbf{0} \quad \text{with} \quad f(\mathbf{x}^\ast) = 0\end{align}This global minimum is the minimum we want to find.The gradient (first derivative) results in \begin{align}\frac{\partial f(\mathbf{x})}{\partial x_i} = 2 x_i + 2 \pi a \sin \left(2 \pi x_i\right)\end{align}Setting the gradient to zero, we can see that the function has an infinite number of local optima approximately at \begin{align} x_i = \frac{n_i}{2} \quad \text{with} \quad n_i \in \mathbb{Z} \quad \forall \ i \end{align}(for $a=10$ the cosine term dominates the gradient, so the critical points lie very close to these half-integer values). The Hessian calculation leads to\begin{equation} \frac{\partial^2 f(\mathbf{x})}{\partial x_i \partial x_j} = \begin{cases} \text{if } i = j: &2 + 4 \pi^2 a \cos \left( 2 \pi x_i\right) \\ \text{if } i\neq j: &\qquad \quad 0 \end{cases}\end{equation}The Hessian is a sparse, diagonal matrix. The determinant of the Hessian is positive if $n_i$ is even, leading to a local minimum, whereas if $n_i$ is odd, the determinant is negative and a local maximum results. Exercise 1. The Rastrigin functionThe first exercise is to implement the Rastrigin function in arbitrary dimensions. Set $a$ to 10. Use a vector of arbitrary length as input parameter of the function ($\rightarrow$ numpy package!). Compute the function values for the two dimensional points (a) (x,y)=(-0.5,0.5), (b) (x,y)=(0.0,0.5), (c) for the global optimum in two dimensions, (d) for a vector containing just ones of length 5 and, finally, (e) a vector of length 10 with the numbers one to ten. Print the results to the screen. Note: Use the numpy package for obtaining the numerical value for $\pi$ ([$\rightarrow\texttt{np.pi}$](https://docs.scipy.org/doc/numpy/reference/constants.htmlnumpy.pi)) and for the cosine function ([$\rightarrow\texttt{np.cos}$](https://docs.scipy.org/doc/numpy/reference/generated/numpy.cos.html)).
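As a quick numerical sanity check of the gradient expression above (an aside that is independent of the exercises; numpy is assumed to be imported as np, as in the first code cell), one can compare the analytic derivative of the one-dimensional Rastrigin function with a central finite difference and evaluate it at a half-integer point:
```
# 1-D Rastrigin (D=1, a=10) and its analytic gradient
a = 10
f = lambda x: a + x**2 - a * np.cos(2 * np.pi * x)
grad = lambda x: 2 * x + 2 * np.pi * a * np.sin(2 * np.pi * x)

x = np.linspace(-2, 2, 9)                    # includes the half-integer points
h = 1e-6
fd = (f(x + h) - f(x - h)) / (2 * h)         # central finite difference
print(np.allclose(grad(x), fd, atol=1e-3))   # True: the analytic gradient matches
print(grad(0.5))                             # ~1, so x = 0.5 is only approximately a critical point
```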
###Code
def rastrigin(xvector, a=10):
D = len(xvector)
value = D * a
for x in xvector:
value += x ** 2 - a * np.cos(2 * np.pi * x)
return value
points = [[-0.5, 0.5],
[0.0, 0.5],
np.ones(5),
np.arange(10)+1]
for point in points:
string = ', '.join(f'{i: 2.2f}' for i in point)
print(f'Point ({string}) => {rastrigin(point)}')
###Output
Point (-0.50, 0.50) => 40.5
Point ( 0.00, 0.50) => 20.25
Point ( 1.00, 1.00, 1.00, 1.00, 1.00) => 5.0
Point ( 1.00, 2.00, 3.00, 4.00, 5.00, 6.00, 7.00, 8.00, 9.00, 10.00) => 385.0
###Markdown
Exercise 2. Computation of 2d-function values of the Rastrigin functionCompute the Rastrigin function for different x-y combinations at once on a two dimensional mesh. Use [$\texttt{np.linspace}$](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html) to produce a vector between -2 and +2 with 100 samples. Use [$\texttt{np.meshgrid}$](https://docs.scipy.org/doc/numpy/reference/generated/numpy.meshgrid.html) to produce a two dimensional mesh $X$ and $Y$. Compute the two-dimensional function values $Z$ with your implementation of the rastrigin function (Hint: Give the function values as list to the rastrigin function $\texttt{Z = rastrigin([X,Y])}$). Compute the minimum of $Z$ and the $x$ and $y$ coordinates belonging to this minimum and print the minimum and the coordinate values to the screen. Why is the computed minimum different to the minimum of the analytical computation? What can you do to retrieve the real global optimum?
###Code
x = np.linspace(-2, 2, 100)
X, Y = np.meshgrid(x, x)
Z = rastrigin([X,Y])
f_min = np.min(Z)
index = np.unravel_index(np.argmin(Z), Z.shape)
coords = [x[i] for i in index]
print(f'Computed minimum min(Z) = {f_min:.2f} at point ({coords[0]:.3f}, {coords[1]:.3f})')
###Output
Computed minimum min(Z) = 0.16 at point (-0.020, -0.020)
###Markdown
Exercise 3. Plotting of 2d-Rastrigin functionPlot the two-dimensional Rastrigin function with matplotlib's [$\texttt{contourf}$](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.contourf.html) function. Add axis labels ($x$ and $y$) and a [colorbar](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.colorbar.html) with a colorbar label ($f(x,y)$). Compute the minimum at $x_0=0$ and $y_0=0$, add a title containing the minimum value and [annotate](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.annotate.html) the minimum in the contourplot with an arrow and a text with '$f(x_0, y_0) = $' and the computed minimum value ([$\rightarrow$Hint](https://matplotlib.org/users/annotations.html)).
###Code
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
plt.contourf(X, Y, Z)
cb = plt.colorbar()
plt.xlabel('x')
plt.ylabel('y')
cb.set_label(r'$f(x, y)$')
# Get minimum value in Rastrigin function and plot it
x0 = 0.0
y0 = 0.0
z0 = rastrigin([x0, y0])
plt.plot(0, 0, marker='x', color='k')
# plt.text(0, 0, f'$f(x_0, y_0) = {z0}$')
plt.title(f'$x_0 = 0.0, y_0=0.0 \\rightarrow f(x_0, y_0) = {z0}$')
plt.annotate(f'$f(x_0, y_0) = {z0}$',
xy=[x0, y0],
arrowprops=dict(color='r', shrink=0.05),
horizontalalignment='left',
verticalalignment='bottom',
xytext=(0.5, 0.5));
###Output
_____no_output_____
###Markdown
Exercise 4. Find the minimum by coincidence and show the convergenceProduce 100000 uniformly distributed two-dimensional random points between -2 and +2 ([$\rightarrow \texttt{np.random.uniform}$](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.uniform.html)) and compute the Rastrigin function values. Show the convergence to the minimum, in other words, how the minimum value of the function gets smaller and smaller in dependency of the number of random numbers (Hint: use [$\texttt{np.minimum.accumulate}$](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.accumulate.html)). Plot the result as a [loglog](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.loglog.html) plot and print the found minimum value to the screen. Don't forget to add axis labels and a title.
###Code
R = np.random.uniform(low=-2.0, high=2.0, size=(2, 100000))
R_x = R[0, :]
R_y = R[1, :]
values = np.minimum.accumulate(rastrigin([R_x, R_y]))
fig, ax = plt.subplots(1, 1)
plt.loglog(values)
plt.title('Convergence plot')
plt.xlabel('# function evaluations')
plt.ylabel(r'$f(x,y)$')
plt.grid(True)
print('Monte Carlo minimum is = ', values[-1])
###Output
Monte Carlo minimum is = 0.01169304409703642
###Markdown
Genetic Algorithms[Genetic Algorithms (GA)](https://en.wikipedia.org/wiki/Genetic_algorithm) are widely used to obtain optimal solutions to countless problems in the water related field. GAs — first proposed by [John H. Holland](https://en.wikipedia.org/wiki/John_Henry_Holland) in 1975 — mimic the principles of evolution to solve optimization problems. GAs are population based. Hence, not just a single solution evolves over time, but rather they utilize the collective learning process of a population consisting of many single solutions. Each solution —called individual— consists of parameters —called genes— which represents a single search point in the parameter space. Descendants of individuals are produced by random either (i) through reproduction by exchanging genes with other individuals or (ii) by mutation, introducing randomly small changes in genes mimicking germ line mutation effects. Subsequently, the fitness of each individual is determined and fitter individuals are more likely to survive and reproduce, thus, giving their good genes to potentially more descendants and hence increasing the fitness of the whole population over time.We will learn step by step in the following exercises how to implement a GA and show that the theory of evolution can be used to find minima in functions. First, we will start by implementing the individual class. Second, we will initialize a whole population of individuals over the whole search space. Then we will learn how to combine the genes of multiple individuals to produce children, how to mutate the genes of the children to obtain new solutions, how to select the best solutions and how to put everything together resulting in a working implementation of a Genetic Algorithm. Exercise 5. Construct an Individual class objectImplement a Python class Individual which has two properties (1) a genome and (2) a fitness value. The genome is a vector of real numbers (genes), the fitness value is the function value obtained by the Rastrigin function with the genome as its input. The $\texttt{__init__}$ method should take as arguments the genome, the fitness value should be [NaN](https://en.wikipedia.org/wiki/NaN) (not a number) by default ([$\texttt{np.nan}$](https://docs.scipy.org/doc/numpy-1.13.0/user/misc.html)). Add an evaluate method to the class that computes the Rastrigin function only if the fitness is NaN. Override the $\texttt{__repr__}$ method to have a nice representation of the individual's genes and fitness when the print function is called. Use the points from exercise 1 to check if your implementation and the evaluate method is working (Hints on how to construct classes and override methods can be found in [Notebook 12](https://nbviewer.jupyter.org/github/mbakker7/exploratory_computing_with_python/blob/master/notebook12_oop/py_exploratory_comp_12_sol.ipynb)).
###Code
class Individual(object):
def __init__(self, genome=None):
self.genome = genome
self.fitness = np.nan
def evaluate(self):
if np.isnan(self.fitness):
self.fitness = rastrigin(self.genome)
def __repr__(self):
string = f'Individual with Genome: ['
string += ', '.join(f'{i: 2.2f}' for i in self.genome)
string += f'] => f(x): {self.fitness:=6.3f}'
return string
points = [[-0.5, 0.5],
[0.0, 0.5],
np.ones(5),
np.arange(10)+1]
for genome in points:
ind = Individual(genome=genome)
ind.evaluate()
print(ind)
###Output
Individual with Genome: [-0.50, 0.50] => f(x): 40.500
Individual with Genome: [ 0.00, 0.50] => f(x): 20.250
Individual with Genome: [ 1.00, 1.00, 1.00, 1.00, 1.00] => f(x): 5.000
Individual with Genome: [ 1.00, 2.00, 3.00, 4.00, 5.00, 6.00, 7.00, 8.00, 9.00, 10.00] => f(x): 385.000
###Markdown
InitializationAn initialization procedure generates a population of individuals with hundreds or thousands of possible solutions. The number of individuals in a population is called the population size ($popsize$). Usually, the single individuals are produced randomly over the whole parameter space. Occasionally, the solutions may be "seeded" in areas where the optimal solution is supposed to be found (a minimal sketch of seeding is shown below). Exercise 6. Initialize populationWrite an initialization function ($\texttt{initialize}$) that returns a population of $popsize$ individuals in a two dimensional ($D=2$) parameter space within $x_i \in [-2, 2]$. The population should be a $\texttt{list}$ of $\texttt{Individual}$ instances. Use $popsize=10$. Subsequently, evaluate each individual in the population and print the whole population on the screen by looping over all Individuals.
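The exercises in this notebook initialize the population purely at random, but the "seeding" mentioned above is easy to sketch: a few individuals are generated inside a small box around a suspected optimum while the rest still cover the whole search space. The snippet below is only a minimal illustration (the guess of (0.3, -0.2) is a made-up example, and numpy is assumed to be imported as np):
```
popsize = 10
guess = np.array([0.3, -0.2])                  # hypothetical prior guess of the optimum
seeded = [guess + np.random.uniform(-0.1, 0.1, size=2) for _ in range(3)]
random_part = [np.random.uniform(-2.0, 2.0, size=2) for _ in range(popsize - 3)]
genomes = seeded + random_part                 # genomes for the initial population
print(len(genomes))
```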
###Code
def initialize(popsize, dimensions=2, low=-2.0, high=+2.0):
population = []
for _ in range(popsize):
genes = np.random.uniform(low=low, high=high, size=dimensions)  # use the dimensions argument instead of a hard-coded 2
ind = Individual(genome=genes)
population.append(ind)
return population
popsize = 10
population = initialize(popsize)
print('The Population consists of ...')
for ind in population:
ind.evaluate()
print('->\t an', ind)
###Output
The Population consists of ...
-> an Individual with Genome: [ 0.61, 0.65] => f(x): 34.175
-> an Individual with Genome: [-1.67, 0.90] => f(x): 20.561
-> an Individual with Genome: [-1.15, -0.01] => f(x): 5.416
-> an Individual with Genome: [-0.54, 1.21] => f(x): 28.956
-> an Individual with Genome: [ 1.12, -0.85] => f(x): 8.900
-> an Individual with Genome: [-1.60, -0.86] => f(x): 24.766
-> an Individual with Genome: [ 1.51, -0.48] => f(x): 42.381
-> an Individual with Genome: [-1.67, -1.89] => f(x): 23.296
-> an Individual with Genome: [ 0.26, 1.14] => f(x): 15.225
-> an Individual with Genome: [-1.46, 1.24] => f(x): 32.917
###Markdown
Exercise 7. Plot the population in fitness landscapePlot the fitness landscape, which is the 2d Rastrigin function from Exercise 3, as a $\texttt{contourf}$ plot. Then plot the individuals of the population of exercise 6 in the fitness landscape as black points. Highlight the fittest individual (the individual with the smallest fitness value) in the fitness landscape by surrounding it with a red circle and print the fittest individual to the screen.
###Code
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
plt.contourf(X, Y, Z)
cb = plt.colorbar()
plt.xlabel('x')
plt.ylabel('y')
cb.set_label(r'$f(x, y)$')
fitness = np.inf
for ind in population:
plt.plot(*ind.genome, 'ko')
if ind.fitness < fitness:
fitness = ind.fitness
x, y = ind.genome
plt.plot(x, y, 'o', markerfacecolor='None', markeredgecolor='r', markersize=12);
print(f'Minimum of population at ({x:.2f},{y:.2f}) with f(x)={fitness}')
###Output
Minimum of population at (-1.15,-0.01) with f(x)=5.415906949314694
###Markdown
Recombination - the crossover operatorThe first operator in the evolution is the [recombination operator](https://en.wikipedia.org/wiki/Crossover_(genetic_algorithm)). Recombination is responsible for large changes in the solution vectors. The operator produces $k$ new solutions by combining genes of different individuals of the population. In addition to the population, the operator also depends on an additional set of parameters, controlling the reproduction of the individuals such as the probability $p_r$ that the genes of two individuals are recombined. GAs have in common that they favor recombination over mutation, hence, $p_r$ is chosen to be high (e.g. $p_r = 0.8$). Exercise 8. Implement crossover for two individualsSingle point crossover represents a possibility to combine the genes of two individuals, the two parents. A point on both parents' genes is picked randomly (use [$\texttt{randint}$](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.randint.html)), and designated a 'crossover point'. Genes to the right of that point are swapped between the two parent chromosomes. This results in two children, each carrying some genetic information from both parents. Implement a Python function called $\texttt{crossover}$ that takes as input two $\texttt{Individuals}$ (the parents) that performs a single point crossover between two Individuals and returns two new $\texttt{Individuals}$ (the children). Test the function on two individuals with 5 genes, one having only zeros (parent 1) and one having only ones (parent 2) in their genes. Apply crossover on the parents to produce the children, evaluate the children and print the parents and children to the screen to check, if the crossover operator is working the right way. In this example, the fitness of the children should be between the fitness values of the parents.
###Code
def crossover(parent1, parent2):
cr_point = np.random.randint(1, len(parent1.genome))
genome1 = np.concatenate([parent1.genome[:cr_point], parent2.genome[cr_point:]])
genome2 = np.concatenate([parent2.genome[:cr_point], parent1.genome[cr_point:]])
child1 = Individual(genome=genome1)
child2 = Individual(genome=genome2)
return child1, child2
p1 = Individual(genome=np.zeros(5))
p2 = Individual(genome=np.ones(5))
c1, c2 = crossover(p1, p2)
for i in [p1, p2, c1, c2]:
i.evaluate()
print('\nParents')
print(p1)
print(p2)
print('\nChildren')
print(c1)
print(c2)
###Output
Parents
Individual with Genome: [ 0.00, 0.00, 0.00, 0.00, 0.00] => f(x): 0.000
Individual with Genome: [ 1.00, 1.00, 1.00, 1.00, 1.00] => f(x): 5.000
Children
Individual with Genome: [ 0.00, 0.00, 1.00, 1.00, 1.00] => f(x): 3.000
Individual with Genome: [ 1.00, 1.00, 0.00, 0.00, 0.00] => f(x): 2.000
###Markdown
Exercise 9. Recombination of the whole population - let the mating season beginWrite a Python function called $\texttt{recombination}$ which takes as input parameters a population and the crossover probability $p_c$. Build random pairs within the population (all Individuals will find a partner) and iterate over these pairs. Decide with a uniform random number $u \in [0, 1]$ if the parents will crossover their genes to produce new children ($u \leq p_c$). If $u > p_c$, then two new children are produced with the same genes as the parents. Return all new children as a Python list. Test the $\texttt{recombination}$ function on the population of exercise 6 with a crossover probability of $p_c=0.8$ . Evaluate the children and print them to the screen.
###Code
def recombination(population, CXPB):
popsize = len(population)
combinations = np.arange(0, popsize)
np.random.shuffle(combinations)
combinations = combinations.reshape((int(popsize/2), 2))
children = []
for i, j in combinations:
parent1 = population[i]
parent2 = population[j]
if np.random.rand() < CXPB:
child1, child2 = crossover(parent1, parent2)
else:
child1 = Individual(genome=parent1.genome)
child2 = Individual(genome=parent2.genome)
children.append(child1)
children.append(child2)
return children
children = recombination(population, 0.8)
for child in children:
child.evaluate()
children
###Output
_____no_output_____
###Markdown
Mutation operatorThe second operator in the GA is the mutation operator. This operator produces small changes to the genes of an individual, hence, broadening the genetic variability of a population. GAs have in common that the mutation operator is applied with a low probability $p_m$ so that mutation works more as a "background operator" (usually $p_m \leq 0.2$).A gene in the genome of the children is taken at random and altered by a random procedure with a certain probability. Exercise 10. Implement and apply mutation on the childrenImplement the mutation operator by writing a function $\texttt{mutation}$. The input parameters of the function are the children obtained through the recombination function, the mutation probability $p_m$ and a standard deviation $\sigma$. Iterate over all children and randomly decide, with the help of $p_m$, if the genome of the child should be mutated. If the child is mutated, choose a gene of the genome at random and add a Gaussian distributed random number with mean $\mu=0.0$ and standard deviation $\sigma=0.25$ to the gene. The function should return the children. Apply the function on the children that were produced in exercise 9 to test it.
###Code
def mutation(children, MUTPB, sigma=0.25):
for child in children:
if np.random.rand() < MUTPB:
n = len(child.genome)
mut_int = np.random.randint(0, n)
child.genome[mut_int] += np.random.normal(0.0, scale=sigma)
return children
children = mutation(children, 0.2)
children
###Output
_____no_output_____
###Markdown
Selection operatorThe last operator in the GA is the selection operator. This operator chooses $popsize$ individuals, based on their fitness, from the population of parents and children that has been altered through mutation and recombination. Fitter individuals are chosen with higher probability. Roulette wheel selectionRoulette wheel selection is also called [fitness proportionate selection](https://en.wikipedia.org/wiki/Fitness_proportionate_selection). Individuals with a higher fitness (lower fitness value) are chosen with higher probability. The probability to choose an individual is \begin{align} p_i \ = \ \frac{max(f) - f_i}{\sum_{j=1}^{N} (max(f) - f_j)}\end{align}max(f) is the maximum of all fitness values in the population and is necessary to scale the fitnesses for minimisation. Additionally, to make sure that the best individual of a population –the one with the lowest fitness value– is not lost during the evolution, one can make sure to always choose the best individual as part of the selected population. This is called elitism. Exercise 11. The selection operator.Write a Python function $\texttt{selection}$ that takes a population and the number of chosen individuals of this population as input parameters. Implement the roulette wheel selection algorithm with elitism as described above. Remove Individuals that have been chosen from the population so that they can't be chosen multiple times ([$\texttt{pop}$](https://docs.python.org/3.7/tutorial/datastructures.html) removes an element from a list by index). Don't forget to recompute the probabilities each time you remove an individual! The function should return individuals that are chosen from the population according to the probabilities built from their fitness values. (Hint: Use the cumulative sum of the probabilities and a random number between 0 and 1 to decide which individual of the population is chosen. With numpy's [argmin](https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmin.html) function one can find the best individual in the fitnesses).
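Before implementing the full function, a tiny numerical illustration of the selection probabilities defined above may help (a toy example with made-up fitness values, assuming numpy as np); note that under this scaling the worst individual of the population receives probability zero:
```
fitness = np.array([1.0, 2.0, 3.0, 4.0])   # lower value = fitter individual
scaled = fitness.max() - fitness           # [3., 2., 1., 0.]
p = scaled / scaled.sum()                  # [0.5, 0.333..., 0.166..., 0.]
print(p, p.sum())

# selecting one index with the cumulative-sum trick from the hint
# (the exercise solution uses an equivalent comparison against the cumulative sum)
r = np.random.random()
index = np.searchsorted(np.cumsum(p), r)
print(index)                               # 0, 1 or 2; the worst individual is never drawn
```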
###Code
def selection(population, number=20, elitism=True):
# Produce vector containing all fitnesses:
fitness = []
for ind in population:
ind.evaluate()
fitness.append(ind.fitness)
if elitism:
# Find fittest individual
index = np.argmin(fitness)
best = population[index]
chosen = [best]
# Remove best individual from population and fitness
population.pop(index)
fitness.pop(index)
# Reduce the number of the chosen individuals by one, since one individual has already been chosen
number -= 1
else:
chosen = []
for n in range(number):
# Compute the scaled fitness values and the probabilities
scaled = np.max(fitness) - fitness
probabilities = np.cumsum(scaled / np.sum(scaled))
r = np.random.random()
index = (r >= probabilities).sum()
chosen.append(population[index])
# Remove best individual from population and fitness
population.pop(index)
fitness.pop(index)
return chosen
selection(population + children, number=4)
###Output
_____no_output_____
###Markdown
Exercise 12. A simple genetic algorithm - Putting it all togetherWrite a Python script where you utilize all operators that you have developed in the exercises before. First, initialize a population of individuals with $D=2$ between $x_i \in [-2, 2]$ with a population size of 100. Then make a for loop over 500 generations. In each generation, use the recombination function to produce children with a crossover probability of $p_c=0.8$. Mutate the children with a mutation probability of $p_m=0.2$ and $\sigma=0.25$. Use the roulette wheel selection function to select $popsize$ individuals from the parent population and children combined. Make sure that the fittest individual is always chosen (elitism). If the fitness of the fittest individual is lower than 0.0001, terminate the evolution by using the [$\texttt{break}$](https://docs.python.org/3.7/tutorial/controlflow.htmlbreak-and-continue-statements-and-else-clauses-on-loops) command. Track the fitness of the fittest individual in each generation and plot this value as a function of the generations in a semi-logarithmic plot ([$\texttt{semilogy}$](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.semilogy.html)). Don't forget axis labels and titles!Print the overall fittest individual to the screen.
###Code
popsize = 100
ngen = 500
CXPB = 0.8
MUTPB = 0.2
# Initialisation
population = []
for _ in range(popsize):
ind = Individual(genome=np.random.uniform(low=-2.0, high=2.0, size=2))
population.append(ind)
f_min = np.full((ngen, 1), np.nan)
for gen in range(ngen):
children = recombination(population, CXPB)
children = mutation(children, MUTPB)
population = selection(population + children, number=popsize, elitism=True)
# Extract fitness values for statistics
fits = [ind.fitness for ind in population]
genomes = np.asarray([ind.genome for ind in population])
f_min[gen] = np.min(fits)
if f_min[gen] < 0.0001:
break
# Plot fittest individual:
fitness = [individual.fitness for individual in population]
index = np.argmin(fitness)
print(population[index])
plt.semilogy(f_min)
plt.xlabel('generations')
plt.ylabel('min(f(x))')
plt.title('Convergence of the GA')
plt.grid(True)
###Output
_____no_output_____
###Markdown
Exercise 13. Escape local optima with GAs. Use the code from above, but alter it a little bit. Initialize the population in the upper right corner with $x_i \in [0.5, 2.0]$. Increase the number of generations to 2000. Additionally, track the mean $x$ and $y$ coordinate positions over all individuals of the population during the evolution. This is called the trajectory. Run the optimisation. Plot the contour plot of the Rastrigin function as in exercise 3. Highlight the area where the solutions were initially generated in the contour plot with two black dashed lines. Plot the trajectory of the evolution as a black line. Can you see that the population jumps from local optimum to local optimum until it finds the global one? This is an advantage of [Metaheuristic algorithms](https://en.wikipedia.org/wiki/Metaheuristic). They are able to escape local optima in contrast to deterministic methods.
###Code
popsize = 100
ngen = 2000
CXPB = 0.8
MUTPB = 0.2
# Initialisation
population = []
for _ in range(popsize):
ind = Individual(genome=np.random.uniform(low=0.5, high=2.0, size=2))
population.append(ind)
xy_mean = np.full((ngen, 2), np.nan)#(np.empty((ngen, 2))).fill(np.nan)
f_min = np.full((ngen, 1), np.nan)
f_std = np.full((ngen, 1), np.nan)
for gen in range(ngen):
children = recombination(population, CXPB)
children = mutation(children, MUTPB)
population = selection(population + children, number=popsize, elitism=True)
# Extract fitness values for statistics
fits = [ind.fitness for ind in population]
genomes = np.asarray([ind.genome for ind in population])
xy_mean[gen, :] = np.mean(genomes, axis=0)
f_min[gen] = np.min(fits)
f_std[gen] = np.std(fits)
if f_min[gen] < 0.0001:
break
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
plt.contourf(X, Y, Z)
cb = plt.colorbar()
plt.xlabel('x')
plt.ylabel('y')
plt.plot([0.5, 2.0], [0.5, 0.5], 'k--')
plt.plot([0.5, 0.5], [0.5, 2.0], 'k--')
cb.set_label(r'$f(x, y)$')
plt.plot(xy_mean[:,0], xy_mean[:,1], 'k-');
###Output
_____no_output_____ |
lessons/ETLPipelines/7_datatypes_exercise/7_datatypes_exercise-solution.ipynb | ###Markdown
Data Types. When reading in a data set, pandas will try to guess the data type of each column, like float, integer, datetime, bool, etc. In Pandas, strings are called "object" dtypes. However, Pandas does not always get this right. That was the issue with the World Bank projects data. Hence, the dtype was specified as a string:```df_projects = pd.read_csv('../data/projects_data.csv', dtype=str)``` Run the code cells below to read in the indicator and projects data. Then run the following code cell to see the dtypes of the indicator data frame.
###Code
# Run this code cell
import pandas as pd
# read in the population data and drop the final column
df_indicator = pd.read_csv('../data/population_data.csv', skiprows=4)
df_indicator.drop(['Unnamed: 62'], axis=1, inplace=True)
# read in the projects data set with all columns type string
df_projects = pd.read_csv('../data/projects_data.csv', dtype=str)
df_projects.drop(['Unnamed: 56'], axis=1, inplace=True)
# Run this code cell
df_indicator.dtypes
###Output
_____no_output_____
###Markdown
These results look reasonable. Country Name, Country Code, Indicator Name and Indicator Code were all read in as strings. The year columns, which contain the population data, were read in as floats. Exercise 1. Since the population indicator data was read in correctly, you can run calculations on the data. In this first exercise, sum the populations of the United States, Canada, and Mexico by year.
###Code
# TODO: Calculate the population sum by year for Canada,
# the United States, and Mexico.
#
keepcol = ['Country Name']
for i in range(1960, 2018, 1):
keepcol.append(str(i))
df_nafta = df_indicator[(df_indicator['Country Name'] == 'Canada') |
(df_indicator['Country Name'] == 'United States') |
(df_indicator['Country Name'] == 'Mexico')].iloc[:,]
df_nafta.sum(axis=0)[keepcol]
df_nafta
###Output
_____no_output_____
###Markdown
Exercise 2. Now, run the code cell below to look at the dtypes for the projects data set. They should all be "object" types, i.e. strings, because that's what was specified in the code when reading in the csv file. As a reminder, this was the code:```df_projects = pd.read_csv('../data/projects_data.csv', dtype=str)```
###Code
# Run this code cell
df_projects.dtypes
###Output
_____no_output_____
###Markdown
Many of these columns should be strings, so there's no problem; however, a few columns should be other data types. For example, `boardapprovaldate` should be a datetime and `totalamt` should be an integer. You'll learn about datetime formatting in the next part of the lesson. For this exercise, focus on the 'totalamt' and 'lendprojectcost' columns. Run the code cell below to see what that data looks like.
###Code
# Run this code cell
df_projects[['totalamt', 'lendprojectcost']].head()
# Run this code cell to take the sum of the total amount column
df_projects['totalamt'].sum()
###Output
_____no_output_____
###Markdown
What just happened? Pandas treated the totalamt values like strings. In Python, adding strings concatenates the strings together. There are a few ways to remedy this. When using pd.read_csv(), you could specify the column type for every column in the data set. The pd.read_csv() dtype option can accept a dictionary mapping each column name to its data type. You could also specify the `thousands` option with `thousands=','`. This specifies that thousands are separated by a comma in this data set. However, this data is somewhat messy, contains missing values, and has a lot of columns. It might be faster to read in the entire data set with string types and then convert individual columns as needed. For this next exercise, convert the `totalamt` column from a string to an integer type.
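For reference (not the exercise solution), the two read-time alternatives mentioned above could be sketched as follows; both use standard `pd.read_csv` options:

```python
# Option 1: declare the data type per column while reading
df_projects = pd.read_csv('../data/projects_data.csv',
                          dtype={'totalamt': str})

# Option 2: let pandas strip the thousands separator and parse numbers directly
df_projects = pd.read_csv('../data/projects_data.csv', thousands=',')
```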
###Code
# TODO: Convert the totalamt column from a string to a float and save the results back into the totalamt column
# Step 1: Remove the commas from the 'totalamt' column
# HINT: https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.Series.str.replace.html
# Step 2: Convert the 'totalamt' column from an object data type (ie string) to an integer data type.
# HINT: https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.to_numeric.html
df_projects['totalamt'] = pd.to_numeric(df_projects['totalamt'].str.replace(',',""))
###Output
_____no_output_____ |
rnn model-Copy1.ipynb | ###Markdown
1. Making an RNN pretrained model
###Code
import os
import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import LSTM
from keras.layers.embeddings import Embedding
from keras.layers import Bidirectional
from keras.preprocessing import sequence
from keras.layers import Dropout
from keras.models import model_from_json
from keras.models import load_model
import os
import random
import numpy as np
import pandas as pd
from tensorflow.python.keras.preprocessing import sequence
from tensorflow.python.keras.preprocessing import text
def load_imdb_sentiment_analysis_dataset(data_path, seed=123):
imdb_data_path = os.path.join(data_path, 'aclImdb')
print(imdb_data_path)
# Load the training data
train_texts = []
train_labels = []
for category in ['pos', 'neg']:
train_path = os.path.join(imdb_data_path, 'train', category)
for fname in sorted(os.listdir(train_path)):
if fname.endswith('.txt'):
with open(os.path.join(train_path, fname)) as f:
train_texts.append(f.read())
train_labels.append(0 if category == 'neg' else 1)
# Load the validation data.
#test_texts = []
#test_labels = []
for category in ['pos', 'neg']:
test_path = os.path.join(imdb_data_path, 'test', category)
for fname in sorted(os.listdir(test_path)):
if fname.endswith('.txt'):
with open(os.path.join(test_path, fname)) as f:
train_texts.append(f.read())
train_labels.append(0 if category == 'neg' else 1)
# Shuffle the training data and labels.
random.seed(seed)
random.shuffle(train_texts)
random.seed(seed)
random.shuffle(train_labels)
return (train_texts, np.array(train_labels) )
def sequence_vectorize(train_texts):
# Vectorization parameters
# Limit on the number of features. We use the top 20K features.
TOP_K = 20000
# Limit on the length of text sequences. Sequences longer than this
# will be truncated.
MAX_SEQUENCE_LENGTH = 500
# Create vocabulary with training texts.
tokenizer = text.Tokenizer(num_words=TOP_K)
tokenizer.fit_on_texts(train_texts)
# Vectorize training and validation texts.
x_train = tokenizer.texts_to_sequences(train_texts)
#x_val = tokenizer.texts_to_sequences(val_texts)
# Get max sequence length.
max_length = len(max(x_train, key=len))
if max_length > MAX_SEQUENCE_LENGTH:
max_length = MAX_SEQUENCE_LENGTH
# Fix sequence length to max value. Sequences shorter than the length are
# padded in the beginning and sequences longer are truncated
# at the beginning.
x_train = sequence.pad_sequences(x_train, maxlen=max_length)
#x_val = sequence.pad_sequences(x_val, maxlen=max_length)
return x_train, tokenizer
#load the data
# get the sequences
#define useful global variables
import pickle
TOP_K = 20000
MAX_SEQUENCE_LENGTH = 500
EMBEDDING_DIM = 100
data_path = '/home/rohit/Documents/Study/Projects/HACKATHON INNOVATE FOR IIT/aclImdb_v1'
(train_texts, train_labels) = load_imdb_sentiment_analysis_dataset(data_path)
print('train_text length ', len(train_texts))
print(train_labels.shape)
#sequences
x_train, tokenizer = sequence_vectorize(train_texts)
with open('tokenizer_on_IMDB.pickle', 'wb') as handle:
pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)
num_words = min(TOP_K, len(tokenizer.word_index) + 1)
#get the glove embeedings
glove_dir = '/home/rohit/Documents/Study/Projects/HACKATHON INNOVATE FOR IIT/'
embeddings_index = {}
with open(os.path.join(glove_dir, 'glove.6B.100d.txt')) as f:
for line in f:
word, coefs = line.split(maxsplit=1)
coefs = np.fromstring(coefs, 'f', sep=' ')
embeddings_index[word] = coefs
print('Found %s word vectors.' % len(embeddings_index))
embedding_matrix = np.zeros((num_words, EMBEDDING_DIM))
for word, i in tokenizer.word_index.items():
if i >= TOP_K:
continue
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
# words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
weight_matrix = embedding_matrix
max_words = MAX_SEQUENCE_LENGTH
def create_model_rnn(weight_matrix, max_words, EMBEDDING_DIM):
# create the model
model = Sequential()
model.add(Embedding(len(weight_matrix), EMBEDDING_DIM, weights=[weight_matrix], input_length=max_words, trainable=False))
model.add(Bidirectional(LSTM(128, dropout=0.4, recurrent_dropout=0.2)))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.50))
model.add(Dense(1, activation='sigmoid'))
return model
# Adam Optimiser
#some hyperparameters
model_rnn = create_model_rnn(weight_matrix = embedding_matrix, max_words= MAX_SEQUENCE_LENGTH, EMBEDDING_DIM = 100)
learning_rate = 1e-3
epochs = 1000
batch_size = 128
blocks = 2
filters = 64
dropout_rate = 0.5
embedding_dim = EMBEDDING_DIM
kernel_size = 3
pool_size = 3
num_classes = 2
num_features = num_words
loss = 'binary_crossentropy'
optimizer = keras.optimizers.Adam(lr=learning_rate)
model_rnn.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
model_rnn.fit(x_train,train_labels, epochs= 10, verbose = 1, batch_size = 256)
model_rnn.save("model_rnn_pretrained.h5")
###Output
_____no_output_____
###Markdown
training the pretrained model
###Code
#data
df = pd.read_csv('./Reviews Data/train.csv', index_col= 0)
#df_test = pd.read_csv('./Reviews Data/data_test1.csv', index_col = 0)
#df.head(10)
df_train_reviews = list(df.text[:9000])
df_train_labels = np.asarray(df.funny[:9000]).reshape(len(df_train_reviews), 1)
df_test_reviews = list(df.text[9000:])
df_test_labels = np.asarray(df.funny[9000:]).reshape(len(df_test_reviews),1)
from keras.models import load_model
model_pretrained = load_model('model_rnn_pretrained.h5')
from tensorflow.python.keras.preprocessing import sequence
from tensorflow.python.keras.preprocessing import text
def sequence_vectorize(train_texts, val_texts):
# Create vocabulary with training texts.
MAX_SEQUENCE_LENGTH = 500
tokenizer = text.Tokenizer(num_words=TOP_K)
tokenizer.fit_on_texts(train_texts)
x_train = tokenizer.texts_to_sequences(train_texts)
x_val = tokenizer.texts_to_sequences(val_texts)
# Get max sequence length.
max_length = len(max(x_train, key=len))
if max_length > MAX_SEQUENCE_LENGTH:
max_length = MAX_SEQUENCE_LENGTH
# Fix sequence length to max value. Sequences shorter than the length are
# padded in the beginning and sequences longer are truncated
# at the beginning.
x_train = sequence.pad_sequences(x_train, maxlen=max_length)
x_val = sequence.pad_sequences(x_val, maxlen=max_length)
return x_train, x_val
# preprocessed ready to train data
x_train,x_test = sequence_vectorize(df_train_reviews, df_test_reviews)
print(model_pretrained.evaluate(x_test, df_test_labels, batch_size = 64))
print(model_pretrained.evaluate(x_train, df_train_labels, batch_size = 128))
model_pretrained.fit(x_train, df_train_labels, batch_size = 128,epochs = 100,shuffle=True)
model_pretrained.save('model_rnn_pretrained_trained2.h5')
model_pretrained.save('model_rnn_pretrained_trained.h5')
model_pretrained.evaluate(x_test, df_test_labels, batch_size = 64)
model_pretrained.evaluate(x_train, df_train_labels, batch_size = 128)
###Output
9000/9000 [==============================] - 21s 2ms/step
|
notebooks/03-model.ipynb | ###Markdown
Load and Process Data
###Code
df = pd.read_csv('data/processed/model_data.csv')
###Output
_____no_output_____
###Markdown
One Hot Encode the season
###Code
features= [
'WINS_score',
'market_size',
'superteam_flg']
target = [
'team_value'
]
season_on_hot = pd.get_dummies(df['Season']).add_prefix('Season_')
X = pd.merge(season_on_hot, df[features], left_index=True, right_index=True)
Y = df['team_value']
X_train, X_test, y_train, y_test = train_test_split(X, Y)
xgb_model = xgb.XGBRegressor()
#brute force scan for all parameters, here are the tricks
#usually max_depth is 6,7,8
#learning rate is around 0.05, but small changes may make big diff
#tuning min_child_weight subsample colsample_bytree can have
#much fun of fighting against overfit
#n_estimators is how many round of boosting
#finally, ensemble xgboost with multiple seeds may reduce variance
parameters = {'nthread':[4], #when use hyperthread, xgboost may become slower
'learning_rate': [.001, 0.05, .01], #so called `eta` value
'max_depth': [2, 5, 10, 20],
'min_child_weight': [11],
'silent': [1],
'subsample': [.2, .5, 0.8],
'colsample_bytree': [.2, .5, 0.8],
'n_estimators': [5, 50, 500], #number of trees, change it to 1000 for better results
'missing':[-999],
'seed': [42]}
clf = GridSearchCV(xgb_model, parameters, n_jobs=6,
cv=5, verbose=2, refit=True)
%%time
clf.fit(X_train, y_train)
#trust your CV!
best_parameters, score, _ = max(clf.grid_scores_, key=lambda x: x[1])
print('Score:', score)
for param_name in sorted(best_parameters.keys()):
print("%s: %r" % (param_name, best_parameters[param_name]))
# test_probs = clf.predict_proba(test[features])[:,1]
# sample = pd.read_csv('../input/sample_submission.csv')
# sample.QuoteConversion_Flag = test_probs
# sample.to_csv("xgboost_best_parameter_submission.csv", index=False)
best_parameters
###Output
_____no_output_____ |
homeworks_basic/Lab3_DL/Lab3_DL_parts_4_and_5_optional.ipynb | ###Markdown
Lab 3: final challenges. __You are invited to solve a signal classification task (you already met it in the second lab assignment) or an image classification task. Or both ;)__ __Completing these tasks is not mandatory, but it will have a positive effect on your final grade. Good luck!__ Part 4. HAR classification with raw data (2+ points). __Disclaimer__: This is an optional part of the assignment. You will have to experiment here, tune the network architecture for the task, and actively look for hints on the web. This assignment is based on this [post](https://burakhimmetoglu.com/2017/08/22/time-series-classification-with-tensorflow/). Using hand-crafted features and classical approaches, the activity recognition task has been solved with 96% accuracy. It is also worth studying [this repository](https://github.com/healthDataScience/deep-learning-HAR) as well as [this one](https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition), where several approaches to this task are considered.
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import math
import pylab
import warnings as w
import os
%matplotlib inline
import matplotlib
matplotlib.rcParams.update({'font.size':14})
###Output
_____no_output_____
###Markdown
Let us return to the activity classification task based on the [data](https://archive.ics.uci.edu/ml/datasets/Human+Activity+Recognition+Using+Smartphones) from the UCI repository ([direct download link](https://archive.ics.uci.edu/ml/machine-learning-databases/00240/UCI%20HAR%20Dataset.zip)). This time we will work with the raw rather than the preprocessed data. The data are signals from a gyroscope and an accelerometer attached to a person's body. Each sample corresponds to 9 related time series. Below, a visualisation of the data based on PCA over the hand-engineered features is shown first. To draw the plots (colour and legend) we also need the class labels.
###Code
X_train_with_engineered_features = np.genfromtxt(os.path.join("UCI HAR Dataset", "train", "X_train.txt"))
y_train = np.genfromtxt(os.path.join("UCI HAR Dataset", "train", "y_train.txt"))
y_train_list = list(y_train)
X_unique = np.array([X_train_with_engineered_features[y_train_list.index(l)]
for l in sorted(list(set(y_train)))])
legend_labels = ["WALKING", "WALKING.UP", "WALKING.DOWN", "SITTING", "STANDING", "LAYING"]
colors_list = ['red', 'blue', 'green', 'orange', 'cyan', 'magenta']
mapped_colors = [colors_list[int(i)-1] for i in y_train]
from sklearn.decomposition import PCA
pca = PCA()
X_train_pca = pca.fit_transform(X_train_with_engineered_features)
plt.figure(figsize=(15,10))
pylab.scatter(X_train_pca[:, 0], X_train_pca[:, 1],
c=mapped_colors)
plt.grid()
for idx, x in enumerate(pca.transform(X_unique)):
plt.scatter(x[0],
x[1],
c=colors_list[idx],
label=legend_labels[idx])
plt.xlabel('First principal component')
plt.ylabel('Second principal component')
plt.legend()
###Output
_____no_output_____
###Markdown
Data preprocessing. The preprocessing has been done for us by the author of [this repository](https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition). Be careful with the paths.
###Code
# Useful Constants
# Those are separate normalised input features for the neural network
INPUT_SIGNAL_TYPES = [
"body_acc_x_",
"body_acc_y_",
"body_acc_z_",
"body_gyro_x_",
"body_gyro_y_",
"body_gyro_z_",
"total_acc_x_",
"total_acc_y_",
"total_acc_z_"
]
# Output classes to learn how to classify
LABELS = [
"WALKING",
"WALKING_UPSTAIRS",
"WALKING_DOWNSTAIRS",
"SITTING",
"STANDING",
"LAYING"
]
DATA_PATH = "./"
DATASET_PATH = DATA_PATH + "UCI HAR Dataset/"
print("\n" + "Dataset is now located at: " + DATASET_PATH)
TRAIN = "train/"
TEST = "test/"
# Load "X" (the neural network's training and testing inputs)
def load_X(X_signals_paths):
X_signals = []
for signal_type_path in X_signals_paths:
file = open(signal_type_path, 'r')
# Read dataset from disk, dealing with text files' syntax
X_signals.append(
[np.array(serie, dtype=np.float32) for serie in [
                row.replace('  ', ' ').strip().split(' ') for row in file
]]
)
file.close()
return np.transpose(np.array(X_signals), (1, 2, 0))
X_train_signals_paths = [
os.path.join(*[DATASET_PATH, TRAIN, "Inertial Signals/", signal+"train.txt"]) for signal in INPUT_SIGNAL_TYPES
]
X_test_signals_paths = [
os.path.join(*[DATASET_PATH, TEST, "Inertial Signals/", signal+"test.txt"]) for signal in INPUT_SIGNAL_TYPES
]
X_train = load_X(X_train_signals_paths)
X_test = load_X(X_test_signals_paths)
# Load "y" (the neural network's training and testing outputs)
def load_y(y_path):
file = open(y_path, 'r')
# Read dataset from disk, dealing with text file's syntax
y_ = np.array(
[elem for elem in [
            row.replace('  ', ' ').strip().split(' ') for row in file
]],
dtype=np.int32
)
file.close()
# Substract 1 to each output class for friendly 0-based indexing
return y_ - 1
y_train_path = os.path.join(DATASET_PATH, TRAIN, "y_train.txt")
y_test_path = os.path.join(DATASET_PATH, TEST, "y_test.txt")
y_train = load_y(y_train_path)
y_test = load_y(y_test_path)
# Input Data
training_data_count = len(X_train) # 7352 training series (with 50% overlap between each serie)
test_data_count = len(X_test) # 2947 testing series
n_steps = len(X_train[0]) # 128 timesteps per series
n_input = len(X_train[0][0]) # 9 input parameters per timestep
# LSTM Neural Network's internal structure
n_hidden = 32 # Hidden layer num of features
n_classes = 6 # Total classes (should go up, or should go down)
# Some debugging info
print("Some useful info to get an insight on dataset's shape and normalisation:")
print("(X shape, y shape, every X's mean, every X's standard deviation)")
print(X_test.shape, y_test.shape, np.mean(X_test), np.std(X_test))
print("The dataset is therefore properly normalised, as expected, but not yet one-hot encoded.")
###Output
_____no_output_____
###Markdown
Building the network and experiments. (100% +) __Your task is to build a network that solves the classification task with an accuracy (`accuracy`) of at least 86%.__ The grading is as follows: * $=$86% - 2 points * $>=$89% - 2.5 points * $>=$91% - 3 points __Warning!__ Several solutions to this task using various frameworks exist on the web. This will be taken into account during grading, so you will have to explain your solution. Please do not blindly copy code; such submissions will be graded with 0 points. If the task does not work out, you can turn to the image classification task instead. After completing the task, write a short report on your experiments of the form "I tried ... approaches and got ... results. Finally, after N+1 cups of coffee / a sleepless night it worked, and the whole secret was in ..."
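Purely as an illustrative starting point (a shape-check sketch, assuming TensorFlow/Keras is available; this is not a solution and is not expected to reach the required accuracy on its own):

```python
import tensorflow as tf

# A small recurrent classifier over the raw (n_steps=128, n_input=9) windows loaded above
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_steps, n_input)),
    tf.keras.layers.LSTM(n_hidden),
    tf.keras.layers.Dense(n_classes, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=...)
```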
###Code
# Your experiments here
###Output
_____no_output_____
###Markdown
Part 5. Dogs classification (2+ points). __Disclaimer__: This is an optional part of the assignment. You will have to experiment here, tune the network architecture for the task, and actively look for hints on the web. We invite you to solve a dog breed classification task. You can train a network from scratch or use fine-tuning. A useful link to [pretrained models](https://pytorch.org/docs/stable/torchvision/models.html). The data can be downloaded [from here](https://www.dropbox.com/s/vgqpz2f1lolxmlv/data.zip?dl=0). The dataset consists of 50 dog breed classes, which can be found in the train folder in the corresponding directories. When submitting this part of the assignment, along with the notebook you must send a .csv file with class predictions for the test set in the format: , one object per line. Below, provide the code of your experiments and a short conclusion on their results. The classification quality (accuracy) on the test set (2 points) and the experiments performed (1 point) will be graded. The grading is as follows: * $>=$93% - 2 points * $>=$84% - 1.5 points * $>=$70% - 0.75 points
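If you go the fine-tuning route, a minimal sketch of the idea (assuming torchvision is installed; the 50-class head matches the dataset description above, everything else is illustrative and not a reference solution):

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone and replace the classification head
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False                    # optionally freeze the backbone
model.fc = nn.Linear(model.fc.in_features, 50)     # 50 dog breeds

# Train only the new head in this sketch
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```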
###Code
# Your experiments here
###Output
_____no_output_____ |
Chapter03/ClassicalNeuralNetwork.ipynb | ###Markdown
To visualize a dense neural network, we'll draw a graph in code.
###Code
dense = nx.Graph()
inputs = {i: (0, i) for i in range(0, 10)}
activations = {i+100: (1, i) for i in range(0, 10)}
outputs= {i+1000: (2, i) for i in range(0, 2)}
all = {**inputs, **activations, **outputs}
#and now -- fully connected
for input in inputs:
for activation in activations:
dense.add_edge(input, activation)
for activation in activations:
for output in outputs:
dense.add_edge(activation, output)
nx.draw_networkx_nodes(dense, all, nodelist=all.keys(), node_color='b')
nx.draw_networkx_edges(dense, all)
plt.axis('off')
pass
import itertools
mnist = nx.Graph()
pixels = {i: (x, y) for i, (x, y) in enumerate(itertools.product(range(0, 28), range(0, 28)))}
activations = {i+1000: (x+30, y) for i, (x, y) in enumerate(itertools.product(range(0, 28), range(0, 28)))}
digits = {i+2000: (70, i) for i in range(0, 10)}
all = {**pixels, **activations, **digits}
for pixel in pixels:
for activation in activations:
mnist.add_edge(pixel, activation)
for activation in activations:
for digit in digits:
mnist.add_edge(activation, digit)
nx.draw_networkx_nodes(mnist, pixels, nodelist=pixels.keys(), node_color='sienna', node_size=8)
nx.draw_networkx_nodes(mnist, activations, nodelist=activations.keys(), node_color='skyblue', node_size=8)
nx.draw_networkx_nodes(mnist, digits, nodelist=digits.keys(), node_color='tan', node_size=8)
nx.draw_networkx_edges(mnist, all, width=0.1, alpha=0.5)
plt.axis('off')
pass
###Output
_____no_output_____
###Markdown
What makes a neural network able to learn interesting patterns is the concept of non-linearity. Very literally this means a function that does not generate a straight line. Very commonly two kinds of non-linearity are used -- the relu, and the sigmoid. These non-linear 'activation functions' will be used in the layers of our neural network.
###Code
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def relu(x):
return np.maximum(x, 0)
x = np.arange(-5., 5., 0.2)
plt.subplot(121)
plt.title('sigmoid')
plt.plot(x, sigmoid(x))
plt.subplot(122)
plt.title('relu')
plt.plot(x, relu(x))
pass
###Output
_____no_output_____
###Markdown
And on output, our output classes work best when we can think of them as probabilities that all add up to 1. This way we can tell the best matching prediction made by our neural network.
###Code
def softmax(x):
return np.exp(x) / np.sum(np.exp(x), axis=0)
sample_outputs = [1, 2, 5]
softmax(sample_outputs)
softmax(sample_outputs).sum()
###Output
_____no_output_____
###Markdown
Now with those concepts in mind, let's actually build a dense neural network to learn to recognize handwritten digits. Starting with the training and testing data.
###Code
import keras
from keras.datasets import mnist
from keras.layers import Input, Dense, Dropout, Flatten
from keras.models import Model
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train / np.max(x_train)
x_test = x_test / np.max(x_test)
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
###Output
_____no_output_____
###Markdown
And now, using the Keras functional API, stack together layers starting with the input, through Dense and Dropout learning layers, and then on to a final softmax output. Dropout is a device to help your network not memorize the input. The larger the number of parameters in your model, the larger the chance your model memorizes the input. Dropout is selective forgetting, turning off parts of the model randomly to avoid memorization. One thing to note here is Flatten. Because our images are two dimensional *x,y* pairs, and our output is one dimension -- a class 0-9 -- Flatten is needed to reduce the dimensions.
###Code
input_layer = Input(shape=x_train[0].shape)
dense_1 = Dense(32, activation='relu')(input_layer)
dropout_1 = Dropout(0.1)(dense_1)
dense_2 = Dense(32, activation='relu')(dropout_1)
dropout_2 = Dropout(0.1)(dense_2)
flat = Flatten()(dropout_2)
output_layer = Dense(10, activation='softmax')(flat)
model = Model(inputs=[input_layer], outputs=[output_layer])
model.summary()
###Output
_____no_output_____
###Markdown
With the model assembled, we compile it, which prepares the model for execution with a solver. The solver is a mathematical search engine; it works through the possible values of our trainable parameters. This seems like a lot of work -- and it is -- but multiple different solver algorithms exist which prevent the solver from looking at every possible number. The trick is the *loss*, which is feedback to the solver as to whether it is getting better or worse. Picking the right loss function to work with your network is important, but can be approached as a cookbook. The loss functions are a bit hard to relate to, so we also have a metric. Accuracy is straightforward, telling you the percentage of the time your model predicted the right answer.
###Code
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
history = model.fit(x_train, y_train,
batch_size=64,
epochs=8,
verbose=1,
validation_data=(x_test, y_test))
###Output
_____no_output_____
###Markdown
Parameters are numbers found by the solver, inside the model. Hyperparameters are numbers or settings we supply to create the model. Building a better model involves iteration and tuning these hyperparameters. Fortunately, we can use another layer of machine learning -- Grid Search -- to work through a set of hyperparameters for us. Grid search uses cross validation, splitting the training data up into folds for training and testing of each hyperparameter combination. After all hyperparameter variants are trained, the original test data is used to validate the final model. Machine learning often involves a lot of human waiting!
###Code
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
from keras.wrappers.scikit_learn import KerasClassifier
from keras.models import Sequential
import time
def dense_model(units, dropout):
model = Sequential()
model.add(Dense(units, activation='relu', input_shape=(28, 28,)))
model.add(Dropout(dropout))
model.add(Dense(units, activation='relu'))
model.add(Dropout(dropout))
model.add(Flatten())
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
return model
hyperparameters = {
'epochs': [1],
'batch_size': [64],
'units': [32, 64, 128],
'dropout': [0.1, 0.2, 0.4]
}
model = KerasClassifier(build_fn=dense_model, verbose=0)
start = time.clock()
grid = GridSearchCV(estimator=model, param_grid=hyperparameters, cv=6, verbose=4)
grid_result = grid.fit(x_train, y_train)
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
#the KerasClassifier comes back with the labels 0-9, so we use argmax
#to turn our one-hot encoding into 0-9 digit labels
y_true, y_pred = np.argmax(y_test, axis=1), grid.predict(x_test)
print()
print(classification_report(y_true, y_pred))
print()
print(time.clock() - start)
###Output
_____no_output_____ |
session2/2_isd_gauge_data.ipynb | ###Markdown
Integrated Surface Database (ISD) The Integrated Surface Database (ISD) consists of global hourly and synoptic observations compiled from numerous sources into a single common ASCII format and common data model. ISD was developed as a joint activity within Asheville's Federal Climate Complex. NCEI, with U.S. Air Force and Navy partners, began the effort in 1998 with the assistance of external funding from several sources. ISD integrates data from over 100 original data sources, including numerous data formats that were key-entered from paper forms during the 1950s-1970s time frame. The database includes over 35,000 stations worldwide, with some having data as far back as 1901, though the data show a substantial increase in volume in the 1940s and again in the early 1970s. Currently, there are over 14,000 "active" stations updated daily in the database. The total uncompressed data volume is around 600 gigabytes; however, it continues to grow as more data are added. ISD includes numerous parameters such as wind speed and direction, wind gust, temperature, dew point, cloud data, sea level pressure, altimeter setting, station pressure, present weather, visibility, precipitation amounts for various time periods, snow depth, and various other elements as observed by each station. First we download the isd-history file that contains a description about the stations, location and last data received:
###Code
!wget "https://www1.ncdc.noaa.gov/pub/data/noaa/isd-history.csv"
###Output
_____no_output_____
###Markdown
Have a look at this file using your command line skills (the `head` command); a one-line sketch is shown next. After that, we use pandas and cartopy to create a plot representing the position of all the available stations.
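For instance (a minimal sketch; the filename matches the `wget` call above):

```
# Peek at the first few rows of the station metadata file
!head -n 5 isd-history.csv
```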
###Code
%matplotlib inline
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
import pandas as pd
def plot_isd_stations(date_str):
df = pd.read_csv("isd-history.csv", parse_dates=[9,10])
df.dropna(subset=['LAT', 'LON'], inplace=True)
df = df[(df['END'] >= date_str)]
plt.figure(figsize=(20,10))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_extent([-180, 180, -90, 90])
ax.coastlines()
lats = df.LAT.values.tolist()
lons = df.LON.values.tolist()
plt.scatter(lons, lats,
color='blue', marker='o', s=4,
transform=ccrs.Geodetic())
plt.title('On the 2019-10-22 {} ISD stations have reported data for {}'.format(len(df.index), date_str))
plt.show()
plot_isd_stations('2020-01-01')
###Output
_____no_output_____
###Markdown
Can you modify the code above to see how stations are updated and how up-to-date the information they contain is, by country? Also, modify the same code to zoom into the Australian region. The following file contains the list of the codes for each country:
###Code
!wget "ftp://ftp.ncdc.noaa.gov/pub/data/noaa/country-list.txt"
###Output
_____no_output_____
###Markdown
Find Australia in that file and filter the previous isd-history table to show the Australian stations containing that code. Use a pandas dataframe and its filtering functionality; a minimal sketch is given below. Then, as an example, for the 083300 station corresponding to Talavera la Real in Spain, we download its 2018 data with the `wget` call that follows the sketch:
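One possible filter (a sketch only; 'AS' is assumed here to be the code for Australia, so check it against the country-list.txt file downloaded above):

```python
import pandas as pd

stations = pd.read_csv("isd-history.csv", parse_dates=[9, 10])
# CTRY holds the country code in isd-history.csv; 'AS' is an assumed value for Australia
aus_stations = stations[stations['CTRY'] == 'AS']
print(len(aus_stations))
aus_stations.head()
```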
###Code
!wget "ftp://ftp.ncdc.noaa.gov/pub/data/noaa/isd-lite/2018/083300-99999-2018.gz"
import numpy as np
from datetime import timedelta, datetime
def get_prec(station_code, date_from):
col_names = ["year", "month", "day", "hour", "air temp", "dew point", "mslp", "wind dir", "wind speed", "sky cov", "1h prec", "6h prec"]
df = pd.read_fwf('{}-2018.gz'.format(station_code), compression='gzip', header=None, names=col_names, parse_dates={'datetime': ['year', 'month', 'day', 'hour']})
df["sky cov"].replace(to_replace={-9999: np.nan}, inplace=True)
df["6h prec"].replace(to_replace={-9999: np.nan}, inplace=True)
df["1h prec"].replace(to_replace={-9999: np.nan}, inplace=True)
print(df.head())
plt.bar(np.arange(len(df.index)), df["1h prec"].values)
plt.show()
return np.nansum(df["1h prec"].values)
get_prec("083300-99999", datetime(2018, 1, 1))
###Output
datetime air temp dew point mslp wind dir wind speed \
0 2018-01-01 00:00:00 81 69 10350 260 15
1 2018-01-01 01:00:00 65 55 10354 140 10
2 2018-01-01 02:00:00 60 55 10346 150 10
3 2018-01-01 03:00:00 51 45 10351 220 5
4 2018-01-01 04:00:00 35 31 10351 50 15
sky cov 1h prec 6h prec
0 NaN NaN NaN
1 NaN NaN NaN
2 NaN NaN NaN
3 NaN NaN NaN
4 NaN NaN NaN
|
ai-platform-xgboost/xgboost-pipeline.ipynb | ###Markdown
============================================================================== \ Copyright 2020 Google LLC. This software is provided as-is, without warranty \ or representation for any use or purpose. Your use of it is subject to your \ agreement with Google. \ ============================================================================== Author: Elvin Zhu, Chanchal Chatterjee \ Email: [email protected] \
###Code
# !python3 -m pip install kfp
import os
import kfp
import yaml
import kfp.components as comp
import kfp.dsl as dsl
from typing import NamedTuple
from kfp.compiler import compiler
def data_preprocess(
bucket_name: str,
input_file: str,
target_column: str,
) -> NamedTuple('PreprocessOutput',
[
('x_train_name', str),
('x_test_name', str),
('y_train_name', str),
('y_test_name', str),
('n_classes', int),
]):
from collections import namedtuple
from sklearn.model_selection import train_test_split
import pandas as pd
import os
import logging
logging.info("Loading {}".format(input_file))
dataset = pd.read_csv(input_file)
# drop unique id column which is not useful for ML
dataset.drop(['LOAN_SEQUENCE_NUMBER'], axis=1, inplace=True)
# Convert categorical columns into one-hot encodings
str_cols = [col for col in dataset.columns if dataset[col].dtype == 'object']
dataset = pd.get_dummies(dataset, columns=str_cols)
n_classes = dataset[target_column].nunique()
logging.info("No. of Classes: {}".format(n_classes))
# Split with a small test size so as to allow our model to train on more data
x_train, x_test, y_train, y_test = train_test_split(
dataset.drop(target_column, axis=1),
dataset[target_column],
test_size=0.1,
random_state=1,
shuffle=True,
stratify=dataset[target_column],
)
logging.info("x_train shape = {}".format(x_train.shape))
logging.info("x_test shape = {}".format(x_test.shape))
logging.info("y_train shape = {}".format(y_train.shape))
logging.info("y_test shape = {}".format(y_test.shape))
base_file_name = os.path.basename(input_file)
base_name, ext_name = os.path.splitext(base_file_name)
x_train_name = "{}_x_train{}".format(base_name, ext_name)
x_test_name = "{}_x_test{}".format(base_name, ext_name)
y_train_name = "{}_y_train{}".format(base_name, ext_name)
y_test_name = "{}_y_test{}".format(base_name, ext_name)
x_train_name = os.path.join("gs://", bucket_name, "data_split_xgb", x_train_name)
x_test_name = os.path.join("gs://", bucket_name, "data_split_xgb", x_test_name)
y_train_name = os.path.join("gs://", bucket_name, "data_split_xgb", y_train_name)
y_test_name = os.path.join("gs://", bucket_name, "data_split_xgb", y_test_name)
x_train.to_csv(x_train_name, index=False)
x_test.to_csv(x_test_name, index=False)
y_train.to_csv(y_train_name, index=False)
y_test.to_csv(y_test_name, index=False)
logging.info("x_train saved to {}".format(x_train_name))
logging.info("x_test saved to {}".format(x_test_name))
logging.info("y_train saved to {}".format(y_train_name))
logging.info("y_test saved to {}".format(y_test_name))
logging.info("finished")
PreprocessOutput = namedtuple('PreprocessOutput',
['x_train_name', 'x_test_name', 'y_train_name', 'y_test_name', 'n_classes'])
return PreprocessOutput(
x_train_name=x_train_name,
x_test_name=x_test_name,
y_train_name=y_train_name,
y_test_name=y_test_name,
n_classes=n_classes,
)
def hypertune(
job_name: str,
bucket_name: str,
job_folder_name: str,
region: str,
train_feature_path: str,
train_label_path: str,
val_feature_path: str,
val_label_path: str,
n_classes: int,
config_yaml: str = None,
) -> NamedTuple('TrainOutput',
[('response', str), ('job_name', str)]):
from collections import namedtuple
import subprocess
import logging
job_dir = 'gs://{}/{}/{}'.format(
bucket_name,
job_folder_name,
job_name,
)
job_name = job_name + "_hpt"
package_path = "/pipelines/component/trainer"
module_name = "trainer.train_hpt"
job_config = "/pipelines/component/config/config_hpt.yaml"
# if user input config yaml, then replace the default
if config_yaml is not None:
with open(job_config, 'w') as fout:
fout.write(config_yaml)
logging.info("JOB_NAME = {} ".format(job_name))
logging.info("JOB_DIR = {} ".format(job_dir))
logging.info("JOB_CONFIG = {} ".format(job_config))
response = subprocess.run([
"gcloud", "ai-platform", "jobs", "submit", "training",
job_name,
"--package-path", package_path,
"--module-name", module_name,
"--python-version", "3.7",
"--runtime-version", "2.2",
"--job-dir", job_dir,
"--region", region,
"--config", job_config,
"--",
"--train_feature_name", train_feature_path,
"--train_label_name", train_label_path,
"--val_feature_name", val_feature_path,
"--val_label_name", val_label_path,
"--no_classes", str(n_classes),
], stdout=subprocess.PIPE)
response = subprocess.run([
"gcloud", "ai-platform", "jobs", "describe", job_name,
], stdout=subprocess.PIPE)
TrainOutput = namedtuple('TrainOutput',['response', 'job_name'])
return TrainOutput(response=response.stdout.decode(), job_name=job_name)
def get_job_status(
response: str,
job_name: str,
time_out: int = 9000, # timeout after 2.5 hours by default
time_sleep: int = 60, # check status every minute by default
) -> NamedTuple('LRO_Output',
[('response', str), ('status', bool)]):
from collections import namedtuple
import subprocess
import time
import yaml
import logging
time0 = time.time()
status = False
while time.time() - time0 < time_out:
response = subprocess.run([
"gcloud", "ai-platform", "jobs", "describe", job_name,
], stdout=subprocess.PIPE)
response = response.stdout.decode()
response_dict = yaml.safe_load(response)
if 'state' in response_dict and response_dict.get('state') == 'SUCCEEDED':
status = True
break
else:
logging.info("Checking status ...")
logging.info(response)
time.sleep(time_sleep)
if not status:
raise TimeoutError("No successful job found. Timeout after {} seconds".format(time_out))
LRO_Output = namedtuple('LRO_Output',['response', 'status'])
return LRO_Output(response=response, status=status)
def get_hyperparameter(
project_id: str,
job_name: str,
status: bool,
) -> NamedTuple('Ghp_Output',
[('booster', str), ('max_depth', int), ('n_estimators', int)]):
from googleapiclient import discovery
import json
import logging
import pandas as pd
from collections import namedtuple
# Define the project id and the job id and format it for the api request
job_id = 'projects/{}/jobs/{}'.format(project_id, job_name)
# Build the service
ml = discovery.build('ml', 'v1', cache_discovery=False)
# Execute the request and pass in the job id
request = ml.projects().jobs().get(name=job_id).execute()
logging.info(json.dumps(request, indent=4))
# Print response
logging.info(json.dumps(request, indent=4))
trials = request['trainingOutput']['trials']
trials = pd.DataFrame(trials)
trials['hyperparameters.booster'] = trials['hyperparameters'].apply(lambda x: x['booster'])
trials['hyperparameters.max_depth'] = trials['hyperparameters'].apply(lambda x: x['max_depth'])
trials['hyperparameters.n_estimators'] = trials['hyperparameters'].apply(lambda x: x['n_estimators'])
trials['finalMetric.trainingStep'] = trials['finalMetric'].apply(lambda x: x['trainingStep'])
trials['finalMetric.objectiveValue'] = trials['finalMetric'].apply(lambda x: x['objectiveValue'])
trials = trials.sort_values(['finalMetric.objectiveValue'], ascending=False)
    # Take the best trial, i.e. the first row after sorting (positional, not label-based)
    booster = trials['hyperparameters'].iloc[0]['booster']
    max_depth = trials['hyperparameters'].iloc[0]['max_depth']
    n_estimators = trials['hyperparameters'].iloc[0]['n_estimators']
Ghp_Output = namedtuple('Ghp_Output',['booster', 'max_depth', 'n_estimators'])
return Ghp_Output(booster=booster, max_depth=max_depth, n_estimators=n_estimators )
def train(
job_name: str,
bucket_name: str,
job_folder_name: str,
region: str,
train_feature_path: str,
train_label_path: str,
n_classes: int,
n_estimators: int,
max_depth: int,
booster: str,
config_yaml: str = None,
) -> NamedTuple('TrainOutput',
[('response', str), ('job_name', str)]):
from collections import namedtuple
import subprocess
import logging
job_dir = 'gs://{}/{}/{}'.format(
bucket_name,
job_folder_name,
job_name,
)
package_path = "/pipelines/component/trainer"
module_name = "trainer.train"
job_config = "/pipelines/component/config/config.yaml"
logging.info("JOB_NAME = {} ".format(job_name))
logging.info("JOB_DIR = {} ".format(job_dir))
logging.info("JOB_CONFIG = {} ".format(job_config))
# if user input config yaml, then replace the default
if config_yaml is not None:
with open(job_config, 'w') as fout:
fout.write(config_yaml)
response = subprocess.run([
"gcloud", "ai-platform", "jobs", "submit", "training",
job_name,
"--job-dir", job_dir,
"--package-path", package_path,
"--module-name", module_name,
"--region", region,
"--python-version", "3.7",
"--runtime-version", "2.2",
"--config", job_config,
"--",
"--train_feature_name", train_feature_path,
"--train_label_name", train_label_path,
"--no_classes", str(n_classes),
"--n_estimators", str(n_estimators),
"--max_depth", str(max_depth),
"--booster", booster,
], stdout=subprocess.PIPE)
response = subprocess.run([
"gcloud", "ai-platform", "jobs", "describe", job_name,
], stdout=subprocess.PIPE)
TrainOutput = namedtuple('TrainOutput',['response', 'job_name'])
return TrainOutput(response=response.stdout.decode(), job_name=job_name)
def deploy(
bucket_name: str,
job_folder_name: str,
job_name: str,
model_name: str,
model_version: str,
region:str,
model_framework:str,
model_description: str,
status: bool,
):
from collections import namedtuple
import subprocess
import logging
import re
latest_model_dir = "gs://{}/{}/{}".format(bucket_name, job_folder_name, job_name)
# Check if model exists:
response = subprocess.run([
"gcloud", "ai-platform", "models", "list",
"--region", "global",
], stdout=subprocess.PIPE)
response = response.stdout.decode().split("\n")[1:]
list_of_models = [re.sub(" +", " ", x).split(" ")[0] for x in response]
# create model if not exists
if model_name not in list_of_models:
# create model
response = subprocess.run([
"gcloud", "ai-platform", "models", "create",
model_name,
"--region", region,
"--enable-logging",
], stdout=subprocess.PIPE)
# create model version
response = subprocess.run([
"gcloud","beta", "ai-platform", "versions", "create",
model_version,
"--model", model_name,
"--origin", latest_model_dir,
"--region", "global",
"--python-version", "3.7",
"--runtime-version", "2.2",
"--framework", model_framework,
"--description", model_description,
], stdout=subprocess.PIPE)
DeployOutput = namedtuple('DeployOutput',['response'])
return DeployOutput(response=response.stdout.decode())
###Output
_____no_output_____
###Markdown
Compile python functions to components
###Code
component_dir = "./components"
base_image = "gcr.io/deeplearning-platform-release/tf2-gpu.2-1"
yaml_name = '{}/preprocess.yaml'.format(component_dir)
preprocess_op = comp.func_to_container_op(
data_preprocess,
output_component_file=yaml_name,
base_image=base_image)
base_image = "gcr.io/img-seg-3d/trainer:v1"
yaml_name = '{}/train_hpt.yaml'.format(component_dir)
hypertune_op = comp.func_to_container_op(
hypertune,
output_component_file=yaml_name,
base_image=base_image)
base_image = "gcr.io/deeplearning-platform-release/tf2-gpu.2-1"
yaml_name = '{}/lro.yaml'.format(component_dir)
lro_op = comp.func_to_container_op(
get_job_status,
output_component_file=yaml_name,
base_image=base_image)
base_image = "gcr.io/deeplearning-platform-release/tf2-gpu.2-1"
yaml_name = '{}/ghp.yaml'.format(component_dir)
ghp_op = comp.func_to_container_op(
get_hyperparameter,
output_component_file=yaml_name,
base_image=base_image)
base_image = "gcr.io/img-seg-3d/trainer:v1"
yaml_name = '{}/train.yaml'.format(component_dir)
train_op = comp.func_to_container_op(
train,
output_component_file=yaml_name,
base_image=base_image)
base_image = "gcr.io/deeplearning-platform-release/tf2-gpu.2-1"
yaml_name = '{}/deploy.yaml'.format(component_dir)
deploy_op = comp.func_to_container_op(
deploy,
output_component_file=yaml_name,
base_image=base_image)
###Output
_____no_output_____
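Note that each `comp.func_to_container_op` call above also writes the component definition to a YAML file under `./components`. Those files can be reloaded in a later session instead of rebuilding the ops from the Python functions; a minimal sketch, assuming the KFP v1 SDK helper `load_component_from_file` and the `preprocess.yaml` path produced above:
```python
import kfp.components as comp

# Reload a previously exported component definition from disk;
# the returned factory can be used in a pipeline just like preprocess_op above.
preprocess_op = comp.load_component_from_file("./components/preprocess.yaml")
```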
###Markdown
Compile KFP pipeline
###Code
@dsl.pipeline(
name='generic prediction pipeline',
description='A pipeline that performs generic seismic image segmentation.'
)
def train_pipeline(
job_name: str,
project_id: str,
region: str,
user_name: str,
bucket_name: str,
input_file: str,
job_folder_name: str,
target_column: str,
deployed_model_name: str,
deployed_model_version: str,
deployed_model_description: str,
config_yaml_hpt: str,
config_yaml: str,
):
preprocess_task = preprocess_op(
bucket_name = bucket_name,
input_file = input_file,
target_column = target_column,
)
hpt_task = hypertune_op(
job_name = job_name,
bucket_name = bucket_name,
job_folder_name = job_folder_name,
region = region,
train_feature_path = preprocess_task.outputs['x_train_name'],
train_label_path = preprocess_task.outputs['y_train_name'],
val_feature_path = preprocess_task.outputs['x_test_name'],
val_label_path = preprocess_task.outputs['y_test_name'],
n_classes = preprocess_task.outputs['n_classes'],
config_yaml = config_yaml_hpt,
)
lro_task = lro_op(
response = hpt_task.outputs['response'],
job_name = hpt_task.outputs['job_name'],
)
ghp_task = ghp_op(
project_id = project_id,
job_name = hpt_task.outputs['job_name'],
status = lro_task.outputs['status'],
)
train_task = train_op(
job_name = job_name,
bucket_name = bucket_name,
job_folder_name = job_folder_name,
region = region,
train_feature_path = preprocess_task.outputs['x_train_name'],
train_label_path = preprocess_task.outputs['y_train_name'],
n_classes = preprocess_task.outputs['n_classes'],
n_estimators = ghp_task.outputs['n_estimators'],
max_depth = ghp_task.outputs['max_depth'],
booster = ghp_task.outputs['booster'],
config_yaml = config_yaml,
)
lro_task_2 = lro_op(
response = train_task.outputs['response'],
job_name = train_task.outputs['job_name'],
)
deploy_task = deploy_op(
status = lro_task_2.outputs['status'],
bucket_name = bucket_name,
job_folder_name = job_folder_name,
job_name = train_task.outputs['job_name'],
region = 'global',
model_framework = 'XGBOOST',
model_name = deployed_model_name,
model_version = deployed_model_version,
model_description = deployed_model_description,
)
pipeline_pkg_path="./train_pipeline.tar.gz"
compiler.Compiler().compile(train_pipeline, package_path=pipeline_pkg_path)
###Output
_____no_output_____
###Markdown
Run KFP pipeline on AI Platform hosted Kubernetes cluster
###Code
# ============== Uncomment to run the pipeline ==============
# from datetime import datetime
# from pytz import timezone
# my_timezone = 'US/Pacific'
# # Coping config file to worker VM
# config_hpt = "./config/config_hpt.yaml"
# with open(config_hpt, 'r') as fin:
# config_yaml_hpt = fin.read()
# config = "./config/config.yaml"
# with open(config, 'r') as fin:
# config_yaml = fin.read()
# # Define pipeline input
# params = {
# "job_name": 'xgb_train_elvinzhu_{}'.format(
# datetime.now(timezone(my_timezone)).strftime("%m%d%y_%H%M")
# ),
# "project_id": 'img-seg-3d',
# "region": 'us-central1',
# "user_name": 'elvinzhu',
# "job_folder_name": 'xgb_train_job',
# "bucket_name": 'tuti_job',
# "input_file": 'gs://tuti_asset/datasets/mortgage_structured.csv',
# "target_column": 'TARGET',
# "deployed_model_name": "kfp_xgb_model",
# "deployed_model_version": "kfp_xgb_bst_v0_2",
# "deployed_model_description": "best_xgb_hpt",
# "config_yaml_hpt": config_yaml_hpt,
# "config_yaml": config_yaml,
# }
# kfp_host_name = '6ff530db99970db2-dot-us-central2.pipelines.googleusercontent.com'
# kfp_exp_name = 'xgboost_ai_platform'
# kfp_run_name = 'demo_xgboost'
# client = kfp.Client(host=kfp_host_name)
# # Create Experiment GROUP
# exp = client.create_experiment(name = kfp_exp_name)
# # Create Experiment RUN
# run = client.run_pipeline(exp.id, kfp_run_name, pipeline_pkg_path, params=params)
###Output
_____no_output_____ |
examples/signature_method.ipynb | ###Markdown
The Signature Method with SktimeThe ‘signature method’ refers to a collection of feature extraction techniques for multimodal sequential data, derived from the theory of controlled differential equations. In recent years, a large number of modifications have been suggested to the signature method so as to improve some aspect of it. In the paper ["A Generalised Signature Method for Time-Series"](https://arxiv.org/abs/2006.00873) [1] the authors collated the vast majority of these modifications into a single document and ran a large hyper-parameter study over the multivariate UEA datasets to build a generic signature algorithm that is expected to work well on a wide range of datasets. We implement the best practice results from this study as the default starting values for our hyperparameters in the `SignatureClassifier` module. The Path SignatureAt the heart of the signature method is the so-called "signature transform".A path $X$ of finite length in $\textit{d}$ dimensions can be described by the mapping $X:[a, b]\rightarrow\mathbb{R}$ $\!\!^d$, or in terms of co-ordinates $X=(X^1_t, X^2_t, ...,X^d_t)$, where each coordinate $X^i_t$ is real-valued and parameterised by $t\in[a,b]$.The **signature transform** $S$ of a path $X$ is defined as an infinite sequence of values:\begin{equation} S(X)_{a, b} = (1, S(X)_{a, b}^1, S(X)_{a, b}^2, ..., S(X)_{a, b}^d, S(X)_{a,b}^{1, 1}, S(X)_{a,b}^{1, 2}, ...), \label{eq:path_signature}\end{equation}where each term is a $k$-fold iterated integral of $X$ with multi-index $i_1,...,i_k$:\begin{equation} S(X)_{a, b}^{i_1,...,i_k} = \int_{a<t_k<b}...\int_{a<t_1<t_2} \mathrm{d}X_{t_1}^{i_1}...\mathrm{d}X_{t_k}^{i_k}. \label{eq:sig_moments}\end{equation}This defines a graded sequence of numbers associated with a path which is known to characterise it up to a generalised form of reparameterisation [2]. One can think of the signature as a collection of summary statistics that determine a path (almost) uniquely. Furthermore, any continuous function on the path $X$ can be approximated arbitrarily well as a linear function on its signature [3]; the signature unravels the non-linearities on functions on the space of unparameterised paths. A VisualisationTo give an idea of what the signature terms represent physically, we consider a patient in an ICU where we are tracking their systolic blood pressure (SBP) and heart rate (HR) changing in time. This can be represented as a path in $\mathbb{R}^3$ (assuming time is included as a channel).The plot above sketches two scenarios of how such a path might look. We are assuming here an implicit time dimension for each plot such that the path is traversed from left to right along the blue line. Depth 1:The signature terms to depth 1 are simply the changes of each of the variables over the interval, in the image this is the $\Delta \text{HR}$ and $\Delta \text{SBP}$ terms. Note that these values are the same in each case. Depth 2: The second level gives us the signed areas (the shaded orange regions), where the orientation of the left most plot is such that the negatively signed area is produced whereas the second gives the positive value, and thus, at order 2 in the signature we now have sufficient information to discriminate between these two situations where in the first rise in heart rate occurs before (or at least, initially faster than) the rise in blood pressure, and vice versa. 
Depth > 2: Depths larger than 2 become more difficult to visualise graphically, however the idea is similar to that of the depth 2 case where we saw that the signature produced information on whether the increase in HR or SBP appeared to be happening first, along with some numerical quantification of how much this was happening. At higher orders the signature is doing something similar, but now with three events, rather than two. The signature picks out structural information regarding the order in which events occur. The Signature in Time-Series AnalysisThe signature is a natural tool to apply in problems related to time-series analysis. As described above it can convert multi-dimensional time-series data into static features that represent information about the sequential nature of the time-series, that can be fed through a standard machine learning model. A simplistic view of how this works is as follows:\begin{equation} \text{Model}(\text{Signature}(\text{Sequential data})) = \text{Predictions}\end{equation} Considered Signature VariationsAgain, following the work in [1] we group the variations on the signature method conceptually into:- **Augmentations** - Transformation of an input sequence or time series into one or more new sequences, so that the signature will return different information about the path.- **Windows** - Windowing operations, so that the signature can act with some locality.- **Transforms** - The choice between the signature or the logsignature transformation.- **Rescalings** - Method of signature rescaling.This is neatly represented in the following graphic, where $\phi$ represents the augmentation, $W^{i, j}$ the windowing operation, $S^N$ the signature, and $\rho_{\text{pre}/\text{post}}$ the rescaling method. Please refer to the full paper for a more comprehensive exploration into what each of these groupings means. The Sktime ModulesWe now give an introduction to the classification and transformation modules included in the sktime interface, along with an example to show how to perform efficient hyperparameter optimisation that was found to give good results in [1].
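To make the depth-1 and depth-2 terms concrete before moving to the sktime modules, here is a small illustrative computation (plain numpy, not part of sktime) of the increment and signed-area terms for a toy two-channel, piecewise-linear path; the sample values are made up purely for illustration:
```python
import numpy as np

# Toy 2-channel path, e.g. (HR, SBP) observed at four time points
path = np.array([[0.0, 0.0],
                 [1.0, 0.2],
                 [1.5, 1.0],
                 [2.0, 1.5]])

# Depth-1 terms: the total increment of each channel over the interval
depth1 = path[-1] - path[0]

# Depth-2 term S^{1,2} = integral of (X^1_t - X^1_a) dX^2_t,
# evaluated segment by segment for a piecewise-linear path
dx = np.diff(path, axis=0)
x1_rel = path[:-1, 0] - path[0, 0]   # channel-1 offset at the start of each segment
s_12 = np.sum((x1_rel + dx[:, 0] / 2) * dx[:, 1])

print("depth-1 terms:", depth1)
print("S^{1,2}:", s_12)
```
Half the difference between S^{1,2} and the corresponding S^{2,1} term is the signed (Lévy) area sketched in the figure above.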
###Code
# Some additional imports we will use
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sktime.datasets import load_gunpoint
# Load an example dataset
train_x, train_y = load_gunpoint(split="train", return_X_y=True)
test_x, test_y = load_gunpoint(split="test", return_X_y=True)
###Output
_____no_output_____
###Markdown
OverviewWe provide the following:- **sktime.transformers.panel.signature_based.SignatureTransformer** - An sklearn transformer that provides the functionality to apply the signature method with some choice of variations as noted above.- **sktime.classification.feature_based.SignatureClassifier** - This provides a simple interface to append a classifier to the SignatureTransformer class.
###Code
from sktime.classification.feature_based import SignatureClassifier
from sktime.transformations.panel.signature_based import SignatureTransformer
###Output
_____no_output_____
###Markdown
Example 1: Sequential Data -> Signature Features.Here we will give a very simple example of converting the sequential 3D GunPoint data of shape [num_batch, series_length, num_features] -> [num_batch, signature_features].
###Code
# First build a very simple signature transform module
signature_transform = SignatureTransformer(
augmentation_list=("addtime",),
window_name="global",
window_depth=None,
window_length=None,
window_step=None,
rescaling=None,
sig_tfm="signature",
depth=3,
)
# Then simply transform the stream data
print("Raw data shape is: {}".format(train_x.shape))
train_signature_x = signature_transform.fit_transform(train_x)
print("Signature shape is: {}".format(train_signature_x.shape))
###Output
Raw data shape is: (50, 1)
Signature shape is: (50, 14)
###Markdown
It then becomes extremely easy to build a time-series classification model. For example:
###Code
# Train
model = RandomForestClassifier()
model.fit(train_signature_x, train_y)
# Evaluate
test_signature_x = signature_transform.transform(test_x)
test_pred = model.predict(test_signature_x)
print("Accuracy: {:.3f}%".format(accuracy_score(test_y, test_pred)))
###Output
Accuracy: 0.747%
###Markdown
Example 2: Fine Tuning the Generalised ModelAs previously mentioned, in [1] the authors performed a large hyperparameter search over the signature variations on the full UEA archive to develop a 'Best Practices' approach to building a model. This required some fine tuning over the following parameters, as they were found to be very dataset specific: - `depth` over [1, 2, 3, 4, 5, 6]- `window_depth` over [2, 3, 4]- `RandomForestClassifier` hyperparameters.Here we show how this is easily done using the sktime framework.
###Code
from sklearn.model_selection import RandomizedSearchCV, StratifiedKFold
# Some params
n_cv_splits = 5
n_gs_iter = 20
# Random forests found to perform very well in general
classifier = RandomForestClassifier()
# The grid to be passed to an sklearn gridsearch
signature_grid = {
# Signature params
"depth": [1, 2, 3, 4, 5],
"window_name": ["dyadic"],
"augmentation_list": [["basepoint", "addtime"]],
"window_depth": [1, 2, 3, 4],
"rescaling": ["post"],
# Classifier and classifier params
"classifier": [classifier],
"classifier__n_estimators": [50, 100, 500],
"classifier__max_depth": [2, 4, 6, 8, 12, 16, 24, 32, 45, 60],
}
# Initialise the estimator
estimator = SignatureClassifier()
# Run a random grid search and return the gs object
cv = StratifiedKFold(n_splits=n_cv_splits)
gs = RandomizedSearchCV(estimator, signature_grid, cv=n_cv_splits, n_iter=n_gs_iter)
gs.fit(train_x, train_y)
# Get the best classifier
best_classifier = gs.best_estimator_
# Evaluate
train_preds = best_classifier.predict(train_x)
test_preds = best_classifier.predict(test_x)
train_score = accuracy_score(train_y, train_preds)
test_score = accuracy_score(test_y, test_preds)
print(
"Train acc: {:.3f}% | Test acc: {:.3f}%".format(
train_score * 100, test_score * 100
)
)
###Output
Train acc: 100.000% | Test acc: 94.667%
###Markdown
The Signature Method with SktimeThe ‘signature method’ refers to a collection of feature extraction techniques for multimodal sequential data, derived from the theory of controlled differential equations. In recent years, a large number of modifications have been suggested to the signature method so as to improve some aspect of it. In the paper ["A Generalised Signature Method for Time-Series"](https://arxiv.org/abs/2006.00873) [1] the authors collated the vast majority of these modifications into a single document and ran a large hyper-parameter study over the multivariate UEA datasets to build a generic signature algorithm that is expected to work well on a wide range of datasets. We implement the best practice results from this study as the default starting values for our hyperparameters in the `SignatureClassifier` module. The Path SignatureAt the heart of the signature method is the so-called "signature transform".A path $X$ of finite length in $\textit{d}$ dimensions can be described by the mapping $X:[a, b]\rightarrow\mathbb{R}$ $\!\!^d$, or in terms of co-ordinates $X=(X^1_t, X^2_t, ...,X^d_t)$, where each coordinate $X^i_t$ is real-valued and parameterised by $t\in[a,b]$.The **signature transform** $S$ of a path $X$ is defined as an infinite sequence of values:\begin{equation} S(X)_{a, b} = (1, S(X)_{a, b}^1, S(X)_{a, b}^2, ..., S(X)_{a, b}^d, S(X)_{a,b}^{1, 1}, S(X)_{a,b}^{1, 2}, ...), \label{eq:path_signature}\end{equation}where each term is a $k$-fold iterated integral of $X$ with multi-index $i_1,...,i_k$:\begin{equation} S(X)_{a, b}^{i_1,...,i_k} = \int_{a<t_k<b}...\int_{a<t_1<t_2} \mathrm{d}X_{t_1}^{i_1}...\mathrm{d}X_{t_k}^{i_k}. \label{eq:sig_moments}\end{equation}This defines a graded sequence of numbers associated with a path which is known to characterise it up to a generalised form of reparameterisation [2]. One can think of the signature as a collection of summary statistics that determine a path (almost) uniquely. Furthermore, any continuous function on the path $X$ can be approximated arbitrarily well as a linear function on its signature [3]; the signature unravels the non-linearities on functions on the space of unparameterised paths. A VisualisationTo give an idea of what the signature terms represent physically, we consider a patient in an ICU where we are tracking their systolic blood pressure (SBP) and heart rate (HR) changing in time. This can be represented as a path in $\mathbb{R}^3$ (assuming time is included as a channel).The plot above sketches two scenarios of how such a path might look. We are assuming here an implicit time dimension for each plot such that the path is traversed from left to right along the blue line. Depth 1:The signature terms to depth 1 are simply the changes of each of the variables over the interval, in the image this is the $\Delta \text{HR}$ and $\Delta \text{SBP}$ terms. Note that these values are the same in each case. Depth 2: The second level gives us the signed areas (the shaded orange regions), where the orientation of the left most plot is such that the negatively signed area is produced whereas the second gives the positive value, and thus, at order 2 in the signature we now have sufficient information to discriminate between these two situations where in the first rise in heart rate occurs before (or at least, initially faster than) the rise in blood pressure, and vice versa. 
Depth > 2: Depths larger than 2 become more difficult to visualise graphically, however the idea is similar to that of the depth 2 case where we saw that the signature produced information on whether the increase in HR or SBP appeared to be happening first, along with some numerical quantification of how much this was happening. At higher orders the signature is doing something similar, but now with three events, rather than two. The signature picks out structural information regarding the order in which events occur. The Signature in Time-Series AnalysisThe signature is a natural tool to apply in problems related to time-series analysis. As described above it can convert multi-dimensional time-series data into static features that represent information about the sequential nature of the time-series, that can be fed through a standard machine learning model. A simplistic view of how this works is as follows:\begin{equation} \text{Model}(\text{Signature}(\text{Sequential data})) = \text{Predictions}\end{equation} Considered Signature VariationsAgain, following the work in [1] we group the variations on the signature method conceptually into:- **Augmentations** - Transformation of an input sequence or time series into one or more new sequences, so that the signature will return different information about the path.- **Windows** - Windowing operations, so that the signature can act with some locality.- **Transforms** - The choice between the signature or the logsignature transformation.- **Rescalings** - Method of signature rescaling.This is neatly represented in the following graphic, where $\phi$ represents the augmentation, $W^{i, j}$ the windowing operation, $S^N$ the signature, and $\rho_{\text{pre}/\text{post}}$ the rescaling method. Please refer to the full paper for a more comprehensive exploration into what each of these groupings means. The Sktime ModulesWe now give an introduction to the classification and transformation modules included in the sktime interface, along with an example to show how to perform efficient hyperparameter optimisation that was found to give good results in [1].
###Code
# Some additional imports we will use
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sktime.datasets import load_gunpoint
# Load an example dataset
train_x, train_y = load_gunpoint(split="train", return_X_y=True)
test_x, test_y = load_gunpoint(split="test", return_X_y=True)
###Output
_____no_output_____
###Markdown
OverviewWe provide the following:- **sktime.transformers.series_as_features.signature_based.SignatureTransformer** - An sklearn transformer that provides the functionality to apply the signature method with some choice of variations as noted above.- **sktime.classification.signature_based.SignatureClassifier** - This provides a simple interface to append a classifier to the SignatureTransformer class.
###Code
from sktime.classification.signature_based import SignatureClassifier
from sktime.transformations.panel.signature_based import SignatureTransformer
###Output
_____no_output_____
###Markdown
Example 1: Sequential Data -> Signature Features.Here we will give a very simple example of converting the sequential 3D GunPoint data of shape [num_batch, series_length, num_features] -> [num_batch, signature_features].
###Code
# First build a very simple signature transform module
signature_transform = SignatureTransformer(
augmentation_list=("addtime",),
window_name="global",
window_depth=None,
window_length=None,
window_step=None,
rescaling=None,
sig_tfm="signature",
depth=3,
)
# Then simply transform the stream data
print("Raw data shape is: {}".format(train_x.shape))
train_signature_x = signature_transform.fit_transform(train_x)
print("Signature shape is: {}".format(train_signature_x.shape))
###Output
Raw data shape is: (50, 1)
Signature shape is: (50, 14)
###Markdown
It then becomes extremely easy to build a time-series classification model. For example:
###Code
# Train
model = RandomForestClassifier()
model.fit(train_signature_x, train_y)
# Evaluate
test_signature_x = signature_transform.transform(test_x)
test_pred = model.predict(test_signature_x)
print("Accuracy: {:.3f}%".format(accuracy_score(test_y, test_pred)))
###Output
Accuracy: 0.753%
###Markdown
Example 2: Fine Tuning the Generalised ModelAs previously mentioned, in [1] the authors performed a large hyperparameter search over the signature variations on the full UEA archive to develop a 'Best Practices' approach to building a model. This required some fine tuning over the following parameters, as they were found to be very dataset specific: - `depth` over [1, 2, 3, 4, 5, 6]- `window_depth` over [2, 3, 4]- `RandomForestClassifier` hyperparameters.Here we show how this is easily done using the sktime framework.
###Code
from sklearn.model_selection import RandomizedSearchCV, StratifiedKFold
# Some params
n_cv_splits = 5
n_gs_iter = 20
# Random forests found to perform very well in general
classifier = RandomForestClassifier()
# The grid to be passed to an sklearn gridsearch
signature_grid = {
# Signature params
"depth": [1, 2, 3, 4, 5],
"window_name": ["dyadic"],
"augmentation_list": [["basepoint", "addtime"]],
"window_depth": [1, 2, 3, 4],
"rescaling": ["post"],
# Classifier and classifier params
"classifier": [classifier],
"classifier__n_estimators": [50, 100, 500],
"classifier__max_depth": [2, 4, 6, 8, 12, 16, 24, 32, 45, 60],
}
# Initialise the estimator
estimator = SignatureClassifier()
# Run a random grid search and return the gs object
cv = StratifiedKFold(n_splits=n_cv_splits)
gs = RandomizedSearchCV(estimator, signature_grid, cv=n_cv_splits, n_iter=n_gs_iter)
gs.fit(train_x, train_y)
# Get the best classifier
best_classifier = gs.best_estimator_
# Evaluate
train_preds = best_classifier.predict(train_x)
test_preds = best_classifier.predict(test_x)
train_score = accuracy_score(train_y, train_preds)
test_score = accuracy_score(test_y, test_preds)
print(
"Train acc: {:.3f}% | Test acc: {:.3f}%".format(
train_score * 100, test_score * 100
)
)
###Output
Train acc: 100.000% | Test acc: 97.333%
###Markdown
The Signature Method with SktimeThe ‘signature method’ refers to a collection of feature extraction techniques for multimodal sequential data, derived from the theory of controlled differential equations. In recent years, a large number of modifications have been suggested to the signature method so as to improve some aspect of it. In the paper ["A Generalised Signature Method for Time-Series"](https://arxiv.org/abs/2006.00873) [1] the authors collated the vast majority of these modifications into a single document and ran a large hyper-parameter study over the multivariate UEA datasets to build a generic signature algorithm that is expected to work well on a wide range of datasets. We implement the best practice results from this study as the default starting values for our hyperparameters in the `SignatureClassifier` module. The Path SignatureAt the heart of the signature method is the so-called "signature transform".A path $X$ of finite length in $\textit{d}$ dimensions can be described by the mapping $X:[a, b]\rightarrow\mathbb{R}$ $\!\!^d$, or in terms of co-ordinates $X=(X^1_t, X^2_t, ...,X^d_t)$, where each coordinate $X^i_t$ is real-valued and parameterised by $t\in[a,b]$.The **signature transform** $S$ of a path $X$ is defined as an infinite sequence of values:\begin{equation} S(X)_{a, b} = (1, S(X)_{a, b}^1, S(X)_{a, b}^2, ..., S(X)_{a, b}^d, S(X)_{a,b}^{1, 1}, S(X)_{a,b}^{1, 2}, ...), \label{eq:path_signature}\end{equation}where each term is a $k$-fold iterated integral of $X$ with multi-index $i_1,...,i_k$:\begin{equation} S(X)_{a, b}^{i_1,...,i_k} = \int_{a<t_k<b}...\int_{a<t_1<t_2} \mathrm{d}X_{t_1}^{i_1}...\mathrm{d}X_{t_k}^{i_k}. \label{eq:sig_moments}\end{equation}This defines a graded sequence of numbers associated with a path which is known to characterise it up to a generalised form of reparameterisation [2]. One can think of the signature as a collection of summary statistics that determine a path (almost) uniquely. Furthermore, any continuous function on the path $X$ can be approximated arbitrarily well as a linear function on its signature [3]; the signature unravels the non-linearities on functions on the space of unparameterised paths. A VisualisationTo give an idea of what the signature terms represent physically, we consider a patient in an ICU where we are tracking their systolic blood pressure (SBP) and heart rate (HR) changing in time. This can be represented as a path in $\mathbb{R}^3$ (assuming time is included as a channel).The plot above sketches two scenarios of how such a path might look. We are assuming here an implicit time dimension for each plot such that the path is traversed from left to right along the blue line. Depth 1:The signature terms to depth 1 are simply the changes of each of the variables over the interval, in the image this is the $\Delta \text{HR}$ and $\Delta \text{SBP}$ terms. Note that these values are the same in each case. Depth 2: The second level gives us the signed areas (the shaded orange regions), where the orientation of the left most plot is such that the negatively signed area is produced whereas the second gives the positive value, and thus, at order 2 in the signature we now have sufficient information to discriminate between these two situations where in the first rise in heart rate occurs before (or at least, initially faster than) the rise in blood pressure, and vice versa. 
Depth > 2: Depths larger than 2 become more difficult to visualise graphically, however the idea is similar to that of the depth 2 case where we saw that the signature produced information on whether the increase in HR or SBP appeared to be happening first, along with some numerical quantification of how much this was happening. At higher orders the signature is doing something similar, but now with three events, rather than two. The signature picks out structural information regarding the order in which events occur. The Signature in Time-Series AnalysisThe signature is a natural tool to apply in problems related to time-series analysis. As described above it can convert multi-dimensional time-series data into static features that represent information about the sequential nature of the time-series, that can be fed through a standard machine learning model. A simplistic view of how this works is as follows:\begin{equation} \text{Model}(\text{Signature}(\text{Sequential data})) = \text{Predictions}\end{equation} Considered Signature VariationsAgain, following the work in [1] we group the variations on the signature method conceptually into:- **Augmentations** - Transformation of an input sequence or time series into one or more new sequences, so that the signature will return different information about the path.- **Windows** - Windowing operations, so that the signature can act with some locality.- **Transforms** - The choice between the signature or the logsignature transformation.- **Rescalings** - Method of signature rescaling.This is neatly represented in the following graphic, where $\phi$ represents the augmentation, $W^{i, j}$ the windowing operation, $S^N$ the signature, and $\rho_{\text{pre}/\text{post}}$ the rescaling method. Please refer to the full paper for a more comprehensive exploration into what each of these groupings means. The Sktime ModulesWe now give an introduction to the classification and transformation modules included in the sktime interface, along with an example to show how to perform efficient hyperparameter optimisation that was found to give good results in [1].
###Code
# Some additional imports we will use
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sktime.datasets import load_gunpoint
# Load an example dataset
train_x, train_y = load_gunpoint(split="train", return_X_y=True)
test_x, test_y = load_gunpoint(split="test", return_X_y=True)
###Output
D:\CMP Machine Learning\sktime-workshop-boss\sktime\utils\data_io.py:63: FutureWarning: This function has moved to datasets/_data_io, this version will be removed in V0.10
warn(
###Markdown
OverviewWe provide the following:- **sktime.transformers.panel.signature_based.SignatureTransformer** - An sklearn transformer that provides the functionality to apply the signature method with some choice of variations as noted above.- **sktime.classification.feature_based.SignatureClassifier** - This provides a simple interface to append a classifier to the SignatureTransformer class.
###Code
from sktime.classification.feature_based import SignatureClassifier
from sktime.transformations.panel.signature_based import SignatureTransformer
###Output
_____no_output_____
###Markdown
Example 1: Sequential Data -> Signature Features.Here we will give a very simple example of converting the sequential 3D GunPoint data of shape [num_batch, series_length, num_features] -> [num_batch, signature_features].
###Code
# First build a very simple signature transform module
signature_transform = SignatureTransformer(
augmentation_list=("addtime",),
window_name="global",
window_depth=None,
window_length=None,
window_step=None,
rescaling=None,
sig_tfm="signature",
depth=3,
)
# Then simply transform the stream data
print("Raw data shape is: {}".format(train_x.shape))
train_signature_x = signature_transform.fit_transform(train_x)
print("Signature shape is: {}".format(train_signature_x.shape))
###Output
Raw data shape is: (50, 1)
Signature shape is: (50, 14)
###Markdown
It then becomes extremely easy to build a time-series classification model. For example:
###Code
# Train
model = RandomForestClassifier()
model.fit(train_signature_x, train_y)
# Evaluate
test_signature_x = signature_transform.transform(test_x)
test_pred = model.predict(test_signature_x)
print("Accuracy: {:.3f}%".format(accuracy_score(test_y, test_pred)))
###Output
Accuracy: 0.740%
###Markdown
Example 2: Fine Tuning the Generalised ModelAs previously mentioned, in [1] the authors performed a large hyperparameter search over the signature variations on the full UEA archive to develop a 'Best Practices' approach to building a model. This required some fine tuning over the following parameters, as they were found to be very dataset specific: - `depth` over [1, 2, 3, 4, 5, 6]- `window_depth` over [2, 3, 4]- `RandomForestClassifier` hyperparameters.Here we show how this is easily done using the sktime framework.
###Code
from sklearn.model_selection import RandomizedSearchCV, StratifiedKFold
# Some params
n_cv_splits = 5
n_gs_iter = 20
# Random forests found to perform very well in general
estimator = RandomForestClassifier()
# The grid to be passed to an sklearn gridsearch
signature_grid = {
# Signature params
"depth": [1, 2, 3, 4, 5],
"window_name": ["dyadic"],
"augmentation_list": [["basepoint", "addtime"]],
"window_depth": [1, 2, 3, 4],
"rescaling": ["post"],
# Classifier and classifier params
"estimator": [estimator],
"estimator__n_estimators": [50, 100, 500],
"estimator__max_depth": [2, 4, 6, 8, 12, 16, 24, 32, 45, 60],
}
# Initialise the estimator
estimator = SignatureClassifier()
# Run a random grid search and return the gs object
cv = StratifiedKFold(n_splits=n_cv_splits)
gs = RandomizedSearchCV(estimator, signature_grid, cv=n_cv_splits, n_iter=n_gs_iter)
gs.fit(train_x, train_y)
# Get the best classifier
best_classifier = gs.best_estimator_
# Evaluate
train_preds = best_classifier.predict(train_x)
test_preds = best_classifier.predict(test_x)
train_score = accuracy_score(train_y, train_preds)
test_score = accuracy_score(test_y, test_preds)
print(
"Train acc: {:.3f}% | Test acc: {:.3f}%".format(
train_score * 100, test_score * 100
)
)
###Output
Train acc: 100.000% | Test acc: 96.000%
|
notebooks/figures/table5_thermo20c.ipynb | ###Markdown
Table 5 (Journal of Climate submission; Molina et al.) Table 5. Mean thermocline depth in meters, represented as the depth of the 20◦C isotherm (Kessler 1990),across the tropical Pacific during El Niño, La Niña, and mean climatology. ENSO events and climatology are derived from the years 800-1599 for the CESM1 control, 201-500 for Global and Pacific experiments, and 101-250 for the Pacific Salt experiment. **Table by: Maria J. Molina**
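The 20°C-isotherm depth used below is read directly from the `DEPTH_OF_20C` variable in the model output, but for intuition, the sketch below shows how such a depth can be obtained from a single temperature profile by linear interpolation. This is illustrative only, with made-up profile values; it is not how the dataset variable was derived.
```python
import numpy as np

# Illustrative temperature profile: depth in metres, temperature in degrees C
depth = np.array([0, 50, 100, 150, 200, 300])
temp = np.array([27.0, 25.5, 22.0, 18.5, 15.0, 11.0])

# Depth of the 20C isotherm by linear interpolation between the bracketing levels;
# np.interp expects increasing x, so interpolate depth as a function of -temperature
z20 = np.interp(-20.0, -temp, depth)
print(f"Depth of 20C isotherm: {z20:.1f} m")
```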
###Code
import xarray as xr
import matplotlib.pyplot as plt
import numpy as np
import cftime
from config import directory_figs, directory_data
#def pop_lon_indx():
# """
# Extract mask for the pacific slab region. Mask contains ones and nans. (Previous version for just nino3.4)
# """
# for_lon = xr.open_dataset('/glade/scratch/molina/amoc_exp/b.e11.B1850LENS.f09_g16.FWAtSalG02Sv.pop.h.SST.000101-005012.nc')
# mask = for_lon['SST'].where((for_lon['TLAT']<5) & (for_lon['TLAT']>-5) & (for_lon['TLONG']>-170+360) & (for_lon['TLONG']<-120+360),
# drop=False).isel(z_t=0, time=0).values
# return np.where(np.isnan(mask), np.nan, 1)
def pop_lon_indx():
"""
Extract mask for the pacslab region. Mask contains ones and nans.
"""
for_lon = xr.open_dataset(f'{directory_data}b.e11.B1850LENS.f09_g16.FWAtSalG02Sv.pop.h.SST.000101-005012.nc')
mask = for_lon['SST'].where((for_lon['TLAT']<10) & (for_lon['TLAT']>-10) & (for_lon['TLONG']>160) & (for_lon['TLONG']<-80+360),
drop=False).isel(z_t=0, time=0).values
return np.where(np.isnan(mask), np.nan, 1)
def compute_iso(threedim_array, mask):
"""
Create array of depth of isotherm using 3d iso array and 2d mask.
Args:
threedim_array (numpy array): Isotherm values.
mask (numpy array): Mask from pop_lon_indx.
Returns:
One dimensional array across Pacific slab region.
"""
newmask = np.nanmean(np.nanmean(threedim_array[:,:,:] * mask[np.newaxis,:,:], axis=0), axis=0) * 0.01
return newmask[~np.isnan(newmask)]
def for_time_series(threedim_array, mask):
"""
Create array of depth of isotherm using 3d iso array and 2d mask.
Args:
threedim_array (numpy array): Isotherm values.
mask (numpy array): Mask from pop_lon_indx.
Returns:
One dimensional array across Pacific slab region.
"""
newmask = np.nanmean(threedim_array[:,:,:] * mask[np.newaxis,:,:], axis=1) * 0.01
return newmask[~np.isnan(newmask)]
# grab lon indxs
lon_array_locs = pop_lon_indx()
# slab isotherms
iso20_g02sv = xr.open_dataset(
f'{directory_data}iso20c_FWAtSalG02Sv.nc').sel(
TIME=slice(cftime.DatetimeNoLeap(201, 1, 1, 0, 0),cftime.DatetimeNoLeap(500, 12, 1, 0, 0)))['DEPTH_OF_20C'].resample(TIME='QS-DEC').mean(skipna=True)
iso20_g04sv = xr.open_dataset(
f'{directory_data}iso20c_FWAtSalG04Sv.nc').sel(
TIME=slice(cftime.DatetimeNoLeap(201, 1, 1, 0, 0),cftime.DatetimeNoLeap(500, 12, 1, 0, 0)))['DEPTH_OF_20C'].resample(TIME='QS-DEC').mean(skipna=True)
iso20_p02sv = xr.open_dataset(
f'{directory_data}iso20c_FWAtSalP02Sv.nc').sel(
TIME=slice(cftime.DatetimeNoLeap(201, 1, 1, 0, 0),cftime.DatetimeNoLeap(500, 12, 1, 0, 0)))['DEPTH_OF_20C'].resample(TIME='QS-DEC').mean(skipna=True)
iso20_p04sv = xr.open_dataset(
f'{directory_data}iso20c_FWAtSalP04Sv.nc').sel(
TIME=slice(cftime.DatetimeNoLeap(201, 1, 1, 0, 0),cftime.DatetimeNoLeap(500, 12, 1, 0, 0)))['DEPTH_OF_20C'].resample(TIME='QS-DEC').mean(skipna=True)
iso20_psalt = xr.open_dataset(
f'{directory_data}iso20c_FWPaSalP04Sv.nc').sel(
TIME=slice(cftime.DatetimeNoLeap(101, 1, 1, 0, 0),cftime.DatetimeNoLeap(250, 12, 1, 0, 0)))['DEPTH_OF_20C'].resample(TIME='QS-DEC').mean(skipna=True)
iso20_cntrl = xr.open_dataset(
f'{directory_data}iso20c_005.nc').sel(
TIME=slice(cftime.DatetimeNoLeap(800, 1, 1, 0, 0),cftime.DatetimeNoLeap(1599, 12, 1, 0, 0)))['DEPTH_OF_20C'].resample(TIME='QS-DEC').mean(skipna=True)
iso20_g02sv = iso20_g02sv[iso20_g02sv['TIME.month']==12].values
iso20_g04sv = iso20_g04sv[iso20_g04sv['TIME.month']==12].values
iso20_p02sv = iso20_p02sv[iso20_p02sv['TIME.month']==12].values
iso20_p04sv = iso20_p04sv[iso20_p04sv['TIME.month']==12].values
iso20_psalt = iso20_psalt[iso20_psalt['TIME.month']==12].values
iso20_cntrl = iso20_cntrl[iso20_cntrl['TIME.month']==12].values
iso20_g02sv = compute_iso(iso20_g02sv, lon_array_locs)
iso20_g04sv = compute_iso(iso20_g04sv, lon_array_locs)
iso20_p02sv = compute_iso(iso20_p02sv, lon_array_locs)
iso20_p04sv = compute_iso(iso20_p04sv, lon_array_locs)
iso20_psalt = compute_iso(iso20_psalt, lon_array_locs)
iso20_cntrl = compute_iso(iso20_cntrl, lon_array_locs)
nino_iso20 = xr.open_dataset(f'{directory_data}ninoslabs_DEPTH_OF_20C.nc')
nina_iso20 = xr.open_dataset(f'{directory_data}ninaslabs_DEPTH_OF_20C.nc')
print(np.around(iso20_cntrl.mean(),1))
print(np.around(iso20_g02sv.mean(),1))
print(np.around(iso20_g04sv.mean(),1))
print(np.around(iso20_p02sv.mean(),1))
print(np.around(iso20_p04sv.mean(),1))
print(np.around(iso20_psalt.mean(),1))
print(np.around(nino_iso20['cntrl_nino'].mean().values,1))
print(np.around(nino_iso20['g02sv_nino'].mean().values,1))
print(np.around(nino_iso20['g04sv_nino'].mean().values,1))
print(np.around(nino_iso20['p02sv_nino'].mean().values,1))
print(np.around(nino_iso20['p04sv_nino'].mean().values,1))
print(np.around(nino_iso20['psalt_nino'].mean().values,1))
print(np.around(nina_iso20['cntrl_nina'].mean().values,1))
print(np.around(nina_iso20['g02sv_nina'].mean().values,1))
print(np.around(nina_iso20['g04sv_nina'].mean().values,1))
print(np.around(nina_iso20['p02sv_nina'].mean().values,1))
print(np.around(nina_iso20['p04sv_nina'].mean().values,1))
print(np.around(nina_iso20['psalt_nina'].mean().values,1))
###Output
123.5
120.1
125.3
121.5
123.7
124.9
|
python/jupyter_snippets/enumerate_zip_for_loop.ipynb | ###Markdown
[View in Colaboratory](https://colab.research.google.com/github/gmihaila/snippets_py/blob/master/enumerate_zip_for_loop.ipynb) How to enumerate and zip in the same for loop
###Code
a = [1,2,3,4,5]
b = [1,5,3,2,6]
print([i for i, (x,y) in enumerate(zip(a,b)) if x == y])
###Output
[0, 2]
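The same pattern written as an explicit loop, which can be easier to read once the body grows beyond a single expression:
```python
matches = []
for i, (x, y) in enumerate(zip(a, b)):
    # i is the position, (x, y) the paired elements of a and b
    if x == y:
        matches.append(i)
print(matches)  # [0, 2]
```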
|
notebooks/explanatory.ipynb | ###Markdown
Data Expo 2009 - Airline on-time performance by (mahmoud lotfi) key findings:01. Flights status distribution in 200802. Flight delay reasons and how much they contribute to the number of delayed flights03. The worst day to travel.04. Which Airlines have the longest delays?05. What is the worst time of day to travel? insights :01. It turns out that most flights arrive early, nearly 70%.02. We also see that the weather is not the main reason for delays. Weather only contributes to 2% of the delays.03. The worst arrival time delay occurs on Tuesday for most carriers04. B6 had the worst arrival delay and the lowest departure delay, while AQ, HA and YV have the lowest arrival delay05. Arrival delays are unstable between 12:00 AM and 05:00 AM; a large crowd of flights gathers between 05:00 AM and 06:00 AM Dataset Overview- The data consists of flight arrival and departure details for all commercial flights within the USA, from October 1987 to April 2008. - This is a large dataset: there are nearly 120 million records in total, and it takes up 1.6 gigabytes of space compressed and 12 gigabytes when uncompressed. - The data comes originally from RITA where it is described in detail. - The data is distributed as bzipped csv files. - These files have derivable variables removed, are packaged in yearly chunks and have been more heavily compressed than the originals.- In this project we will discuss flight delays for the __2008__ data set
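The cells below load pre-processed CSVs from `../data/processed/`. For reference, the raw yearly files distributed for the Data Expo can be read straight from their bzip2 archives with pandas; a minimal sketch, where the file name `2008.csv.bz2` is an assumption about how the downloaded archive is named locally:
```python
import pandas as pd

# pandas decompresses bz2 on the fly; point the path at the downloaded yearly archive
raw_2008 = pd.read_csv("2008.csv.bz2", compression="bz2")
print(raw_2008.shape)
```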
###Code
# import all packages and set plots to be embedded inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import time
%matplotlib inline
# suppress warnings from final output
import warnings
warnings.simplefilter("ignore")
# reset seaborn settings
sns.reset_orig()
# set plotting color
base_color = sns.color_palette()[0]
# load in the dataset into a pandas dataframe
t1 = time.time()
flights = pd.read_csv('../data/processed/{}.csv'.format('flights'))
t2 = time.time()
print('Elapsed loading time :', t2-t1)
# load in the dataset into a pandas dataframe
t1 = time.time()
cancelled = pd.read_csv('../data/processed/{}.csv'.format('cancelled'))
t2 = time.time()
print('Elapsed loading time :', t2-t1)
# load in the dataset into a pandas dataframe
t1 = time.time()
diverted = pd.read_csv('../data/processed/{}.csv'.format('diverted'))
t2 = time.time()
print('Elapsed loading time :', t2-t1)
###Output
Elapsed loading time : 0.047000885009765625
###Markdown
Flights status distribution in 2008
###Code
flights_early_arrival = flights[flights['ArrDelay']<0].shape[0]
flights_ontime_arrival = flights[(flights['ArrDelay']<=15) & (flights['ArrDelay']>=0)].shape[0]
flights_late_arrival = flights[(flights['ArrDelay']>15)].shape[0]
flights_cancelled = cancelled.shape[0]
flights_diverted = diverted.shape[0]
# the total number of flights must be computed from the raw counts before converting them to percentages
num_of_overall = flights_early_arrival + flights_ontime_arrival + flights_late_arrival + flights_cancelled + flights_diverted
flights_early_arrival = round(flights_early_arrival/num_of_overall,2)*100
flights_ontime_arrival = round(flights_ontime_arrival/num_of_overall,2)*100
flights_late_arrival = round(flights_late_arrival/num_of_overall,2)*100
flights_cancelled = round(flights_cancelled/num_of_overall,2)*100
flights_diverted = round(flights_diverted/num_of_overall,2)*100
Y = [flights_early_arrival, flights_ontime_arrival, flights_late_arrival, flights_cancelled, flights_diverted]
Y
plt.figure(figsize=(9,5))
X = ['early' , 'on time','late','cancelled','diverted']
Y = [flights_early_arrival, flights_ontime_arrival, flights_late_arrival, flights_cancelled, flights_diverted]
plt.bar(X, Y, width = 0.4);
plt.xlabel("Percentage");
plt.ylabel("Flight Status");
plt.title("Percentage Of Flights Status");
plt.grid();
plt.figure(figsize=(12,10))
plt.pie(Y, labels=X , explode=[0,0,0.3,0.3,0], autopct='%1.2f%%');
plt.ylabel("Flight Status");
plt.title("Percentage Of Flights Status");
###Output
_____no_output_____
###Markdown
> It turns out that most flights arrive early, nearly 70%. Flight delay reasons and how much they contribute to the number of delayed flights
###Code
df_row = pd.merge(flights.groupby(['UniqueCarrier']).agg(UniqueCarrierCount=('UniqueCarrier', 'count')),
flights[(flights['ArrDelay'] > 15)].groupby(['UniqueCarrier']).agg(ArrDelaycount=('ArrDelay', 'count')),
on='UniqueCarrier', how='inner')
df_delay = flights[(flights['ArrDelay'] > 15)]
# filter on df_delay itself (not flights) and drop the duplicated WeatherDelay condition
df_delay = df_delay[(df_delay['LateAircraftDelay'] != 0) | (df_delay['SecurityDelay'] != 0) |
                    (df_delay['NASDelay'] != 0) | (df_delay['WeatherDelay'] != 0) |
                    (df_delay['CarrierDelay'] != 0)]
df_delay['LateAircraftDelay'] = df_delay['LateAircraftDelay'].apply(lambda x: 1 if x else 0)
df_delay['SecurityDelay'] = df_delay['SecurityDelay'].apply(lambda x: 1 if x else 0)
df_delay['NASDelay'] = df_delay['NASDelay'].apply(lambda x: 1 if x else 0)
df_delay['WeatherDelay'] = df_delay['WeatherDelay'].apply(lambda x: 1 if x else 0)
df_delay['CarrierDelay'] = df_delay['CarrierDelay'].apply(lambda x: 1 if x else 0)
df_delay = df_delay.groupby(['UniqueCarrier']).agg(
# Get count of the column for each group
LateAircraftDelayCount=('LateAircraftDelay', sum),
# Get count of the column for each group
SecurityDelayCount=('SecurityDelay', sum),
# Get count of the column for each group
NASDelayCount=('NASDelay', sum),
# Get count of the column for each group
WeatherDelayCount=('WeatherDelay', sum),
# Get count of the column for each group
CarrierDelayCount=('CarrierDelay', sum),
)
df_delay = pd.merge(df_delay, df_row, on='UniqueCarrier', how='inner')
df_delay['pct'] = (df_delay['ArrDelaycount'].astype(float)/df_delay['UniqueCarrierCount'].astype(float)).round(2)
sns.pairplot(df_delay, size = 2.5);
#correlation matrix
corrmat = df_delay.corr()
f, ax = plt.subplots(figsize=(15, 10))
sns.heatmap(corrmat, vmin=-1, square=True, annot=True, fmt='.2f', cmap='vlag_r', center=0);
df_delay['pct'] = (df_delay['ArrDelaycount'].astype(float)/df_delay['UniqueCarrierCount'].astype(float)).round(2)
df_delay.head()
wethear_sum = df_delay['WeatherDelayCount'].sum()
nas_sum = df_delay['NASDelayCount'].sum()
carrier_sum = df_delay['CarrierDelayCount'].sum()
security_sum = df_delay['SecurityDelayCount'].sum()
aircraft_sum = df_delay['LateAircraftDelayCount'].sum()
#ArrDelaycount_sum = df_delay['ArrDelaycount'].sum()
num_of_overall = wethear_sum + nas_sum + carrier_sum + security_sum + aircraft_sum
wethear_sum = (wethear_sum/num_of_overall).round(2)*100
nas_sum = (nas_sum/num_of_overall).round(2)*100
carrier_sum = (carrier_sum/num_of_overall).round(2)*100
security_sum = (security_sum/num_of_overall).round(2)*100
aircraft_sum = (aircraft_sum/num_of_overall).round(2)*100
plt.figure(figsize=(10,7))
X = ['weather delays', 'nas delays', 'carrier delays', 'late aircraft', 'security delays']
Y = [wethear_sum, nas_sum, carrier_sum, aircraft_sum, security_sum]
plt.pie(Y, labels=X , explode=[0.09,0,0,0,0.3], autopct='%1.2f%%');
plt.title('Contribution of each Delay reason to overall delays');
###Output
_____no_output_____
###Markdown
>We also see that the weather is not the main reason for delays. Weather only contributes to 2% of the delays. The worst day to travel
###Code
flights
longest = flights[flights['ArrDelay']>15].groupby(['UniqueCarrier', 'DayOfWeek'])\
.agg(DepDelayMean=('DepDelay', 'mean'), ArrDelayMean=('ArrDelay', 'mean'))\
.sort_values('ArrDelayMean', ascending=False)
longest = longest.reset_index()
plt.figure(figsize=(15,7))
ax = plt.gca()
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["left"].set_visible(False)
sns.lineplot(data=longest, x="UniqueCarrier", y="ArrDelayMean", hue="DayOfWeek", ax=ax)
plt.title("Distribution of Arrival Delay Mean over days of week with Unique Carrier");
plt.ylim(top=24.1); # adjust the top leaving bottom unchanged
plt.ylim(bottom=21); # adjust the bottom leaving top unchanged
plt.grid()
###Output
_____no_output_____
###Markdown
> The worst arrival time delay occurs on Tuesday for most carriers
###Code
plt.figure(figsize=(10,7))
ax = plt.gca()
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
sns.scatterplot(data=longest, x="UniqueCarrier", y="ArrDelayMean", hue="DayOfWeek", ax=ax)
plt.grid()
###Output
_____no_output_____
###Markdown
Which Airlines have the longest delays
###Code
longest = flights[flights['ArrDelay']>15].groupby('UniqueCarrier')\
.agg(DepDelayMean=('DepDelay', 'mean'), ArrDelayMean=('ArrDelay', 'mean'))\
.sort_values('ArrDelayMean', ascending=False)
longest = longest.reset_index()
plt.figure(figsize=(12,5))
ax = plt.gca()
sns.lineplot(data=longest, x="UniqueCarrier", y="ArrDelayMean", ax=ax, legend=False, label='Arrive Delay')
ax2 = ax.twinx()
sns.lineplot(data=longest, x="UniqueCarrier", y="DepDelayMean", ax=ax2, legend=False, color="r", label='Depature Delay')
plt.title("Distribution of Mean Arrival Delay with Unique Carrier");
ax.figure.legend();
#plt.grid()
###Output
_____no_output_____
###Markdown
> B6 had the worst arrival delay and the lowest departure delay, while AQ, HA and YV have the lowest arrival delay. What is the worst time of day to travel?
###Code
flights['hours'] = pd.to_datetime(flights['CRSDepTime'], format='%I:%M %p').dt.hour
flights['minutes'] = pd.to_datetime(flights['CRSDepTime'], format='%I:%M %p').dt.minute
flights = flights.sort_values(['hours', 'minutes'])
flights = flights.drop(columns=['minutes'])
df = flights[(flights['ArrDelay'] > 15)].groupby(['hours', 'DayOfWeek'])\
.agg(ArrDelayCount=('ArrDelay', 'count'), ArrDelaysum=('ArrDelay', 'mean'))
df = df.reset_index()
df
df.plot(x='hours', y='ArrDelaysum')
plt.scatter(data = df, x='hours', y='ArrDelaysum');
plt.colorbar();
# substitute number with actual day of week name
hr = ['00:00 AM', '01:00 AM', '02:00 AM', '03:00 AM', '04:00 AM', '05:00 AM',
'06:00 AM', '07:00 AM', '08:00 AM', '09:00 AM', '10:00 AM', '11:00 AM','12:00 PM',
'01:00 PM', '02:00 PM', '03:00 PM', '04:00 PM', '05:00 PM', '06:00 PM', '07:00 PM',
'08:00 PM', '09:00 PM', '10:00 PM', '11:00 PM']
for i in df.hours.unique():
if str(i).isnumeric():
df.hours.replace(i,hr[int(i)], inplace=True)
plt.figure(figsize=(15,5))
ax = plt.gca()
plt.title("Distribution Of average Arrival Delay of Flights over day hours")
plt.xticks(ticks = df.hours.index, rotation=45)
for day in df.DayOfWeek.unique():
sns.lineplot(data = df[df['DayOfWeek'] == day], x='hours', y='ArrDelaysum', ax=ax, legend=False, label=day)
plt.grid()
ax.legend();
plt.figure(figsize=(15,5))
ax = plt.gca()
plt.title("Distribution Of average Arrival Delay of Flights over day hours")
plt.xticks(ticks = df.hours.index, rotation=45)
for day in df.DayOfWeek.unique():
sns.lineplot(data = df[df['DayOfWeek'] == day], x='hours', y='ArrDelayCount', ax=ax, legend=False, label=day)
plt.grid()
ax.legend();
###Output
_____no_output_____
###Markdown
- Arrival delays are unstable between 12:00 AM and 05:00 AM- A large crowd of flights gathers between 05:00 AM and 06:00 AM
###Code
plt.figure(figsize=(15,7))
ax = plt.gca()
plt.xticks(ticks = df.hours.index, rotation=45)
plt.scatter(data = df[df['DayOfWeek'] == 'Friday'], x='hours', y='ArrDelaysum');
plt.title("Distribution Of Number of Flights over day hours")
plt.grid()
plt.figure(figsize=(15,7))
ax = plt.gca()
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["left"].set_visible(False)
df.plot.bar(ax=ax);
plt.ylabel('count')
plt.yscale('log')
plt.grid()
###Output
_____no_output_____
###Markdown
> Once you're ready to finish your presentation, check your output by usingnbconvert to export the notebook and set up a server for the slides. From theterminal or command line, use the following expression:> > `jupyter nbconvert .ipynb --to slides --post serve --template output_toggle`> This should open a tab in your web browser where you can scroll through yourpresentation. Sub-slides can be accessed by pressing 'down' when viewing its parentslide. Make sure you remove all of the quote-formatted guide notes like this onebefore you finish your presentation!
###Code
!jupyter nbconvert ./explanatory.ipynb --to slides --template ./output_toggle.tpl --post serve
!jupyter nbconvert explanatory.ipynb --to slides --post serve --no-input --no-prompt
###Output
[NbConvertApp] Converting notebook explanatory.ipynb to slides
[NbConvertApp] Writing 1014756 bytes to explanatory.slides.html
[NbConvertApp] Redirecting reveal.js requests to https://cdnjs.cloudflare.com/ajax/libs/reveal.js/3.5.0
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\Scripts\jupyter-nbconvert-script.py", line 10, in <module>
sys.exit(main())
File "C:\ProgramData\Anaconda3\lib\site-packages\jupyter_core\application.py", line 254, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\traitlets\config\application.py", line 845, in launch_instance
app.start()
File "C:\ProgramData\Anaconda3\lib\site-packages\nbconvert\nbconvertapp.py", line 350, in start
self.convert_notebooks()
File "C:\ProgramData\Anaconda3\lib\site-packages\nbconvert\nbconvertapp.py", line 524, in convert_notebooks
self.convert_single_notebook(notebook_filename)
File "C:\ProgramData\Anaconda3\lib\site-packages\nbconvert\nbconvertapp.py", line 491, in convert_single_notebook
self.postprocess_single_notebook(write_results)
File "C:\ProgramData\Anaconda3\lib\site-packages\nbconvert\nbconvertapp.py", line 463, in postprocess_single_notebook
self.postprocessor(write_results)
File "C:\ProgramData\Anaconda3\lib\site-packages\nbconvert\postprocessors\base.py", line 28, in __call__
self.postprocess(input)
File "C:\ProgramData\Anaconda3\lib\site-packages\nbconvert\postprocessors\serve.py", line 90, in postprocess
http_server.listen(self.port, address=self.ip)
File "C:\ProgramData\Anaconda3\lib\site-packages\tornado\tcpserver.py", line 151, in listen
sockets = bind_sockets(port, address=address)
File "C:\ProgramData\Anaconda3\lib\site-packages\tornado\netutil.py", line 161, in bind_sockets
sock.bind(sockaddr)
OSError: [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted
|
src/BLM_4.ipynb | ###Markdown
BLM Activity and Sentiment AnalysisSteps:1. Divide each major period into 3 sub-periods using change point analysis. This was performed in `tweet_counts.ipynb`.2. For each period... * Process all tweets * Process each tweet through `TweetsManager.process_tweet()` * Call `TweetsManager.process_deferred_interactions()` to handle retweet/reply corner cases. * Call `TweetsManager.analyze_graph()` to detect communities. * Discard too-small communities. * Assign BLM stance to communities. * Generate reports on largest communities. * Manually assign stance to those communities. * Build/use prediction model for remaining communities. * Save everything.
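The cells below carry out these steps for period 4 only. A compact sketch of the same ingest/analysis flow wrapped as a reusable helper is shown here; it uses only the calls that appear later in this notebook, and the community-stance and reporting steps are elided:
```python
def process_period(es, es_idx, query_body, unique_tweets_threshold=10):
    """Ingest all tweets matched by query_body and detect communities for one period."""
    tm = TweetsManager()
    for result in scan(es, index=es_idx, query=query_body):
        tm.process_tweet(result["_source"])
    tm.process_deferred_interactions()   # handle retweet/reply corner cases
    tm.analyze_graph(n_iterations=10)    # community detection
    tm.filter_low_activity_communities(unique_tweets_threshold=unique_tweets_threshold)
    return tm
```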
###Code
from collections import Counter, defaultdict
import json
import numpy as np
from os import path
import pandas as pd
import re
from string import Template
from typing import List
from elasticsearch import Elasticsearch as ES
from elasticsearch.helpers import scan
from blm_activity_db import BlmActivityDb
from community_classifier import get_blm_classifier, get_three_class_classifier
from community_report import generate_init_community_report
from tweet_mgr import TweetsManager, CommunityActivity, Stance
from tweet_sentiment import EmoScores, PronounCounts, SentimentAnalysis
# Configurable parameters
start_date = "2020-01-01"
end_date = "2020-05-24"
es_idx = 'tweets2'
# Template for query_body
#query_body = {
# "query": {
# "range": {
# "doc.created_at": {
# "gte": "Sun Nov 23 00:00:00 +0000 2014",
# "lt": "Wed Dec 24 00:00:00 +0000 2014"
# }
# }
# }
#}
query_body = {
"query": {
"range": {
"doc.created_at": {
"gte": "Wed Jan 01 00:00:00 +0000 2020",
"lt": "Mon May 25 00:00:00 +0000 2020"
}
}
}
}
period = 4
num_init_communities = 40
num_exemplars = 10
tm = TweetsManager()
es = ES(hosts=["localhost"])
scan_iter = scan(es, index=es_idx, query=query_body)
for result in scan_iter:
tweet = result['_source']
tm.process_tweet(tweet)
def get_tweet_text_by_id(id_):
doc = es.get(index=es_idx, id=id_)
return doc['_source']['doc']['text']
tm.process_deferred_interactions()
tm.analyze_graph(n_iterations=10)
community_size = Counter() # num_accounts -> num of communities having that size
unique_tweets = Counter() # num_unique_tweets -> num of communities with that activity
for _, accounts in tm.community_user_map.items():
community_size[len(accounts)] += 1
for c_activity in tm.community_activity_map.values():
num_unique_tweets = c_activity.num_tweets - len(c_activity.retweet_sentiment_analyses)
unique_tweets[num_unique_tweets] += 1
total_unique_tweets = {k: k*v for k, v in unique_tweets.items()}
for k, v in total_unique_tweets.items():
print(k, ": ", v)
print("total tweets:", len(tm.tweets))
print("total unique tweets:", sum(v for v in total_unique_tweets.values()))
print(len(tm.user_community_map), "accounts in", len(tm.community_user_map), "communities")
print("If 10 used as threshold for unique tweets per community...")
num_communities = sum(v for k, v in unique_tweets.items() if k >= 10)
num_unique_tweets = sum(v for k, v in total_unique_tweets.items() if k >= 10)
print(num_communities, "communities and", num_unique_tweets, "unique tweets remain.")
tm.filter_low_activity_communities(unique_tweets_threshold=10)
# Get Initial Report
report_dir = f'../data/Reports/{period}/'
report_name = "Largest Communities Hashtags and Tweets"
report_path = report_dir + f"{report_name}.md"
report = f"# {report_name} in Period {period}\n\n"
# Add section for each of top num_init_communities by membership count
comm_user_counts = sorted(tm.community_user_map.items(), key=lambda x: len(x[1]), reverse=True)
# derive inter-community replies, retweets
top_community_ids = set(x[0] for i, x in enumerate(comm_user_counts) if i < num_init_communities)
reply_counter = Counter()
replied_to_counter = Counter()
retweeted_counter = Counter()
for (_, comm_reply), count in tm.inter_comm_reply_counter.items():
if comm_reply.replying in top_community_ids:
reply_counter[comm_reply.replying] += count
if comm_reply.replied_to in top_community_ids:
replied_to_counter[comm_reply.replied_to] += count
for (_, comm_retweet), count in tm.inter_comm_retweet_counter.items():
if comm_retweet.retweeted in top_community_ids:
retweeted_counter[comm_retweet.retweeted] += count
# extract other community metrics, create report section
for k, (comm_id, members) in enumerate(comm_user_counts):
if k == num_init_communities:
break
num_members = len(members)
ca: CommunityActivity = tm.community_activity_map[comm_id]
num_tweets = ca.num_tweets
num_retweets = sum(count for count in ca.retweet_counter.values())
# influence ranks
ranks = []
for member in members:
ranks.append(tm.user_activity[member].influence_rank)
top10_influence_ranks = sorted(ranks)[:10]
# memes
hashtags = []
ht_counts = []
meme_counts = sorted(ca.meme_counter.items(), key=lambda x:x[1], reverse=True)
for i, (tag, count) in enumerate(meme_counts):
if i == num_exemplars:
break
hashtags.append(tag)
ht_counts.append(count)
# retweets
tweet_ids = []
rt_counts = []
retweet_counts = sorted(ca.retweet_counter.items(), key=lambda x:x[1], reverse=True)
for i, (tweet_id, count) in enumerate(retweet_counts):
if i == num_exemplars:
break
tweet_ids.append(tweet_id)
rt_counts.append(count)
rts = []
for id_ in tweet_ids:
if id_ in tm.tweets:
rts.append(tm.tweets[id_])
else:
rts.append(get_tweet_text_by_id(id_))
report += generate_init_community_report(
comm_id,
num_members,
num_tweets,
num_retweets,
hashtags,
ht_counts,
rts,
rt_counts,
retweeted_counter[comm_id],
reply_counter[comm_id],
replied_to_counter[comm_id],
top10_influence_ranks,
)
with open(report_path, 'w', encoding="utf-8") as f:
f.write(report)
counter_comm_ids = [46]
excluded_comm_ids = [12, 22, 28]
# blm_comm_ids = [i for i in range(num_init_communities) if not i in counter_comm_ids]
blm_comm_ids = [x[0] for i, x in enumerate(comm_user_counts)
if i < num_init_communities and
x[0] not in counter_comm_ids and
x[0] not in excluded_comm_ids]
def get_tweet_texts(tweet_ids, tm):
texts = []
for id_ in tweet_ids:
if id_ in tm.tweets:
texts.append(tm.tweets[id_])
else:
texts.append(get_tweet_text_by_id(id_))
return texts
blm_retweet_set = set()
counter_retweet_set = set()
excluded_retweet_set = set()
for k, (comm_id, _ ) in enumerate(comm_user_counts):
if k == num_init_communities:
break
num_exemplars = 288 if comm_id not in blm_comm_ids else 18
retweet_counts = sorted(
tm.community_activity_map[comm_id].retweet_counter.items(),
key=lambda x:x[1],
reverse=True
)
for i, (tweet_id, _ ) in enumerate(retweet_counts):
if i == num_exemplars:
break
if comm_id in counter_comm_ids:
counter_retweet_set.add(tweet_id)
elif comm_id in excluded_comm_ids:
excluded_retweet_set.add(tweet_id)
else:
blm_retweet_set.add(tweet_id)
blm_retweets = get_tweet_texts(blm_retweet_set, tm)
counter_retweets = get_tweet_texts(counter_retweet_set, tm)
excluded_retweets = get_tweet_texts(excluded_retweet_set, tm)
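# Train a three-class stance classifier on top retweets from the manually labelled communities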
blm_clf, cv_results = get_three_class_classifier(blm_retweets, counter_retweets, excluded_retweets)
results_df = pd.DataFrame(cv_results)
print(results_df)
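# For each remaining (smaller) community, predict a stance from its most-retweeted tweets,
# weighting each prediction by retweet count; communities with near-zero scores stay unknown/excluded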
num_samples_per_comm = 20
for k, (comm_id, _ ) in enumerate(comm_user_counts):
if k < num_init_communities:
continue
sample_tweet_ids = []
counts = []
retweet_counts = sorted(
tm.community_activity_map[comm_id].retweet_counter.items(),
key=lambda x:x[1],
reverse=True
)
for i, (tweet_id, count) in enumerate(retweet_counts):
if i == num_samples_per_comm:
break
sample_tweet_ids.append(tweet_id)
counts.append(count)
if sum(counts) < num_samples_per_comm:
excluded_comm_ids.append(comm_id)
continue
sample_tweets = get_tweet_texts(sample_tweet_ids, tm)
stance_predictions = blm_clf.predict(sample_tweets)
weighted_sum = np.dot(np.array(counts), np.array(stance_predictions))
stance_probability = weighted_sum / sum(counts)
if stance_probability < -0.3:
counter_comm_ids.append(comm_id)
elif stance_probability > 0.3:
blm_comm_ids.append(comm_id)
else:
excluded_comm_ids.append(comm_id)
print(f"counter communities: {counter_comm_ids}")
print(f"{len(excluded_comm_ids)} unknown communities")
print(f"{len(blm_comm_ids)} BLM communities")
for id_ in counter_comm_ids:
tm.community_activity_map[id_].stance = Stance.CounterProtest
for id_ in excluded_comm_ids:
tm.community_activity_map[id_].stance = Stance.Unknown
for id_ in blm_comm_ids:
tm.community_activity_map[id_].stance = Stance.Protest
len(tm.community_user_map), len(tm.community_activity_map)
set(k for k in tm.community_activity_map) - set(k for k in tm.community_user_map)
for i, (c_id, accts) in enumerate(comm_user_counts):
if i == num_init_communities:
break
print(c_id, len(accts))
db = BlmActivityDb()
db.save_tweets_mgr(tm, period)
num_examples = 25 # i.e., number of tweets or memes to display
movement_template = Template('''
## MOVEMENT $movement
Communities: $num_communities
Members: $num_members
Retweets: $num_retweets
Tweets: $num_tweets
### Top Hashtags
| Count | Hashtag |
|------:|:------|
$hashtag_list
### Top Retweets
| Count | Tweet |
|------:|:------|
$retweet_list
### Sentiment
All Tweet Polarity = $at_polarity
Retweet Polarity = $rt_polarity
### Emotions
| Emotion | All Tweets | Retweets |
|------:|:------:|:-------|
$emo_list
### Pronoun Usage
| Person | All Tweets | Retweets |
|------:|:------:|:-------|
$pronoun_columns
''')
def printable_top_hashtag_list(meme_counter):
top_memes = sorted(meme_counter.items(), key = lambda x: x[1], reverse = True)
hashtag_list = ""
for i, (ht, count) in enumerate(top_memes):
if i == num_examples:
break
hashtag_list += f"| {count} | {ht} |\n"
return hashtag_list
line_feeds = re.compile("[\r\n]")
def printable_top_retweet_list(retweet_counter):
top_retweets = sorted(retweet_counter.items(), key = lambda x: x[1], reverse = True)
retweet_list = ""
for i, (tweet_id, count) in enumerate(top_retweets):
if i == num_examples:
break
tweet = get_tweet_text_by_id(tweet_id)
tweet = line_feeds.sub('', tweet)
retweet_list += f"| {count} | {tweet} |\n"
return retweet_list
def printable_emo_scores_columns(left: EmoScores, right: EmoScores):
emo_list = ""
emo_list += f"| trust | {round(left.trust, 3)} | {round(right.trust, 3)} |\n"
emo_list += f"| anticipation | {round(left.anticipation, 3)} | {round(right.anticipation, 3)} |\n"
emo_list += f"| joy | {round(left.joy, 3)} | {round(right.joy, 3)} |\n"
emo_list += f"| surprise | {round(left.surprise, 3)} | {round(right.surprise, 3)} |\n"
emo_list += f"| anger | {round(left.anger, 3)} | {round(right.anger, 3)} |\n"
emo_list += f"| disgust | {round(left.disgust, 3)} | {round(right.disgust, 3)} |\n"
emo_list += f"| fear | {round(left.fear, 3)} | {round(right.fear, 3)} |\n"
emo_list += f"| sadness | {round(left.sadness, 3)} | {round(right.sadness, 3)} |\n"
return emo_list
def printable_pronoun_usage_columns(left: PronounCounts, right: PronounCounts):
printable_columns = ""
printable_columns += f"| First Singular | {round(left.first_singular, 3)} | {round(right.first_singular, 3)} |\n"
printable_columns += f"| First Plural | {round(left.first_plural, 3)} | {round(right.first_plural, 3)} |\n"
printable_columns += f"| Second | {round(left.second,3)} | {round(right.second, 3)} |\n"
printable_columns += f"| Third | {round(left.third, 3)} | {round(right.third, 3)} |\n"
return printable_columns
def store_movement_reports(movement, report_dir, comm_ids, tm):
'''Write files with salient data on BLM or counter movement during a period
Parameters:
-----------
movement : str
"BLM" or "Counter"
report_dir : str
FS directory where reports are to be written
comm_ids : list of int
IDs of communities in movement
tm : TweetManager instance
'''
# movement stats
## counts
num_communities = len(comm_ids)
if num_communities == 0:
return
num_members = 0
total_tweets = 0
total_retweets = 0
meme_counter = Counter()
retweet_counter = Counter()
retweet_pc, retweet_emo, retweet_sentiment = PronounCounts(), EmoScores(), 0.0
all_tweet_pc, all_tweet_emo, all_tweet_sentiment = PronounCounts(), EmoScores(), 0.0
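    # Accumulate tweet-count-weighted sums across the movement's communities;
    # they are normalised into per-tweet averages after this loop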
for community_id in comm_ids:
num_members += len(tm.community_user_map[community_id])
c_activity = tm.community_activity_map[community_id]
total_tweets += c_activity.num_tweets
for tweet_id, count in c_activity.retweet_counter.items():
total_retweets += count
retweet_counter[tweet_id] += count
for meme, count in c_activity.meme_counter.items():
meme_counter[meme] += count
rss = c_activity.retweet_sentiment_summary
num_retweets = len(c_activity.retweet_sentiment_analyses)
retweet_pc += rss.pronoun_counts * num_retweets
retweet_emo += rss.emo_scores * num_retweets
retweet_sentiment += rss.sentiment * num_retweets
atss = c_activity.all_sentiment_summary
all_tweet_pc += atss.pronoun_counts * c_activity.num_tweets
all_tweet_emo += atss.emo_scores * c_activity.num_tweets
all_tweet_sentiment += atss.sentiment * c_activity.num_tweets
retweet_pc /= total_retweets
retweet_emo /= total_retweets
retweet_sentiment /= total_retweets
all_tweet_pc /= total_tweets
all_tweet_emo /= total_tweets
all_tweet_sentiment /= total_tweets
## 25 most important hashtags
hashtag_list = printable_top_hashtag_list(meme_counter)
## 25 most retweeted
retweet_list = printable_top_retweet_list(retweet_counter)
## emotions
emo_list = printable_emo_scores_columns(left=all_tweet_emo, right=retweet_emo)
## Write to file
subs = {
"movement": movement,
"num_communities": num_communities,
"num_members": num_members,
"num_tweets": total_tweets,
"num_retweets": total_retweets,
"hashtag_list": hashtag_list,
"retweet_list": retweet_list,
"at_polarity": round(all_tweet_sentiment, 3),
"rt_polarity": round(retweet_sentiment, 3),
"emo_list": emo_list,
"pronoun_columns": printable_pronoun_usage_columns(all_tweet_pc, retweet_pc),
}
movement_summary = movement_template.safe_substitute(subs)
report_name = f"{movement}_summary.md"
report_path = path.join(report_dir, report_name)
with open(report_path, 'w', encoding="utf-8") as f:
f.write(movement_summary)
store_movement_reports("Counter", report_dir, counter_comm_ids, tm)
store_movement_reports("BLM", report_dir, blm_comm_ids, tm)
# serialize the graph
graph_file_name = "graph.pkl"
dir_stem = "D:/BLM-db/graphs/" + f"{period}/"
graph_file_path = path.join(dir_stem, graph_file_name)
tm.urg.g.write_pickle(graph_file_path, version = -1)
stance_and_previous_activity_template = Template("""
## Analysis by Previous Activity for Stance $stance
Number of previously active accounts: $num_experienced
Number of first-time accounts: $num_noob
### Activity
| Activity | No Previous Activity | Previously Active |
|------:|:------:|:-------|
| Avg Tweets | $noob_tweets | $experienced_tweets |
| Avg Retweets | $noob_retweets | $experienced_retweets |
| Avg Replies | $noob_replies | $experienced_replies |
### Sentiment Analysis
| Measure | No Previous Activity | Previously Active |
|------:|:------:|:-------|
| Avg Sentiment | $noob_sentiment | $experienced_sentiment |
$emo_list
### Pronoun Usage
| Pronoun | No Previous Activity | Previously Active |
|------:|:------:|:-------|
$pronoun_columns
### Top Memes
#### No Previous Activity
| Count | Hashtag |
|------:|:------|
$noob_hashtag_list
#### Previously Active
| Count | Hashtag |
|------:|:------|
$experienced_hashtag_list
""")
global_experienced_accounts = db.get_account_list(end_period = period - 1)
global_experienced_accounts = set(global_experienced_accounts)
def activity_and_sentiment_for_accounts(accounts: List[str], tm: TweetsManager):
num_accounts = len(accounts)
num_tweets, num_retweets, num_replies = 0, 0, 0
sentiment = 0.0
pronoun_counts = PronounCounts()
emo_scores = EmoScores()
meme_counter = Counter()
if num_accounts > 0:
for account_id in accounts:
ua = tm.user_activity[account_id]
num_tweets += ua.tweet_count
num_retweets += ua.retweet_count
num_replies += ua.reply_count
sentiment += ua.sentiment_summary.sentiment
pronoun_counts += ua.sentiment_summary.pronoun_counts
emo_scores += ua.sentiment_summary.emo_scores
for meme, count in ua.meme_counter.items():
meme_counter[meme] += count
num_tweets /= num_accounts
num_retweets /= num_accounts
num_replies /= num_accounts
sentiment /= num_accounts
pronoun_counts /= num_accounts
emo_scores /= num_accounts
sentiment_analysis = SentimentAnalysis(pronoun_counts, emo_scores, sentiment)
return num_accounts, num_tweets, num_retweets, num_replies, sentiment_analysis, meme_counter
def publish_experience_analysis(stance: str, community_ids: List[int], tm: TweetsManager):
if len(community_ids) == 0:
return
experienced_accounts = []
noob_accounts = []
for community_id in community_ids:
for user_id in tm.community_user_map[community_id]:
if user_id in global_experienced_accounts:
experienced_accounts.append(user_id)
else:
noob_accounts.append(user_id)
num_noob, noob_tweets, noob_retweets, noob_replies, noob_sa, noob_memes = \
activity_and_sentiment_for_accounts(noob_accounts, tm)
num_exp, exp_tweets, exp_retweets, exp_replies, exp_sa, exp_memes = \
activity_and_sentiment_for_accounts(experienced_accounts, tm)
subs = {
"stance": stance,
"num_noob": num_noob,
"num_experienced": num_exp,
"noob_tweets": round(noob_tweets, 3),
"experienced_tweets": round(exp_tweets, 3),
"noob_retweets": round(noob_retweets, 3),
"experienced_retweets": round(exp_retweets, 3),
"noob_replies": round(noob_replies, 3),
"experienced_replies": round(exp_replies, 3),
"noob_sentiment": round(noob_sa.sentiment, 3),
"experienced_sentiment": round(exp_sa.sentiment, 3),
"emo_list": printable_emo_scores_columns(left=noob_sa.emo_scores, right=exp_sa.emo_scores),
"pronoun_columns": printable_pronoun_usage_columns(left=noob_sa.pronoun_counts, right=exp_sa.pronoun_counts),
"noob_hashtag_list": printable_top_hashtag_list(noob_memes),
"experienced_hashtag_list": printable_top_hashtag_list(exp_memes),
}
experience_summary = stance_and_previous_activity_template.safe_substitute(subs)
report_name = f"{stance}_experience_analysis.md"
report_path = path.join(report_dir, report_name)
with open(report_path, 'w', encoding="utf-8") as f:
f.write(experience_summary)
publish_experience_analysis("BLM", blm_comm_ids, tm)
publish_experience_analysis("CounterProtest", counter_comm_ids, tm)
# overview report
overview_template = Template('''
## OVERVIEW of PERIOD $start_date to $end_date
| What | How Many |
|:-------|--------:|
| Tweets | $num_tweets |
| Retweets | $num_retweets |
| Communities | $num_communities |
| Accounts | $num_accounts |
| Size of largest community | $largest_comm_size |
''')
total_tweets = sum(ua.tweet_count for ua in tm.user_activity.values())
total_retweets = sum(ua.retweet_count for ua in tm.user_activity.values())
num_communities = len(tm.community_user_map)
num_accounts = len(tm.user_community_map)
largest_comm_size = len(comm_user_counts[0][1])
subs = {
'start_date': start_date,
'end_date': end_date,
'num_tweets': total_tweets,
'num_retweets': total_retweets,
'num_communities': num_communities,
'num_accounts': num_accounts,
'largest_comm_size': largest_comm_size,
}
overview_report_name = "OverviewReport.md"
overview_path = path.join(report_dir, overview_report_name)
overview = overview_template.safe_substitute(subs)
with open(overview_path, 'w', encoding="utf-8") as f:
f.write(overview)
###Output
_____no_output_____ |
tutorial/700_qgate_en.ipynb | ###Markdown
Quantum Computing
By using a large quantum computing simulation, you can get an efficient development environment before moving to an actual quantum computer. This time we introduce how to install "Qgate", an NVIDIA CUDA based quantum computing simulator, together with "Blueqat", a Python library/SDK. Let's start by installing these tools. If you want to use the power of the GPU, you need to turn on the "GPU" mode in the configuration first.
Install
Installation is quite easy; just run the code below.
###Code
!wget https://github.com/shinmorino/qgate/raw/gh-pages/packages/0.2/qgate-0.2.1-cp36-cp36m-manylinux1_x86_64.whl
!pip install qgate-0.2.1-cp36-cp36m-manylinux1_x86_64.whl blueqat
###Output
_____no_output_____
###Markdown
Example
After clicking and running the code above, we have blueqat and qgate installed together. That's all. Now let's check the examples. You may also just click and run; this gives us a simulation of 25 qubits.
###Code
from blueqat import Circuit
Circuit(25).h[:].m[:].run(backend='qgate',shots=100)
###Output
_____no_output_____
###Markdown
You should get a lot of samples from the state vector.
GPU mode
Now we use the power of the GPU. Turn the GPU mode on and just run the code.
###Code
Circuit(30).h[:].m[:].run(backend='qgate', runtime='cuda', shots=100)
###Output
_____no_output_____ |
english-language-arts-and-data-science/shakespeare-workshop-notebook.ipynb | ###Markdown
 Shakespeare and Statistics
*Image from https://en.wikipedia.org/wiki/Droeshout_portrait*
Can art and science be combined? Natural language processing allows us to use the same statistical skills you might learn in a science class, such as counting up members of a population and looking at their distribution, to gain insight into the written word. Here's an example of what you can do. Let's consider the following question: What are the top 20 phrases in Shakespeare's Macbeth?
Normally when we study Shakespeare we critically read his plays and study the plot, characters, and themes. While this is definitely interesting and useful, we can gain very different insights by taking a multidisciplinary approach.
This is something we would probably never want to do if we had to do it by hand. Imagine getting out your clipboard, writing down every different word or phrase you come across and then counting how many times that same word or phrase reappears. Check out how quickly it can be done using Python code in this Jupyter notebook.
Loading the text
There are many public domain works available at [Project Gutenberg](http://www.gutenberg.org). You can [search](http://www.gutenberg.org/ebooks) or [browse](http://www.gutenberg.org/catalog) to find works that you are interested in analysing.
We are going to search for the play *Macbeth* by William Shakespeare. On the **Download This eBook** page we'll copy the `Plain Text UTF-8` link, then use the `Requests` Python library to download it into a variable called `macbeth`. We can then refer to it by using `macbeth` at any point from here on.
###Code
text_link = 'http://www.gutenberg.org/files/1533/1533-0.txt'
import requests
r = requests.get(text_link) # get the online book file
r.encoding = 'utf-8' # specify the type of text encoding in the file
macbeth = r.text.split('***')[2] # get the part after the header
macbeth = macbeth.replace("’","'").replace("“",'"').replace("”",'"') # replace any 'smart quotes'
###Output
_____no_output_____
###Markdown
For example, we can just print it out to see that we've grabbed the correct document.
###Code
print(macbeth)
###Output
_____no_output_____
###Markdown
Looks good! But that's a lot of reading to do. And a lot of phrases to count and keep track of. Here's where some Python libraries come into play.
Crunching the text
`noun_phrases` will grab groups of words that have been identified as phrases containing nouns. This isn't always 100% correct. English can be a challenging language even for machines, and sometimes the files on [Project Gutenberg](http://www.gutenberg.org) contain errors that make it even harder, but it can usually do a pretty good job.
This code cell installs two Python libraries for natural language processing, [textblob](https://textblob.readthedocs.io/en/dev) and [nltk](https://www.nltk.org), then downloads a [corpora data file](http://www.nltk.org/nltk_data) that will allow us to process the text.
This may take a while to run. On the left you will see `In [*]:` while it is running. Once it finishes you should see the output printed on the screen.
###Code
import nltk
try:
nltk.data.find('tokenizers/punkt')
except LookupError:
nltk.download('punkt')
try:
nltk.data.find('tokenizers/brown')
except LookupError:
nltk.download('brown')
from textblob import TextBlob
macbeth_phrases = TextBlob(macbeth).noun_phrases
print(macbeth_phrases)
###Output
_____no_output_____
###Markdown
What you're seeing is no longer raw text. It's now a list of strings. How long is the list? Let's find out. `len` is short for "length", and it will tell you how many items are in any list.
###Code
len(macbeth_phrases)
###Output
_____no_output_____
###Markdown
Looks like we have over 3000 noun phrases. We don't yet know how many of them are repeated.
Counting everything up
Here's where this starts to look like a real science project! Let's count the unique phrases and create a table of how many times they occur. They'll be sorted from most to least frequent.
###Code
import pandas as pd
unique_texts = list(set(macbeth_phrases))
text_counts = {text: macbeth_phrases.count(text) for text in unique_texts}
sorted_texts = sorted(text_counts.items(), key=lambda x: x[1], reverse=True)
macbeth_counts = pd.DataFrame(data=sorted_texts, columns=['text', 'count'])
macbeth_counts
###Output
_____no_output_____
###Markdown
There are a lot of them, so we'll use `.head(20)` which means show the top twenty. In these lists, the first item is always number 0.
###Code
macbeth_counts.head(20)
###Output
_____no_output_____
###Markdown
There we have it! The top 20 phrases in Macbeth! Let's put those in a plot.
Plotting the results
You can create almost any kind of plot or other visual representation of observations like this in Callysto. We'll use the `Plotly Express` library to produce a bar chart, ordered from most to least frequent word.
###Code
import plotly.express as px
macbeth_counts_sorted = macbeth_counts.head(20).sort_values(by='count', ascending=False)
px.bar(macbeth_counts_sorted, x='text', y='count', title='Phrase Frequencies in MacBeth')
###Output
_____no_output_____
###Markdown
Or if you would prefer a horizontal bar chart.
###Code
macbeth_counts_sorted = macbeth_counts.head(20).sort_values(by='count', ascending=True)
px.bar(macbeth_counts_sorted, y='text', x='count', orientation='h', title='Phrase Frequencies in MacBeth')
###Output
_____no_output_____
###Markdown
Surprise, surprise. *Macbeth* is the top phrase in Macbeth. Our main character is mentioned more than twice as often as the next most frequent phrase, *Macduff*, and more than three times as often as *Lady Macbeth*.
Thinking about the results
One of the first things we might realize from this simple investigation is the importance of proper nouns. Phrases containing the main characters occur far more frequently than other phrases, and the main character of the play is mentioned far more times than any other characters.
Are these observations particular to Macbeth? Or to Shakespeare's plays? Or are they more universal?
Now that we've gone through Macbeth, how hard could it be to look at other texts?
Let's define a function to download an ebook from a text url, pull out all the noun phrases, count them up, and plot them. We can then use this for any ebook text that we would like to visualize.
###Code
def word_frequency_plot(text_url):
r = requests.get(text_url)
r.encoding = 'utf-8'
if 'gutenberg' in text_url:
text = r.text.split('***')[2]
else:
text = r.text
text = text.replace("’","'").replace("“",'"').replace("”",'"')
phrases = TextBlob(text).noun_phrases
unique_texts = list(set(phrases))
text_counts = {text: phrases.count(text) for text in unique_texts}
sorted_texts = sorted(text_counts.items(), key=lambda x: x[1], reverse=True)
counts = pd.DataFrame(data=sorted_texts, columns=['text', 'count'])
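    # counts_sorted is kept as a global so later cells can pass the same top-20 counts to the word cloud plot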
global counts_sorted
counts_sorted = counts.head(20).sort_values(by='count', ascending=True)
px.bar(counts_sorted, y='text', x='count', orientation='h', title='Phrase Frequencies').show()
print('Word frequency plot function defined.')
###Output
_____no_output_____
###Markdown
Looking at Hamlet
**Hamlet** can be found on [Project Gutenberg](http://www.gutenberg.org) under [EBook-No. 1524](http://www.gutenberg.org/ebooks/1524).
Run the following code to download **Hamlet**, pull out all the noun phrases, count them up, and plot them out.
###Code
word_frequency_plot('http://www.gutenberg.org/files/1524/1524-0.txt')
###Output
_____no_output_____
###Markdown
Another way to visualize
Bar plots are an excellent way to look at the relative differences in frequencies of the different words, but they're not the only way.
Another common way to show the frequencies of words in a text is through a Word Cloud, where the size of each word (or font size) is proportional to the frequency of that word. Similar to a bar plot, the larger words are more frequent in the text used to create them.
The below code defines a function that creates a word cloud using the same text as the URL that was given to the bar plot function.
###Code
from wordcloud import WordCloud
import matplotlib.pyplot as plt
def word_cloud_plot(df):
wc = WordCloud(background_color='white')
df_reindex = df.set_index('text')
freqs = df_reindex['count'].to_dict()
cloud = wc.generate_from_frequencies(frequencies=freqs)
plt.figure(figsize=(12,6))
fig = plt.imshow(cloud)
fig.axes.get_xaxis().set_visible(False)
fig.axes.get_yaxis().set_visible(False)
fig.axes.spines[['top', 'bottom', 'left', 'right']].set_visible(False)
plt.show()
print('Word cloud function defined.')
###Output
_____no_output_____
###Markdown
And then we can run the function to generate the word cloud
###Code
word_cloud_plot(counts_sorted)
###Output
_____no_output_____
###Markdown
Looking at your own text Now we can take a look at a text of your choice. From [Project Gutenberg](https://www.gutenberg.org), choose a text and click on a plain text file of the text. Copy and paste the link to replace the `'https://www.gutenberg.org/cache/epub/67098/pg67098.txt'` in the code cell below.
###Code
myText = 'https://www.gutenberg.org/cache/epub/67098/pg67098.txt' # Choose your own text to generate a phrase frequencies graph
word_frequency_plot(myText)
###Output
_____no_output_____
###Markdown
Well done! Now let's count how many *noun phrases* your text has, just like we did for Macbeth.
###Code
r2 = requests.get(myText)
r2.encoding = 'utf-8'
textfile = r2.text.split('***')[2]
textfile = textfile.replace("’","'").replace("“",'"').replace("”",'"')
try:
nltk.data.find('tokenizers/punkt')
except LookupError:
nltk.download('punkt')
try:
nltk.data.find('tokenizers/brown')
except LookupError:
nltk.download('brown')
myText_phrases = TextBlob(textfile).noun_phrases
print(myText_phrases)
len(myText_phrases)
word_cloud_plot(counts_sorted)
###Output
_____no_output_____ |
profiling/Denoise algorithm.ipynb | ###Markdown
Denoise algorithm This notebook defines the denoise algorithm (step C defined in Towsey 2013) and compares the speed of different implementations. This is a step in processing recordings of the natural environment that "better preserves the structural integrity of complex acoustic events (e.g. bird calls) but removes noise from background locations further removed from that event (Towsey 2013)."[Towsey, Michael W. (2013)](http://eprints.qut.edu.au/61399/) Noise removal from wave-forms and spectrograms derived from natural recordings of the environment. Required packages [numba](https://github.com/numba/numba) [scipy](https://github.com/scipy/scipy) [numpy](https://github.com/numpy/numpy) [matplotlib](https://github.com/matplotlib/matplotlib) [pyprind](https://github.com/rasbt/pyprind) Import statements
###Code
import numpy as np
from scipy.ndimage import generic_filter
from numba import jit, guvectorize, float64
import pyprind
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
python implementation using pure python
###Code
def denoise(a, b):
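    # For every (channel, frequency band, time step), take the 9 (frequency) x 3 (time)
    # neighborhood centred on that bin; if its mean is below the threshold of 10, write the
    # neighborhood minimum to the output, otherwise keep the centre value unchanged.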
for channel in range(2):
for f_band in range(4, a.shape[1] - 4):
for t_step in range(1, a.shape[2] - 1):
neighborhood = a[channel, f_band - 4:f_band + 5, t_step - 1:t_step + 2]
if neighborhood.mean() < 10:
b[channel, f_band, t_step] = neighborhood.min()
else:
b[channel, f_band, t_step] = neighborhood[4, 1]
return b
###Output
_____no_output_____
###Markdown
scipy implementation using scipy.ndimage.generic_filter—the custom callback function is just-in-time compiled by numba
###Code
@jit(nopython=True)
def filter_denoise(neighborhood):
if neighborhood.mean() < 10:
return neighborhood.min()
else:
return neighborhood[13]
def denoise_scipy(a, b):
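    # generic_filter slides a 9x3 window over each channel and calls filter_denoise on the
    # flattened neighborhood (index 13 is the centre element), padding the edges with zeros.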
for channel in range(2):
b[channel] = generic_filter(input=a[channel], function=filter_denoise,
size=(9, 3), mode='constant')
return b
###Output
_____no_output_____
###Markdown
numba implementation of a universal function via numba.guvectorize
###Code
# just removed return statement
def denoise_guvectorize(a, b):
for channel in range(2):
for f_band in range(4, a.shape[1] - 4):
for t_step in range(1, a.shape[2] - 1):
neighborhood = a[channel, f_band - 4:f_band + 5, t_step - 1:t_step + 2]
if neighborhood.mean() < 10:
b[channel, f_band, t_step] = neighborhood.min()
else:
b[channel, f_band, t_step] = neighborhood[4, 1]
###Output
_____no_output_____
###Markdown
serial version
###Code
denoise_numba = guvectorize('float64[:,:,:], float64[:,:,:]', '(c,f,t)->(c,f,t)',
nopython=True)(denoise_guvectorize)
###Output
_____no_output_____
###Markdown
parallel version
###Code
denoise_parallel = guvectorize('float64[:,:,:], float64[:,:,:]', '(c,f,t)->(c,f,t)',
nopython=True, target='parallel')(denoise_guvectorize)
###Output
_____no_output_____
###Markdown
check results
test the implementations on a randomly generated dataset and verify that all the results are the same
###Code
size = 100
data = np.random.rand(2, size, int(size*1.5))
data[:, int(size/4):int(size/2), int(size/4):int(size/2)] = 27
result_python = denoise(data, np.zeros_like(data))
result_scipy = denoise_scipy(data, np.zeros_like(data))
result_numba = denoise_numba(data, np.zeros_like(data))
result_parallel = denoise_parallel(data, np.zeros_like(data))
###Output
_____no_output_____
###Markdown
check if the different implementations produce the same result
###Code
assert np.allclose(result_python, result_scipy) and np.allclose(result_python, result_numba) and np.allclose(result_python, result_parallel)
###Output
_____no_output_____
###Markdown
plot results
###Code
fig, ax = plt.subplots(2,2)
fig.set_figheight(8)
fig.set_figwidth(12)
im1 = ax[0, 0].imshow(data[0], cmap='viridis', interpolation='none', vmax=1)
t1 = ax[0, 0].set_title('data')
im2 = ax[0, 1].imshow(result_python[0], cmap='viridis', interpolation='none', vmax=1)
t1 = ax[0, 1].set_title('pure python')
im3 = ax[1, 0].imshow(result_scipy[0], cmap='viridis', interpolation='none', vmax=1)
t1 = ax[1, 0].set_title('scipy')
im4 = ax[1, 1].imshow(result_numba[0], cmap='viridis', interpolation='none', vmax=1)
t1 = ax[1, 1].set_title('numba')
###Output
_____no_output_____
###Markdown
profile for different data sizes
time the different implementations on different dataset sizes
###Code
sizes = [30, 50, 100, 200, 400, 800, 1600]
progress_bar = pyprind.ProgBar(iterations=len(sizes), track_time=True, stream=1, monitor=True)
t_python = np.empty_like(sizes, dtype=np.float64)
t_scipy = np.empty_like(sizes, dtype=np.float64)
t_numba = np.empty_like(sizes, dtype=np.float64)
t_parallel = np.empty_like(sizes, dtype=np.float64)
for size in range(len(sizes)):
progress_bar.update(item_id=sizes[size])
data = np.random.rand(2, sizes[size], sizes[size])*0.75
t_1 = %timeit -oq denoise(data, np.zeros_like(data))
t_2 = %timeit -oq denoise_scipy(data, np.zeros_like(data))
t_3 = %timeit -oq denoise_numba(data, np.zeros_like(data))
t_4 = %timeit -oq denoise_parallel(data, np.zeros_like(data))
t_python[size] = t_1.best
t_scipy[size] = t_2.best
t_numba[size] = t_3.best
t_parallel[size] = t_4.best
###Output
0% 100%
[#######] | ETA: 00:00:00 | Item ID: 1600
Total time elapsed: 00:02:30
###Markdown
plot profile results
###Code
fig, ax = plt.subplots(figsize=(15,5))
p1 = ax.loglog(sizes, t_python, color='black', marker='.', label='python')
p2 = ax.loglog(sizes, t_scipy, color='blue', marker='.', label='scipy')
p3 = ax.loglog(sizes, t_numba, color='green', marker='.', label='numba')
p4 = ax.loglog(sizes, t_parallel, color='red', marker='.', label='parallel')
lx = ax.set_xlabel("data array size (2 x n x n elements)")
ly = ax.set_ylabel("time (seconds)")
t1 = ax.set_title("running times of the 'denoise' algorithm")
ax.grid(True, which='major')
l = ax.legend()
###Output
_____no_output_____ |
1-LaneLines/others/P1.ipynb | ###Markdown
Self-Driving Car Engineer Nanodegree Project: **Finding Lane Lines on the Road** ***In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below. Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/!/rubrics/322/view) for this project.---Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**--- **The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**--- Your output should look something like this (above) after detecting line segments using the helper functions below Your goal is to connect/average/extrapolate line segments to get output like this **Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.** Import Packages
###Code
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
###Output
_____no_output_____
###Markdown
Read in an Image
###Code
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
###Output
This image is: <class 'numpy.ndarray'> with dimensions: (540, 960, 3)
###Markdown
Ideas for Lane Detection Pipeline **Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**`cv2.inRange()` for color selection `cv2.fillPoly()` for regions selection `cv2.line()` to draw lines on an image given endpoints `cv2.addWeighted()` to coadd / overlay two images `cv2.cvtColor()` to grayscale or change color `cv2.imwrite()` to output images to file `cv2.bitwise_and()` to apply a mask to an image**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!** Helper Functions Below are some helper functions to help get you started. They should look familiar from the lesson!
###Code
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
`vertices` should be a numpy array of integer points.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=2):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img, lines
def remove_region(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
`vertices` should be a numpy array of integer points.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + γ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, γ)
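def draw_lane_lines(img, lines, color=[255, 0, 0], thickness=10):
    """Optional sketch (not part of the original helpers) of the extension the
    draw_lines() docstring suggests: split segments by slope sign, fit one line
    per side with np.polyfit, and extrapolate from the bottom of the image up to
    an assumed region top (0.6 * image height; tune as needed)."""
    left, right = [], []
    for line in lines:
        for x1, y1, x2, y2 in line:
            if x2 == x1:
                continue  # skip vertical segments to avoid division by zero
            slope = (y2 - y1) / (x2 - x1)
            (left if slope < 0 else right).append((x1, y1, x2, y2))
    y_bottom = img.shape[0]
    y_top = int(img.shape[0] * 0.6)
    for segments in (left, right):
        if not segments:
            continue
        xs = [x for x1, y1, x2, y2 in segments for x in (x1, x2)]
        ys = [y for x1, y1, x2, y2 in segments for y in (y1, y2)]
        slope, intercept = np.polyfit(xs, ys, 1)  # one straight line per side
        cv2.line(img, (int((y_bottom - intercept) / slope), y_bottom),
                 (int((y_top - intercept) / slope), y_top), color, thickness)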
###Output
_____no_output_____
###Markdown
Test Images
Build your pipeline to work on the images in the directory "test_images" **You should make sure your pipeline works well on these images before you try the videos.**
###Code
import os
os.listdir("test_images/")
###Output
_____no_output_____
###Markdown
Build a Lane Finding Pipeline Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
###Code
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images_output directory.
grey = grayscale(image)
# blur
kernel = 3
img_blur = gaussian_blur(grey, kernel)
low_thre = 1
high_thre = 25
edges = canny(img_blur, low_thre, high_thre)
w = image.shape[1]
h= image.shape[0]
#vertices = np.array([[(0.15*w, 0.4*h), (0.35*w, 0.4*h), (0.5*w, 0.3*h), (0.9*w, h)]], dtype=np.int32)
vertices = np.array([[(100,h),(490, 285), (900,h)]], dtype=np.int32) # 4 points to create the boundry
masked_img = region_of_interest(edges, vertices)
rho = 1
theta = np.pi/180
thre = 100
min_len = 100
max_gap = 50
line_img, lines = hough_lines(masked_img, rho, theta, thre, min_len, max_gap) # lines.shape= (4, 1, 4)
vert = np.array([[(200,520), (485,350), (750,520)]], dtype=np.int32) # 3 points to create the boundry
masked_img2 = remove_region(line_img, vert)
plt.imshow(masked_img2)
img = weighted_img(masked_img2, image)
#img = weighted_img(line_img, image)
#plt.imshow(img)
print(lines)
imag_ = cv2.circle(line_img, (200,520), radius=5, color=(0, 0, 255), thickness=-1)
imag_ = cv2.circle(line_img, (750,520), radius=5, color=(0, 255, 0), thickness=-1)
imag_ = cv2.circle(line_img, (485,350), radius=5, color=(0, 255, 0), thickness=-1)
(200,520), (485,350), (750,520)
plt.imshow(imag_)
###Output
_____no_output_____
###Markdown
Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
`solidWhiteRight.mp4`
`solidYellowLeft.mp4`
**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
**If you get an error that looks like this:**
```
NeedDownloadError: Need ffmpeg exe. You can download it by calling: imageio.plugins.ffmpeg.download()
```
**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
###Code
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
    # NOTE: The output you return should be a color image (3 channel) for processing video below
    # A minimal sketch that reuses the single-image pipeline from above; the parameter
    # values are the same assumptions as before and may need tuning for the videos.
    edges = canny(gaussian_blur(grayscale(image), 3), 1, 25)
    vertices = np.array([[(100, image.shape[0]), (490, 285), (900, image.shape[0])]], dtype=np.int32)
    line_img, lines = hough_lines(region_of_interest(edges, vertices), 1, np.pi/180, 100, 100, 50)
    result = weighted_img(line_img, image)
    return result
###Output
_____no_output_____
###Markdown
Let's try the one with the solid white lane on the right first ...
###Code
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
###Output
_____no_output_____
###Markdown
Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
###Code
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
###Output
_____no_output_____
###Markdown
Improve the draw_lines() function
**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".**
**Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**
Now for the one with the solid yellow lane on the left. This one's more tricky!
###Code
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
###Output
_____no_output_____
###Markdown
Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
###Code
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
###Output
_____no_output_____ |
content/ch-algorithms/quantum-phase-estimation.ipynb | ###Markdown
Quantum Phase Estimation Contents1. [Overview](overview) 1.1 [Intuition](intuition) 1.2 [Mathematical Basis](maths)2. [Example: T-gate](example_t_gate) 2.1 [Creating the Circuit](creating_the_circuit) 2.2 [Results](results) 3. [Getting More Precision](getting_more_precision) 3.1 [The Problem](the_problem) 3.2 [The Solution](the_solution) 4. [Experimenting on Real Devices](real_devices) 4.1 [With the Circuit from 2.1](circuit_2.1) 5. [Exercises](exercises) 6. [Looking Forward](looking_forward)7. [References](references)8. [Contributors](contributors) Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1. 1. Overview The general quantum circuit for phase estimation is shown below. The top register contains $t$ 'counting' qubits, and the bottom contains qubits in the state $|\psi\rangle$: 1.1 Intuition The quantum phase estimation algorithm uses phase kickback to write the phase of $U$ (in the Fourier basis) to the $t$ qubits in the counting register. We then use the inverse QFT to translate this from the Fourier basis into the computational basis, which we can measure.We remember (from the QFT chapter) that in the Fourier basis the topmost qubit completes one full rotation when counting between $0$ and $2^t$. To count to a number, $x$ between $0$ and $2^t$, we rotate this qubit by $\tfrac{x}{2^t}$ around the z-axis. For the next qubit we rotate by $\tfrac{2x}{2^t}$, then $\tfrac{4x}{2^t}$ for the third qubit.When we use a qubit to control the $U$-gate, the qubit will turn (due to kickback) proportionally to the phase $e^{2i\pi\theta}$. We can use successive $CU$-gates to repeat this rotation an appropriate number of times until we have encoded the phase theta as a number between $0$ and $2^t$ in the Fourier basis. Then we simply use $QFT^\dagger$ to convert this into the computational basis. 1.2 Mathematical Foundation As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:i. **Setup**: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form the counting register on which we will store the value $2^n\theta$: $$ \psi_0 = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$ ii. **Superposition**: Apply a $n$-bit Hadamard gate operation $H^{\otimes n}$ on the counting register: $$ \psi_1 = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$iii. **Controlled Unitary Operations**: We need to introduce the controlled unitary $C-U$ that applies the unitary operator $U$ on the target register only if its corresponding control bit is $|1\rangle$. 
Since $U$ is a unitary operatory with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means: $$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$Applying all the $n$ controlled operations $C − U^{2^j}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:\begin{aligned}\psi_{2} & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\& = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle\end{aligned}where $k$ denotes the integer representation of n-bit binary numbers. iv. **Inverse Fourier Transform**: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on [Quantum Fourier Transform and its Qiskit Implementation](https://qiskit.org/textbook/ch-algorithms/quantum-fourier-transform.html). Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as$$QFT\vert x \rangle = \frac{1}{2^\frac{n}{2}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) $$Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step 2 above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the auxiliary register. Doing so, we find$$\vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle$$ v. **Measurement**: The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring in the computational basis gives the phase in the auxiliary register with high probability: $$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1]. 2. Example: T-gate Let’s take a gate we know well, the $T$-gate, and use Quantum Phase Estimation to estimate its phase. You will remember that the $T$-gate adds a phase of $e^\frac{i\pi}{4}$ to the state $|1\rangle$:$$ T|1\rangle = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{4}\\ \end{bmatrix}\begin{bmatrix}0\\1\\ \end{bmatrix}= e^\frac{i\pi}{4}|1\rangle $$Since QPE will give us $\theta$ where:$$ T|1\rangle = e^{2i\pi\theta}|1\rangle $$We expect to find:$$\theta = \frac{1}{8}$$In this example we will use three qubits and obtain an _exact_ result (not an estimation!) 2.1 Creating the Circuit Let's first prepare our environment:
###Code
#initialization
import matplotlib.pyplot as plt
import numpy as np
import math
# importing Qiskit
from qiskit import IBMQ, Aer, transpile, assemble
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
# import basic plot tools
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
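###Markdown
As a quick numerical check (a small numpy sketch using the imports above), we can confirm the eigenvalue relation for the $T$-gate: applying the $T$ matrix to $|1\rangle$ and reading off the phase should give $\theta = 1/8$.
###Code
# Verify T|1> = exp(i*pi/4)|1>, i.e. theta = (pi/4)/(2*pi) = 1/8
T = np.array([[1, 0], [0, np.exp(1j*math.pi/4)]])
one = np.array([0, 1])
phase = np.angle(T @ one)[1]   # phase acquired by |1>
print(phase / (2*math.pi))     # 0.125 = 1/8
###Output
_____no_output_____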
###Markdown
Now, set up the quantum circuit. We will use four qubits -- qubits 0 to 2 as counting qubits, and qubit 3 as the eigenstate of the unitary operator ($T$). We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate:
###Code
qpe = QuantumCircuit(4, 3)
qpe.x(3)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next, we apply Hadamard gates to the counting qubits:
###Code
for qubit in range(3):
qpe.h(qubit)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next we perform the controlled unitary operations. **Remember:** Qiskit orders its qubits the opposite way round to the image above.
###Code
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe.cp(math.pi/4, counting_qubit, 3); # This is C-U
repetitions *= 2
qpe.draw()
###Output
_____no_output_____
###Markdown
We apply the inverse quantum Fourier transformation to convert the state of the counting register. Here we provide the code for $QFT^\dagger$:
###Code
def qft_dagger(qc, n):
"""n-qubit QFTdagger the first n qubits in circ"""
# Don't forget the Swaps!
for qubit in range(n//2):
qc.swap(qubit, n-qubit-1)
for j in range(n):
for m in range(j):
qc.cp(-math.pi/float(2**(j-m)), m, j)
qc.h(j)
###Output
_____no_output_____
###Markdown
We then apply the inverse QFT and measure the counting register:
###Code
qpe.barrier()
# Apply inverse QFT
qft_dagger(qpe, 3)
# Measure
qpe.barrier()
for n in range(3):
qpe.measure(n,n)
qpe.draw()
###Output
_____no_output_____
###Markdown
2.2 Results
###Code
qasm_sim = Aer.get_backend('qasm_simulator')
shots = 2048
t_qpe = transpile(qpe, qasm_sim)
qobj = assemble(t_qpe, shots=shots)
results = qasm_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
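###Markdown
We can also read $\theta$ straight from the counts (a small sketch, assuming the `answer` dictionary from the cell above): take the most frequent bitstring, interpret it as an integer, and divide by $2^n$.
###Code
# Extract theta from the most frequent measurement outcome
most_likely = max(answer, key=answer.get)     # e.g. '001'
theta_estimate = int(most_likely, 2) / 2**3
print(most_likely, theta_estimate)            # '001' 0.125
###Output
_____no_output_____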
###Markdown
We see we get one result (`001`) with certainty, which translates to the decimal: `1`. We now need to divide our result (`1`) by $2^n$ to get $\theta$:$$ \theta = \frac{1}{2^3} = \frac{1}{8} $$This is exactly the result we expected! 3. Example: Getting More Precision 3.1 The Problem Instead of a $T$-gate, let’s use a gate with $\theta = \frac{1}{3}$. We set up our circuit as with the last example:
###Code
# Create and set up circuit
qpe2 = QuantumCircuit(4, 3)
# Apply H-Gates to counting qubits:
for qubit in range(3):
qpe2.h(qubit)
# Prepare our eigenstate |psi>:
qpe2.x(3)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe2.cp(angle, counting_qubit, 3);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe2, 3)
# Measure of course!
for n in range(3):
qpe2.measure(n,n)
qpe2.draw()
# Let's see the results!
qasm_sim = Aer.get_backend('qasm_simulator')
shots = 4096
t_qpe2 = transpile(qpe2, qasm_sim)
qobj = assemble(t_qpe2, shots=shots)
results = qasm_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We are expecting the result $\theta = 0.3333\dots$, and we see our most likely results are `010(bin) = 2(dec)` and `011(bin) = 3(dec)`. These two results would tell us that $\theta = 0.25$ (off by 25%) and $\theta = 0.375$ (off by 13%) respectively. The true value of $\theta$ lies between the values we can get from our counting bits, and this gives us uncertainty and imprecision. 3.2 The Solution To get more precision we simply add more counting qubits. We are going to add two more counting qubits:
###Code
# Create and set up circuit
qpe3 = QuantumCircuit(6, 5)
# Apply H-Gates to counting qubits:
for qubit in range(5):
qpe3.h(qubit)
# Prepare our eigenstate |psi>:
qpe3.x(5)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(5):
for i in range(repetitions):
qpe3.cp(angle, counting_qubit, 5);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe3, 5)
# Measure of course!
qpe3.barrier()
for n in range(5):
qpe3.measure(n,n)
qpe3.draw()
# Let's see the results!
qasm_sim = Aer.get_backend('qasm_simulator')
shots = 4096
t_qpe3 = transpile(qpe3, qasm_sim)
qobj = assemble(t_qpe3, shots=shots)
results = qasm_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
The two most likely measurements are now `01011` (decimal 11) and `01010` (decimal 10). Measuring these results would tell us $\theta$ is:$$\theta = \frac{11}{2^5} = 0.344,\;\text{ or }\;\; \theta = \frac{10}{2^5} = 0.313$$These two results differ from $\frac{1}{3}$ by 3% and 6% respectively, which is much better precision! 4. Experiment with Real Devices 4.1 Circuit from 2.1 We can run the circuit from section 2.1 on a real device. Let's remind ourselves of the circuit:
###Code
qpe.draw()
IBMQ.load_account()
from qiskit.tools.monitor import job_monitor
provider = IBMQ.get_provider(hub='ibm-q')
santiago = provider.get_backend('ibmq_santiago')
# Run with 2048 shots
shots = 2048
t_qpe = transpile(qpe, santiago, optimization_level=3)
qobj = assemble(t_qpe, shots=shots)
job = santiago.run(qobj)
job_monitor(job)
# get the results from the computation
results = job.result()
answer = results.get_counts(qpe)
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We can hopefully see that the most likely result is `001` which is the result we would expect from the simulator. Unlike the simulator, there is a probability of measuring something other than `001`, this is due to noise and gate errors in the quantum computer. 5. Exercises 1. Try the experiments above with different gates ($\text{CNOT}$, Controlled-$S$, Controlled-$T^\dagger$), what results do you expect? What results do you get?2. Try the experiment with a Controlled-$Y$-gate, do you get the result you expected? (Hint: Remember to make sure $|\psi\rangle$ is an eigenstate of $Y$!) 6. Looking Forward The quantum phase estimation algorithm may seem pointless, since we have to know $\theta$ to perform the controlled-$U$ operations on our quantum computer. We will see in later chapters that it is possible to create circuits for which we don’t know $\theta$, and for which learning theta can tell us something very useful (most famously how to factor a number!) 7. References [1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA. 8. Contributors 03/20/2020 — Hwajung Kang (@HwajungKang) — Fixed inconsistencies with qubit ordering
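For exercise 2, one way to prepare an eigenstate of $Y$ (a small sketch with a hypothetical one-qubit helper circuit): $H$ followed by $S^\dagger$ maps $|0\rangle$ to $(|0\rangle - i|1\rangle)/\sqrt{2}$, the $-1$ eigenstate of $Y$, so we would expect $\theta = \frac{1}{2}$.
###Code
# Check that (|0> - i|1>)/sqrt(2) is a -1 eigenstate of Y, so theta = 1/2
Y = np.array([[0, -1j], [1j, 0]])
minus_i = np.array([1, -1j]) / np.sqrt(2)
print(np.allclose(Y @ minus_i, -minus_i))   # True
# Hypothetical helper circuit preparing this eigenstate from |0>
psi_prep = QuantumCircuit(1)
psi_prep.h(0)
psi_prep.sdg(0)
psi_prep.draw()
###Output
_____no_output_____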
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Quantum Phase Estimation Contents1. [Overview](overview) 1.1 [Intuition](intuition) 1.2 [Mathematical Basis](maths)2. [Example: T-gate](example_t_gate) 2.1 [Creating the Circuit](creating_the_circuit) 2.2 [Results](results) 3. [Getting More Precision](getting_more_precision) 3.1 [The Problem](the_problem) 3.2 [The Solution](the_solution) 4. [Experimenting on Real Devices](real_devices) 4.1 [With the Circuit from 2.1](circuit_2.1) 5. [Exercises](exercises) 6. [Looking Forward](looking_forward)7. [References](references)8. [Contributors](contributors) Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1. 1. Overview The general quantum circuit for phase estimation is shown below. The top register contains $t$ 'counting' qubits, and the bottom contains qubits in the state $|\psi\rangle$: 1.1 Intuition The quantum phase estimation algorithm uses phase kickback to write the phase of $U$ (in the Fourier basis) to the $t$ qubits in the counting register. We then use the inverse QFT to translate this from the Fourier basis into the computational basis, which we can measure.We remember (from the QFT chapter) that in the Fourier basis the topmost qubit completes one full rotation when counting between $0$ and $2^t$. To count to a number, $x$ between $0$ and $2^t$, we rotate this qubit by $\tfrac{x}{2^t}$ around the z-axis. For the next qubit we rotate by $\tfrac{2x}{2^t}$, then $\tfrac{4x}{2^t}$ for the third qubit.When we use a qubit to control the $U$-gate, the qubit will turn (due to kickback) proportionally to the phase $e^{2i\pi\theta}$. We can use successive $CU$-gates to repeat this rotation an appropriate number of times until we have encoded the phase theta as a number between $0$ and $2^t$ in the Fourier basis. Then we simply use $QFT^\dagger$ to convert this into the computational basis. 1.2 Mathematical Basis As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:0. **Setup**: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form the counting register on which we will store the value $2^n\theta$: $$ \psi_0 = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$ 1. **Superposition**: Apply a $n$-bit Hadamard gate operation $H^{\otimes n}$ on the counting register: $$ \psi_1 = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$2. **Controlled Unitary Operations**: We need to introduce the controlled unitary $C-U$ that applies the unitary operator $U$ on the target register only if its corresponding control bit is $|1\rangle$. 
Since $U$ is a unitary operatory with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means: $$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$Applying all the $n$ controlled operations $C − U^{2^j}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:\begin{aligned}\psi_{2} & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\& = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle\end{aligned}where $k$ denotes the integer representation of n-bit binary numbers. 3. **Inverse Fourier Transform**: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on [Quantum Fourier Transform and its Qiskit Implementation](qft.ipynb). Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as$$QFT\vert x \rangle = \frac{1}{2^\frac{n}{2}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) $$Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step 2 above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the ancilla register. Doing so, we find$$\vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle$$ 4. **Measurement**: The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring in the computational basis gives the phase in the ancilla register with high probability: $$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1]. 2. Example: T-gate Let’s take a gate we know well, the $T$-gate, and use Quantum Phase Estimation to estimate its phase. You will remember that the $T$-gate adds a phase of $e^\frac{i\pi}{4}$ to the state $|1\rangle$:$$ T|1\rangle = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{4}\\ \end{bmatrix}\begin{bmatrix}0\\1\\ \end{bmatrix}= e^\frac{i\pi}{4}|1\rangle $$Since QPE will give us $\theta$ where:$$ T|1\rangle = e^{2i\pi\theta}|1\rangle $$We expect to find:$$\theta = \frac{1}{8}$$In this example we will use three qubits and obtain an _exact_ result (not an estimation!) 2.1 Creating the Circuit Let's first prepare our environment:
###Code
#initialization
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'svg' # Makes the images look nice
import numpy as np
import math
# importing Qiskit
from qiskit import IBMQ, Aer
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
# import basic plot tools
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
###Markdown
Now, set up the quantum circuit. We will use four qubits -- qubits 0 to 2 as counting qubits, and qubit 3 as the eigenstate of the unitary operator ($T$). We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate:
###Code
qpe = QuantumCircuit(4, 3)
qpe.x(3)
qpe.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Next, we apply Hadamard gates to the counting qubits:
###Code
for qubit in range(3):
qpe.h(qubit)
qpe.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Next we perform the controlled unitary operations. **Remember:** Qiskit orders its qubits the opposite way round to the image above.
###Code
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe.cu1(math.pi/4, counting_qubit, 3); # This is C-U
repetitions *= 2
qpe.draw(output='mpl')
###Output
_____no_output_____
###Markdown
We apply the inverse quantum Fourier transformation to convert the state of the counting register. Here we provide the code for $QFT^\dagger$:
###Code
def qft_dagger(circ, n):
"""n-qubit QFTdagger the first n qubits in circ"""
# Don't forget the Swaps!
for qubit in range(n//2):
circ.swap(qubit, n-qubit-1)
for j in range(n):
for m in range(j):
circ.cu1(-math.pi/float(2**(j-m)), m, j)
circ.h(j)
###Output
_____no_output_____
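###Markdown
As a quick sanity check of `qft_dagger` (a sketch using the imports above): if we encode the number $1$ in the Fourier basis on three qubits, i.e. give qubit $j$ the relative phase $e^{2\pi i\cdot\frac{1}{8}\cdot 2^j}$, then applying `qft_dagger` should leave the register exactly in the state $|001\rangle$.
###Code
# Encode '1' in the Fourier basis, undo it with qft_dagger, and inspect the statevector
check = QuantumCircuit(3)
for j in range(3):
    check.h(j)
    check.u1(2*math.pi*(1/8)*(2**j), j)   # phase 2*pi*theta*2^j with theta = 1/8
qft_dagger(check, 3)
sv_backend = Aer.get_backend('statevector_simulator')
state = execute(check, sv_backend).result().get_statevector()
print(np.round(state, 3))   # all amplitude on index 1, i.e. |001>
###Output
_____no_output_____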
###Markdown
We then apply the inverse QFT and measure the counting register:
###Code
qpe.barrier()
# Apply inverse QFT
qft_dagger(qpe, 3)
# Measure
qpe.barrier()
for n in range(3):
qpe.measure(n,n)
qpe.draw(output="mpl")
###Output
_____no_output_____
###Markdown
2.2 Results
###Code
backend = Aer.get_backend('qasm_simulator')
shots = 2048
results = execute(qpe, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We see we get one result (`001`) with certainty, which translates to the decimal: `1`. We now need to divide our result (`1`) by $2^n$ to get $\theta$:$$ \theta = \frac{1}{2^3} = \frac{1}{8} $$This is exactly the result we expected! 3. Example: Getting More Precision 3.1 The Problem Instead of a $T$-gate, let’s use a gate with $\theta = \frac{1}{3}$. We set up our circuit as with the last example:
###Code
# Create and set up circuit
qpe2 = QuantumCircuit(4, 3)
# Apply H-Gates to counting qubits:
for qubit in range(3):
qpe2.h(qubit)
# Prepare our eigenstate |psi>:
qpe2.x(3)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe2.cu1(angle, counting_qubit, 3);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe2, 3)
# Measure of course!
for n in range(3):
qpe2.measure(n,n)
qpe2.draw(output='mpl')
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe2, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We are expecting the result $\theta = 0.3333\dots$, and we see our most likely results are `010(bin) = 2(dec)` and `011(bin) = 3(dec)`. These two results would tell us that $\theta = 0.25$ (off by 25%) and $\theta = 0.375$ (off by 13%) respectively. The true value of $\theta$ lies between the values we can get from our counting bits, and this gives us uncertainty and imprecision. 3.2 The Solution To get more precision we simply add more counting qubits. We are going to add two more counting qubits:
###Code
# Create and set up circuit
qpe3 = QuantumCircuit(6, 5)
# Apply H-Gates to counting qubits:
for qubit in range(5):
qpe3.h(qubit)
# Prepare our eigenstate |psi>:
qpe3.x(5)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(5):
for i in range(repetitions):
qpe3.cu1(angle, counting_qubit, 5);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe3, 5)
# Measure of course!
for n in range(5):
qpe3.measure(n,n)
qpe3.draw(output='mpl')
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe3, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
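###Markdown
We can turn these counts into explicit estimates of $\theta$ (a small sketch, assuming the `answer` dictionary from the cell above): convert the two most frequent bitstrings to integers and divide by $2^5$.
###Code
# Theta estimates from the two most frequent 5-bit outcomes
for outcome in sorted(answer, key=answer.get, reverse=True)[:2]:
    estimate = int(outcome, 2) / 2**5
    print(outcome, estimate, abs(estimate - 1/3))
###Output
_____no_output_____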
###Markdown
The two most likely measurements are now `01011` (decimal 11) and `01010` (decimal 10). Measuring these results would tell us $\theta$ is:$$\theta = \frac{11}{2^5} = 0.344,\;\text{ or }\;\; \theta = \frac{10}{2^5} = 0.313$$These two results differ from $\frac{1}{3}$ by 3% and 6% respectively, which is much better precision! 4. Experiment with Real Devices 4.1 Circuit from 2.1 We can run the circuit from section 2.1 on a real device. Let's remind ourselves of the circuit:
###Code
qpe.draw(output='mpl')
# Load our saved IBMQ account and get the least busy real backend with at least 4 qubits
IBMQ.load_account()
from qiskit.providers.ibmq import least_busy
from qiskit.tools.monitor import job_monitor
provider = IBMQ.get_provider(hub='ibm-q-internal')
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 4 and not x.configuration().simulator and x.status().operational==True))
print("least busy backend: ", backend)
# Run with 2048 shots
shots = 2048
job = execute(qpe, backend=backend, shots=shots, optimization_level=3)
job_monitor(job)
# get the results from the computation
results = job.result()
answer = results.get_counts(qpe)
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We can hopefully see that the most likely result is `001` which is the result we would expect from the simulator. Unlike the simulator, there is a probability of measuring something other than `001`, this is due to noise and gate errors in the quantum computer. 5. Exercises 1. Try the experiments above with different gates ($\text{CNOT}$, $S$, $T^\dagger$), what results do you expect? What results do you get?2. Try the experiment with a $Y$-gate, do you get the correct result? (Hint: Remember to make sure $|\psi\rangle$ is an eigenstate of $Y$!) 6. Looking Forward The quantum phase estimation algorithm may seem pointless, since we have to know $\theta$ to perform the controlled-$U$ operations on our quantum computer. We will see in later chapters that it is possible to create circuits for which we don’t know $\theta$, and for which learning theta can tell us something very useful (most famously how to factor a number!) 7. References [1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA. 8. Contributors 03/20/2020 — Hwajung Kang (@HwajungKang) — Fixed inconsistencies with qubit ordering
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Quantum Phase Estimation1. [Overview](overview) 1.1 [Intuition](intuition) 1.2 [Mathematical Basis](maths)2. [Example: T-gate](example_t_gate) 2.1 [Creating the Circuit](creating_the_circuit) 2.2 [Results](results) 3. [Getting More Precision](getting_more_precision) 3.1 [The Problem](the_problem) 3.2 [The Solution](the_solution) 4. [Experimenting on Real Devices](real_devices) 4.1 [With the Circuit from 2.1](circuit_2.1) 4.2 [Phase Estimation of a CNOT](qpe_cnot) 5. [Exercises](exercises) 6. [Looking Forward](looking_forward)7. [References](references) Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1. 1. Overview The general quantum circuit for phase estimation is shown below. The top register contains $t$ 'counting' qubits, and the bottom contains qubits in the state $|\psi\rangle$: 1.1 Intuition The quantum phase estimation algorithm uses phase kickback to write the phase of $U$ (in the Fourier basis) to the $t$ qubits in the counting register. We then use the inverse QFT to translate this from the Fourier basis into the computational basis, which we can measure.We remember (from the QFT chapter) that in the Fourier basis the topmost qubit completes one full rotation when counting between $0$ and $2^t$. To count to a number, $x$ between $0$ and $2^t$, we rotate this qubit by $\tfrac{x}{2^t}$ around the z-axis. For the next qubit we rotate by $\tfrac{2x}{2^t}$, then $\tfrac{4x}{2^t}$ for the third qubit.When we use a qubit to control the $U$-gate, the qubit will turn (due to kickback) proportionally to the phase $e^{2i\pi\theta}$. We can use successive $CU$-gates to repeat this rotation an appropriate number of times until we have encoded the phase theta as a number between $0$ and $2^t$ in the Fourier basis. Then we simply use $QFT^\dagger$ to convert this into the computational basis. 1.2 Mathematical Basis As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:0. **Setup**: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form the counting register on which we will store the value $2^n\theta$: $$ \psi_0 = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$ 1. **Superposition**: Apply a $n$-bit Hadamard gate operation $H^{\otimes n}$ on the counting register: $$ \psi_1 = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$2. **Controlled Unitary Operations**: We need to introduce the controlled unitary $C-U$ that applies the unitary operator $U$ on the target register only if its corresponding control bit is $|1\rangle$. 
Since $U$ is a unitary operatory with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means: $$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$Applying all the $n$ controlled operations $C − U^{2^j}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:\begin{aligned}\psi_{2} & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\& = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle\end{aligned}where $k$ denotes the integer representation of n-bit binary numbers. 3. **Inverse Fourier Transform**: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on [Quantum Fourier Transform and its Qiskit Implementation](qft.ipynb). Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as$$QFT\vert x \rangle = \frac{1}{2^\frac{n}{2}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) $$Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step 2 above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the ancilla register. Doing so, we find$$\vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle$$ 4. **Measurement**: The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring in the computational basis gives the phase in the ancilla register with high probability: $$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1]. 2. Example: T-gate Let’s take a gate we know well, the $T$-gate, and use Quantum Phase Estimation to estimate its phase. You will remember that the $T$-gate adds a phase of $e^\frac{i\pi}{4}$ to the state $|1\rangle$:$$ T|1\rangle = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{4}\\ \end{bmatrix}\begin{bmatrix}0\\1\\ \end{bmatrix}= e^\frac{i\pi}{4}|1\rangle $$Since QPE will give us $\theta$ where:$$ T|1\rangle = e^{2i\pi\theta}|1\rangle $$We expect to find:$$\theta = \frac{1}{8}$$In this example we will use three qubits and obtain an _exact_ result (not an estimation!) 2.1 Creating the Circuit Let's first prepare our environment:
###Code
#initialization
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import math
# importing Qiskit
from qiskit import IBMQ, Aer
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
# import basic plot tools
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
###Markdown
Now, set up the quantum circuit. We will use four qubits -- qubits 0 to 2 as counting qubits, and qubit 3 as the eigenstate of the unitary operator ($T$). We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate:
###Code
qpe = QuantumCircuit(4, 3)
qpe.x(3)
qpe.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Next, we apply Hadamard gates to the counting qubits:
###Code
for qubit in range(3):
qpe.h(qubit)
qpe.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Next we perform the controlled unitary operations:
###Code
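# In this version, counting qubit 0 is the most significant bit: it receives 2^2 = 4
# applications of U, then qubit 1 gets 2 and qubit 2 gets 1; the inverse QFT and the
# reversed measurement order below are written to match this convention.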
repetitions = 2**2
for counting_qubit in range(3):
for i in range(repetitions):
qpe.cu1(math.pi/4, counting_qubit, 3); # This is C-U
repetitions //= 2
qpe.draw(output='mpl')
###Output
_____no_output_____
###Markdown
We apply the inverse quantum Fourier transformation to convert the state of the counting register. Here we provide the code for $QFT^\dagger$:
###Code
def qft_dagger(circ, n):
"""n-qubit QFTdagger the first n qubits in circ"""
# Don't forget the Swaps!
for qubit in range(int(n/2)):
circ.swap(qubit, n-qubit-1)
for j in range(n,0,-1):
k = n - j
for m in range(k):
circ.cu1(-math.pi/float(2**(k-m)), n-m-1, n-k-1)
circ.h(n-k-1)
###Output
_____no_output_____
###Markdown
We then measure the counting register. At the moment our qubits are in reverse order (a common problem in quantum computing!) We measure to the classical bits in reverse order to fix this:
###Code
# Apply inverse QFT
qft_dagger(qpe, 3)
# Measure in reverse order so that counting qubit 0 (the most significant bit in this version) ends up as the leftmost classical bit
qpe.measure(0,2)
qpe.measure(1,1)
qpe.measure(2,0)
qpe.draw(output="mpl")
###Output
_____no_output_____
###Markdown
2.2 Results
###Code
backend = Aer.get_backend('qasm_simulator')
shots = 2048
results = execute(qpe, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We see we get one result (`001`) with certainty, which translates to the decimal: `1`. We now need to divide our result (`1`) by $2^n$ to get $\theta$:$$ \theta = \frac{1}{2^3} = \frac{1}{8} $$This is exactly the result we expected! 3. Example: Getting More Precision 3.1 The Problem Instead of a $T$-gate, let’s use a gate with $\theta = \frac{1}{3}$. We set up our circuit as with the last example:
###Code
# Create and set up circuit
qpe2 = QuantumCircuit(4, 3)
# Apply H-Gates to counting qubits:
for qubit in range(3):
qpe2.h(qubit)
# Prepare our eigenstate |psi>:
qpe2.x(3)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 2**2
for counting_qubit in range(3):
for i in range(repetitions):
qpe2.cu1(angle, counting_qubit, 3);
repetitions //= 2
# Do the inverse QFT:
qft_dagger(qpe2, 3)
# Measure of course!
qpe2.measure(0,2)
qpe2.measure(1,1)
qpe2.measure(2,0)
qpe2.draw(output='mpl')
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe2, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We are expecting the result $\theta = 0.3333\dots$, and we see our most likely results are `010 = 2` and `011 = 3`. These two results would tell us that $\theta = 0.25$ (off by 25%) and $\theta = 0.375$ (off by 13%) respectively. The true value of $\theta$ lies between the values we can get from our counting bits, and this gives us uncertainty and imprecision. 3.2 The Solution To get more precision we simply add more counting qubits. We are going to add two more counting qubits:
###Code
# Create and set up circuit
qpe3 = QuantumCircuit(6, 5)
# Apply H-Gates to counting qubits:
for qubit in range(5):
qpe3.h(qubit)
# Prepare our eigenstate |psi>:
qpe3.x(5)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 2**4
for counting_qubit in range(5):
for i in range(repetitions):
qpe3.cu1(angle, counting_qubit, 5);
repetitions //= 2
# Do the inverse QFT:
qft_dagger(qpe3, 5)
# Measure of course!
qpe3.measure(0,4)
qpe3.measure(1,3)
qpe3.measure(2,2)
qpe3.measure(3,1)
qpe3.measure(4,0)
qpe3.draw(output='mpl')
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe3, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
The two most likely measurements are now `01011` (decimal 11) and `01010` (decimal 10). Measuring these results would tell us $\theta$ is:$$\theta = \frac{11}{2^5} = 0.344,\;\text{ or }\;\; \theta = \frac{10}{2^5} = 0.313$$These two results differ from $\frac{1}{3}$ by 3% and 6% respectively, which is much better precision! 4. Experiment with Real Devices 4.1 Circuit from 2.1 We can run the circuit from section 2.1 on a real device. Let's remind ourselves of the circuit:
###Code
qpe.draw(output='mpl')
# Load our saved IBMQ account and get the least busy real backend with at least 4 qubits
IBMQ.load_account()
from qiskit.providers.ibmq import least_busy
from qiskit.tools.monitor import job_monitor
provider = IBMQ.get_provider(hub='ibm-q')
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 4 and not x.configuration().simulator and x.status().operational==True))
print("least busy backend: ", backend)
# Run with 4096 shots
shots = 4096
job_exp = execute(qpe, backend=backend, shots=shots)
job_monitor(job_exp)
# get the results from the computation
results = job_exp.result()
answer = results.get_counts(qpe)
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We can hopefully see that the most likely result is `001` which is the result we would expect from the simulator. More likely, the results above are completely random. This is due to the many difficulties in building and running a real quantum computer. Some of the errors will occur from creating the controlled-$T$-gates, so let's try using a CNOT for our controlled-$U$ instead: 4.2 Phase Estimation of a CNOT
###Code
# Create and set up circuit
qpe4 = QuantumCircuit(4, 3)
# Apply H-Gates to counting qubits:
for qubit in range(3):
qpe4.h(qubit)
# Prepare our eigenstate |psi>:
qpe4.x(3)
qpe4.h(3)
# Do the controlled-U operations:
angle = math.pi
repetitions = 2**2
for counting_qubit in range(3):
for i in range(repetitions):
qpe4.cx(counting_qubit, 3);
repetitions //= 2
# Do the inverse QFT:
qft_dagger(qpe4, 3)
# Measure of course!
qpe4.measure(0,2)
qpe4.measure(1,1)
qpe4.measure(2,0)
qpe4.draw(output='mpl')
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 4 and not x.configuration().simulator and x.status().operational==True))
print("least busy backend: ", backend)
# Run with 2048 shots
shots = 2048
job_exp = execute(qpe4, backend=backend, shots=shots)
job_monitor(job_exp)
# get the results from the computation
results = job_exp.result()
answer = results.get_counts(qpe4)
plot_histogram(answer)
###Output
_____no_output_____
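###Markdown
As a check on the choice of eigenstate (a small numpy sketch): $|-\rangle$ is an eigenstate of $X$ with eigenvalue $-1 = e^{2\pi i\cdot\frac{1}{2}}$, so $\theta = \frac{1}{2}$ and the ideal 3-bit outcome is `100` (decimal 4, and $4/8 = 0.5$).
###Code
# |-> = (|0> - |1>)/sqrt(2) satisfies X|-> = -|->, so theta = 1/2
X = np.array([[0, 1], [1, 0]])
minus = np.array([1, -1]) / np.sqrt(2)
print(np.allclose(X @ minus, -minus))   # True
###Output
_____no_output_____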
###Markdown
You can _hopefully_ see we are most likely to measure `100`, the expected result of running QPE on a CNOT-gate. The results are still erratic but they are useful to illustrate the capabilities of current quantum computers. 5. Exercises 1. Try the experiments above with different gates ($S$, $T^\dagger$), what results do you expect? What results do you get?2. Try the experiment with a $Y$-gate, do you get the correct result? (Remember to make sure $|\psi\rangle$ is an eigenstate of $Y$!) 6. Looking Forward The quantum phase estimation algorithm may seem pointless, since we have to know $\theta$ to perform the controlled-$U$ operations on our quantum computer. We will see in later chapters that it is possible to create circuits for which we don’t know $\theta$, and for which learning theta can tell us something very useful (most famously how to factor a number!) 7. References [1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA.
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Quantum Phase Estimation Contents1. [Overview](overview) 1.1 [Intuition](intuition) 1.2 [Mathematical Basis](maths)2. [Example: T-gate](example_t_gate) 2.1 [Creating the Circuit](creating_the_circuit) 2.2 [Results](results) 3. [Getting More Precision](getting_more_precision) 3.1 [The Problem](the_problem) 3.2 [The Solution](the_solution) 4. [Experimenting on Real Devices](real_devices) 4.1 [With the Circuit from 2.1](circuit_2.1) 5. [Exercises](exercises) 6. [Looking Forward](looking_forward)7. [References](references)8. [Contributors](contributors) Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1. 1. Overview The general quantum circuit for phase estimation is shown below. The top register contains $t$ 'counting' qubits, and the bottom contains qubits in the state $|\psi\rangle$: 1.1 Intuition The quantum phase estimation algorithm uses phase kickback to write the phase of $U$ (in the Fourier basis) to the $t$ qubits in the counting register. We then use the inverse QFT to translate this from the Fourier basis into the computational basis, which we can measure.We remember (from the QFT chapter) that in the Fourier basis the topmost qubit completes one full rotation when counting between $0$ and $2^t$. To count to a number, $x$ between $0$ and $2^t$, we rotate this qubit by $\tfrac{x}{2^t}$ around the z-axis. For the next qubit we rotate by $\tfrac{2x}{2^t}$, then $\tfrac{4x}{2^t}$ for the third qubit.When we use a qubit to control the $U$-gate, the qubit will turn (due to kickback) proportionally to the phase $e^{2i\pi\theta}$. We can use successive $CU$-gates to repeat this rotation an appropriate number of times until we have encoded the phase theta as a number between $0$ and $2^t$ in the Fourier basis. Then we simply use $QFT^\dagger$ to convert this into the computational basis. 1.2 Mathematical Basis As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:0. **Setup**: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form the counting register on which we will store the value $2^n\theta$: $$ \psi_0 = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$ 1. **Superposition**: Apply a $n$-bit Hadamard gate operation $H^{\otimes n}$ on the counting register: $$ \psi_1 = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$2. **Controlled Unitary Operations**: We need to introduce the controlled unitary $C-U$ that applies the unitary operator $U$ on the target register only if its corresponding control bit is $|1\rangle$. 
Since $U$ is a unitary operatory with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means: $$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$Applying all the $n$ controlled operations $C − U^{2^j}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:\begin{aligned}\psi_{2} & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\& = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle\end{aligned}where $k$ denotes the integer representation of n-bit binary numbers. 3. **Inverse Fourier Transform**: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on [Quantum Fourier Transform and its Qiskit Implementation](qft.ipynb). Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as$$QFT\vert x \rangle = \frac{1}{2^\frac{n}{2}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) $$Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step 2 above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the ancilla register. Doing so, we find$$\vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle$$ 4. **Measurement**: The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring in the computational basis gives the phase in the ancilla register with high probability: $$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1]. 2. Example: T-gate Let’s take a gate we know well, the $T$-gate, and use Quantum Phase Estimation to estimate its phase. You will remember that the $T$-gate adds a phase of $e^\frac{i\pi}{4}$ to the state $|1\rangle$:$$ T|1\rangle = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{4}\\ \end{bmatrix}\begin{bmatrix}0\\1\\ \end{bmatrix}= e^\frac{i\pi}{4}|1\rangle $$Since QPE will give us $\theta$ where:$$ T|1\rangle = e^{2i\pi\theta}|1\rangle $$We expect to find:$$\theta = \frac{1}{8}$$In this example we will use three qubits and obtain an _exact_ result (not an estimation!) 2.1 Creating the Circuit Let's first prepare our environment:
###Code
#initialization
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'svg' # Makes the images look nice
import numpy as np
import math
# importing Qiskit
from qiskit import IBMQ, Aer
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
# import basic plot tools
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
###Markdown
Now, set up the quantum circuit. We will use four qubits -- qubits 0 to 2 as counting qubits, and qubit 3 as the eigenstate of the unitary operator ($T$). We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate:
###Code
qpe = QuantumCircuit(4, 3)
qpe.x(3)
qpe.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Next, we apply Hadamard gates to the counting qubits:
###Code
for qubit in range(3):
qpe.h(qubit)
qpe.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Next we perform the controlled unitary operations. **Remember:** Qiskit orders its qubits the opposite way round to the image above.
###Code
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe.cu1(math.pi/4, counting_qubit, 3); # This is C-U
repetitions *= 2
qpe.draw(output='mpl')
###Output
_____no_output_____
###Markdown
We apply the inverse quantum Fourier transformation to convert the state of the counting register. Here we provide the code for $QFT^\dagger$:
###Code
def qft_dagger(circ, n):
"""n-qubit QFTdagger the first n qubits in circ"""
# Don't forget the Swaps!
for qubit in range(n//2):
circ.swap(qubit, n-qubit-1)
for j in range(n):
for m in range(j):
circ.cu1(-math.pi/float(2**(j-m)), m, j)
circ.h(j)
###Output
_____no_output_____
###Markdown
We then apply the inverse QFT and measure the counting register:
###Code
qpe.barrier()
# Apply inverse QFT
qft_dagger(qpe, 3)
# Measure
qpe.barrier()
for n in range(3):
qpe.measure(n,n)
qpe.draw(output="mpl")
###Output
_____no_output_____
###Markdown
2.2 Results
###Code
backend = Aer.get_backend('qasm_simulator')
shots = 2048
results = execute(qpe, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We see we get one result (`001`) with certainty, which translates to the decimal: `1`. We now need to divide our result (`1`) by $2^n$ to get $\theta$:$$ \theta = \frac{1}{2^3} = \frac{1}{8} $$This is exactly the result we expected! 3. Example: Getting More Precision 3.1 The Problem Instead of a $T$-gate, let’s use a gate with $\theta = \frac{1}{3}$. We set up our circuit as with the last example:
###Code
# Create and set up circuit
qpe2 = QuantumCircuit(4, 3)
# Apply H-Gates to counting qubits:
for qubit in range(3):
qpe2.h(qubit)
# Prepare our eigenstate |psi>:
qpe2.x(3)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe2.cu1(angle, counting_qubit, 3);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe2, 3)
# Measure of course!
for n in range(3):
qpe2.measure(n,n)
qpe2.draw(output='mpl')
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe2, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We are expecting the result $\theta = 0.3333\dots$, and we see our most likely results are `010(bin) = 2(dec)` and `011(bin) = 3(dec)`. These two results would tell us that $\theta = 0.25$ (off by 25%) and $\theta = 0.375$ (off by 13%) respectively. The true value of $\theta$ lies between the values we can get from our counting bits, and this gives us uncertainty and imprecision. 3.2 The Solution To get more precision we simply add more counting qubits. We are going to add two more counting qubits:
###Code
# Create and set up circuit
qpe3 = QuantumCircuit(6, 5)
# Apply H-Gates to counting qubits:
for qubit in range(5):
qpe3.h(qubit)
# Prepare our eigenstate |psi>:
qpe3.x(5)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(5):
for i in range(repetitions):
qpe3.cu1(angle, counting_qubit, 5);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe3, 5)
# Measure of course!
for n in range(5):
qpe3.measure(n,n)
qpe3.draw(output='mpl')
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe3, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
The two most likely measurements are now `01011` (decimal 11) and `01010` (decimal 10). Measuring these results would tell us $\theta$ is:$$\theta = \frac{11}{2^5} = 0.344,\;\text{ or }\;\; \theta = \frac{10}{2^5} = 0.313$$These two results differ from $\frac{1}{3}$ by 3% and 6% respectively, which is much better precision! 4. Experiment with Real Devices 4.1 Circuit from 2.1 We can run the circuit from section 2.1 on a real device. Let's remind ourselves of the circuit:
###Code
qpe.draw(output='mpl')
# Load our saved IBMQ account and get the least busy real backend with at least 4 qubits
IBMQ.load_account()
from qiskit.providers.ibmq import least_busy
from qiskit.tools.monitor import job_monitor
provider = IBMQ.get_provider(hub='ibm-q')
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 4 and not x.configuration().simulator and x.status().operational==True))
print("least busy backend: ", backend)
# Run with 2048 shots
shots = 2048
job = execute(qpe, backend=backend, shots=shots, optimization_level=3)
job_monitor(job)
# get the results from the computation
results = job.result()
answer = results.get_counts(qpe)
plot_histogram(answer)
###Output
_____no_output_____
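###Markdown
Before reading the histogram it can help to see how large the circuit becomes once it is compiled for the device (a small sketch, assuming the `backend` selected above): the transpiled depth and gate counts give a feel for how much noise accumulates.
###Code
# Inspect the transpiled circuit that actually ran on the device
from qiskit import transpile
t_qpe = transpile(qpe, backend, optimization_level=3)
print('depth:', t_qpe.depth())
print('gate counts:', t_qpe.count_ops())
###Output
_____no_output_____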
###Markdown
We can hopefully see that the most likely result is `001` which is the result we would expect from the simulator. Unlike the simulator, there is a probability of measuring something other than `001`, this is due to noise and gate errors in the quantum computer. 5. Exercises 1. Try the experiments above with different gates ($\text{CNOT}$, $S$, $T^\dagger$), what results do you expect? What results do you get?2. Try the experiment with a $Y$-gate, do you get the correct result? (Hint: Remember to make sure $|\psi\rangle$ is an eigenstate of $Y$!) 6. Looking Forward The quantum phase estimation algorithm may seem pointless, since we have to know $\theta$ to perform the controlled-$U$ operations on our quantum computer. We will see in later chapters that it is possible to create circuits for which we don’t know $\theta$, and for which learning theta can tell us something very useful (most famously how to factor a number!) 7. References [1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA. 8. Contributors 03/20/2020 — Hwajung Kang (@HwajungKang) — Fixed inconsistencies with qubit ordering
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Quantum Phase Estimation Contents1. [Overview](overview) 1.1 [Intuition](intuition) 1.2 [Mathematical Basis](maths)2. [Example: T-gate](example_t_gate) 2.1 [Creating the Circuit](creating_the_circuit) 2.2 [Results](results) 3. [Getting More Precision](getting_more_precision) 3.1 [The Problem](the_problem) 3.2 [The Solution](the_solution) 4. [Experimenting on Real Devices](real_devices) 4.1 [With the Circuit from 2.1](circuit_2.1) 5. [Exercises](exercises) 6. [Looking Forward](looking_forward)7. [References](references)8. [Contributors](contributors) Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1. 1. Overview The general quantum circuit for phase estimation is shown below. The top register contains $t$ 'counting' qubits, and the bottom contains qubits in the state $|\psi\rangle$: 1.1 Intuition The quantum phase estimation algorithm uses phase kickback to write the phase of $U$ (in the Fourier basis) to the $t$ qubits in the counting register. We then use the inverse QFT to translate this from the Fourier basis into the computational basis, which we can measure.We remember (from the QFT chapter) that in the Fourier basis the topmost qubit completes one full rotation when counting between $0$ and $2^t$. To count to a number, $x$ between $0$ and $2^t$, we rotate this qubit by $\tfrac{x}{2^t}$ around the z-axis. For the next qubit we rotate by $\tfrac{2x}{2^t}$, then $\tfrac{4x}{2^t}$ for the third qubit.When we use a qubit to control the $U$-gate, the qubit will turn (due to kickback) proportionally to the phase $e^{2i\pi\theta}$. We can use successive $CU$-gates to repeat this rotation an appropriate number of times until we have encoded the phase theta as a number between $0$ and $2^t$ in the Fourier basis. Then we simply use $QFT^\dagger$ to convert this into the computational basis. 1.2 Mathematical Foundation As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:i. **Setup**: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form the counting register on which we will store the value $2^n\theta$: $$ \psi_0 = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$ ii. **Superposition**: Apply a $n$-bit Hadamard gate operation $H^{\otimes n}$ on the counting register: $$ \psi_1 = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$iii. **Controlled Unitary Operations**: We need to introduce the controlled unitary $C-U$ that applies the unitary operator $U$ on the target register only if its corresponding control bit is $|1\rangle$. 
Since $U$ is a unitary operatory with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means: $$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$Applying all the $n$ controlled operations $C − U^{2^j}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:\begin{aligned}\psi_{2} & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\& = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle\end{aligned}where $k$ denotes the integer representation of n-bit binary numbers. iv. **Inverse Fourier Transform**: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on [Quantum Fourier Transform and its Qiskit Implementation](qft.ipynb). Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as$$QFT\vert x \rangle = \frac{1}{2^\frac{n}{2}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) $$Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step 2 above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the ancilla register. Doing so, we find$$\vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle$$ v. **Measurement**: The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring in the computational basis gives the phase in the ancilla register with high probability: $$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1]. 2. Example: T-gate Let’s take a gate we know well, the $T$-gate, and use Quantum Phase Estimation to estimate its phase. You will remember that the $T$-gate adds a phase of $e^\frac{i\pi}{4}$ to the state $|1\rangle$:$$ T|1\rangle = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{4}\\ \end{bmatrix}\begin{bmatrix}0\\1\\ \end{bmatrix}= e^\frac{i\pi}{4}|1\rangle $$Since QPE will give us $\theta$ where:$$ T|1\rangle = e^{2i\pi\theta}|1\rangle $$We expect to find:$$\theta = \frac{1}{8}$$In this example we will use three qubits and obtain an _exact_ result (not an estimation!) 2.1 Creating the Circuit Let's first prepare our environment:
###Code
#initialization
import matplotlib.pyplot as plt
import numpy as np
import math
# importing Qiskit
from qiskit import IBMQ, Aer, transpile, assemble
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
# import basic plot tools
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
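###Markdown
To make the derivation above concrete before building the circuit: with $\theta = \frac{1}{8}$ and $n = 3$ counting qubits we have$$\vert\psi_2\rangle = \frac{1}{\sqrt{8}}\left(\vert0\rangle + e^{2\pi i\cdot\frac{4}{8}}\vert1\rangle\right)\otimes\left(\vert0\rangle + e^{2\pi i\cdot\frac{2}{8}}\vert1\rangle\right)\otimes\left(\vert0\rangle + e^{2\pi i\cdot\frac{1}{8}}\vert1\rangle\right)\otimes\vert\psi\rangle = \frac{1}{\sqrt{8}}\sum_{k=0}^{7}e^{2\pi i k/8}\vert k\rangle\otimes\vert\psi\rangle,$$and since $2^n\theta = 8\cdot\frac{1}{8} = 1$ is an integer, the inverse QFT maps this exactly to $\vert 001\rangle$ (decimal 1).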
###Markdown
Now, set up the quantum circuit. We will use four qubits -- qubits 0 to 2 as counting qubits, and qubit 3 as the eigenstate of the unitary operator ($T$). We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate:
###Code
qpe = QuantumCircuit(4, 3)
qpe.x(3)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next, we apply Hadamard gates to the counting qubits:
###Code
for qubit in range(3):
qpe.h(qubit)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next we perform the controlled unitary operations. **Remember:** Qiskit orders its qubits the opposite way round to the image above.
###Code
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe.cp(math.pi/4, counting_qubit, 3); # This is C-U
repetitions *= 2
qpe.draw()
###Output
_____no_output_____
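###Markdown
A short aside (a sketch, not part of the algorithm): the reason `cp(math.pi/4)` can play the role of $C-U$ here is that, for $U = T$, the controlled-$U$ is simply a controlled phase of $\pi/4$. We can confirm this with `qiskit.quantum_info.Operator`, which compares unitaries up to a global phase:
###Code
from qiskit.quantum_info import Operator
ct = QuantumCircuit(2)
ct.cp(math.pi/4, 0, 1)                                  # candidate C-U
controlled_T = np.diag([1, 1, 1, np.exp(1j*np.pi/4)])   # only |11> picks up the T phase
print(Operator(ct).equiv(Operator(controlled_T)))       # expect True
###Output
_____no_output_____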
###Markdown
We apply the inverse quantum Fourier transformation to convert the state of the counting register. Here we provide the code for $QFT^\dagger$:
###Code
def qft_dagger(qc, n):
"""n-qubit QFTdagger the first n qubits in circ"""
# Don't forget the Swaps!
for qubit in range(n//2):
qc.swap(qubit, n-qubit-1)
for j in range(n):
for m in range(j):
qc.cp(-math.pi/float(2**(j-m)), m, j)
qc.h(j)
###Output
_____no_output_____
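###Markdown
Before using it in the full circuit, we can check `qft_dagger` on its own (a small sketch using `qiskit.quantum_info.Statevector`): write the phases $e^{2\pi i\theta 2^j}$ for $\theta = 1/8$ directly onto three qubits, apply `qft_dagger`, and confirm that essentially all of the weight lands on `001`.
###Code
from qiskit.quantum_info import Statevector
theta = 1/8
test = QuantumCircuit(3)
for j in range(3):
    test.h(j)
    test.p(2*math.pi*theta*(2**j), j)   # the phase that C-U^(2^j) would kick back onto qubit j
qft_dagger(test, 3)
print(Statevector.from_instruction(test).probabilities_dict())   # expect all weight on '001'
###Output
_____no_output_____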
###Markdown
We then measure the counting register:
###Code
qpe.barrier()
# Apply inverse QFT
qft_dagger(qpe, 3)
# Measure
qpe.barrier()
for n in range(3):
qpe.measure(n,n)
qpe.draw()
###Output
_____no_output_____
###Markdown
2.2 Results
###Code
qasm_sim = Aer.get_backend('qasm_simulator')
shots = 2048
t_qpe = transpile(qpe, qasm_sim)
qobj = assemble(t_qpe, shots=shots)
results = qasm_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
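###Markdown
As a small convenience (a sketch), we can also extract the estimate programmatically instead of reading it off the histogram: take the most frequent bitstring from `answer`, interpret it as an integer, and divide by $2^n$.
###Code
most_common = max(answer, key=answer.get)        # most frequent measured bitstring
theta_estimate = int(most_common, 2) / 2**3      # n = 3 counting qubits
print(most_common, "->", theta_estimate)         # expect '001' -> 0.125
###Output
_____no_output_____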
###Markdown
We see we get one result (`001`) with certainty, which translates to the decimal: `1`. We now need to divide our result (`1`) by $2^n$ to get $\theta$:$$ \theta = \frac{1}{2^3} = \frac{1}{8} $$This is exactly the result we expected! 3. Example: Getting More Precision 3.1 The Problem Instead of a $T$-gate, let’s use a gate with $\theta = \frac{1}{3}$. We set up our circuit as with the last example:
###Code
# Create and set up circuit
qpe2 = QuantumCircuit(4, 3)
# Apply H-Gates to counting qubits:
for qubit in range(3):
qpe2.h(qubit)
# Prepare our eigenstate |psi>:
qpe2.x(3)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe2.cp(angle, counting_qubit, 3);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe2, 3)
# Measure of course!
for n in range(3):
qpe2.measure(n,n)
qpe2.draw()
# Let's see the results!
qasm_sim = Aer.get_backend('qasm_simulator')
shots = 4096
t_qpe2 = transpile(qpe2, qasm_sim)
qobj = assemble(t_qpe2, shots=shots)
results = qasm_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
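###Markdown
The spread in the histogram above is exactly what the formula from the Measurement step predicts. A quick numpy evaluation (a sketch, not a circuit) of $P(x) = \left|\frac{1}{2^n}\sum_k e^{-\frac{2\pi i k}{2^n}(x - 2^n\theta)}\right|^2$ for $\theta = 1/3$ and $n = 3$:
###Code
n = 3
theta = 1/3
N = 2**n
k = np.arange(N)
for x in range(N):
    amp = np.sum(np.exp(-2j*np.pi*k*(x - N*theta)/N)) / N
    print(f"x = {x} ({x:03b}): P = {abs(amp)**2:.3f}")
###Output
_____no_output_____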
###Markdown
We are expecting the result $\theta = 0.3333\dots$, and we see our most likely results are `010(bin) = 2(dec)` and `011(bin) = 3(dec)`. These two results would tell us that $\theta = 0.25$ (off by 25%) and $\theta = 0.375$ (off by 13%) respectively. The true value of $\theta$ lies between the values we can get from our counting bits, and this gives us uncertainty and imprecision. 3.2 The Solution To get more precision we simply add more counting qubits. We are going to add two more counting qubits:
###Code
# Create and set up circuit
qpe3 = QuantumCircuit(6, 5)
# Apply H-Gates to counting qubits:
for qubit in range(5):
qpe3.h(qubit)
# Prepare our eigenstate |psi>:
qpe3.x(5)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(5):
for i in range(repetitions):
qpe3.cp(angle, counting_qubit, 5);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe3, 5)
# Measure of course!
qpe3.barrier()
for n in range(5):
qpe3.measure(n,n)
qpe3.draw()
# Let's see the results!
qasm_sim = Aer.get_backend('qasm_simulator')
shots = 4096
t_qpe3 = transpile(qpe3, qasm_sim)
qobj = assemble(t_qpe3, shots=shots)
results = qasm_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
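###Markdown
The trade-off is easy to quantify (a sketch): with $t$ counting qubits the best representable estimate is $\mathrm{round}(2^t\theta)/2^t$, so each extra counting qubit roughly halves the worst-case error.
###Code
theta = 1/3
for t in range(3, 9):
    best = round((2**t) * theta) / (2**t)
    print(f"t = {t}: closest estimate = {best:.5f}, error = {abs(best - theta):.5f}")
###Output
_____no_output_____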
###Markdown
The two most likely measurements are now `01011` (decimal 11) and `01010` (decimal 10). Measuring these results would tell us $\theta$ is:$$\theta = \frac{11}{2^5} = 0.344,\;\text{ or }\;\; \theta = \frac{10}{2^5} = 0.313$$These two results differ from $\frac{1}{3}$ by 3% and 6% respectively. A much better precision! 4. Experiment with Real Devices 4.1 Circuit from 2.1 We can run the circuit in section 2.1 on a real device, let's remind ourselves of the circuit:
###Code
qpe.draw()
IBMQ.load_account()
from qiskit.tools.monitor import job_monitor
provider = IBMQ.get_provider(hub='ibm-q')
santiago = provider.get_backend('ibmq_santiago')
# Run with 2048 shots
shots = 2048
t_qpe = transpile(qpe, santiago, optimization_level=3)
qobj = assemble(t_qpe, shots=shots)
job = santiago.run(qobj)
job_monitor(job)
# get the results from the computation
results = job.result()
answer = results.get_counts(qpe)
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We can hopefully see that the most likely result is `001` which is the result we would expect from the simulator. Unlike the simulator, there is a probability of measuring something other than `001`, this is due to noise and gate errors in the quantum computer. 5. Exercises 1. Try the experiments above with different gates ($\text{CNOT}$, Controlled-$S$, Controlled-$T^\dagger$), what results do you expect? What results do you get?2. Try the experiment with a Controlled-$Y$-gate, do you get the result you expected? (Hint: Remember to make sure $|\psi\rangle$ is an eigenstate of $Y$!) 6. Looking Forward The quantum phase estimation algorithm may seem pointless, since we have to know $\theta$ to perform the controlled-$U$ operations on our quantum computer. We will see in later chapters that it is possible to create circuits for which we don’t know $\theta$, and for which learning theta can tell us something very useful (most famously how to factor a number!) 7. References [1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA. 8. Contributors 03/20/2020 — Hwajung Kang (@HwajungKang) — Fixed inconsistencies with qubit ordering
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Quantum Phase Estimation Contents1. [Overview](overview) 1.1 [Intuition](intuition) 1.2 [Mathematical Basis](maths)2. [Example: T-gate](example_t_gate) 2.1 [Creating the Circuit](creating_the_circuit) 2.2 [Results](results) 3. [Getting More Precision](getting_more_precision) 3.1 [The Problem](the_problem) 3.2 [The Solution](the_solution) 4. [Experimenting on Real Devices](real_devices) 4.1 [With the Circuit from 2.1](circuit_2.1) 5. [Exercises](exercises) 6. [Looking Forward](looking_forward)7. [References](references)8. [Contributors](contributors) Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1. 1. Overview The general quantum circuit for phase estimation is shown below. The top register contains $t$ 'counting' qubits, and the bottom contains qubits in the state $|\psi\rangle$: 1.1 Intuition The quantum phase estimation algorithm uses phase kickback to write the phase of $U$ (in the Fourier basis) to the $t$ qubits in the counting register. We then use the inverse QFT to translate this from the Fourier basis into the computational basis, which we can measure.We remember (from the QFT chapter) that in the Fourier basis the topmost qubit completes one full rotation when counting between $0$ and $2^t$. To count to a number, $x$ between $0$ and $2^t$, we rotate this qubit by $\tfrac{x}{2^t}$ around the z-axis. For the next qubit we rotate by $\tfrac{2x}{2^t}$, then $\tfrac{4x}{2^t}$ for the third qubit.When we use a qubit to control the $U$-gate, the qubit will turn (due to kickback) proportionally to the phase $e^{2i\pi\theta}$. We can use successive $CU$-gates to repeat this rotation an appropriate number of times until we have encoded the phase theta as a number between $0$ and $2^t$ in the Fourier basis. Then we simply use $QFT^\dagger$ to convert this into the computational basis. 1.2 Mathematical Foundation As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:i. **Setup**: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form the counting register on which we will store the value $2^n\theta$: $$ \psi_0 = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$ ii. **Superposition**: Apply a $n$-bit Hadamard gate operation $H^{\otimes n}$ on the counting register: $$ \psi_1 = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$iii. **Controlled Unitary Operations**: We need to introduce the controlled unitary $C-U$ that applies the unitary operator $U$ on the target register only if its corresponding control bit is $|1\rangle$. 
Since $U$ is a unitary operator with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means: $$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$Applying all the $n$ controlled operations $C − U^{2^j}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:\begin{aligned}\psi_{2} & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\& = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle\end{aligned}where $k$ denotes the integer representation of n-bit binary numbers. iv. **Inverse Fourier Transform**: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on [Quantum Fourier Transform and its Qiskit Implementation](https://qiskit.org/textbook/ch-algorithms/quantum-fourier-transform.html). Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as$$QFT\vert x \rangle = \frac{1}{2^\frac{n}{2}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) $$Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step 2 above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the auxiliary register. Doing so, we find$$\vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle$$ v. **Measurement**: The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring in the computational basis gives the phase in the auxiliary register with high probability: $$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1]. 2. Example: T-gate Let’s take a gate we know well, the $T$-gate, and use Quantum Phase Estimation to estimate its phase. You will remember that the $T$-gate adds a phase of $e^\frac{i\pi}{4}$ to the state $|1\rangle$:$$ T|1\rangle = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{4}\\ \end{bmatrix}\begin{bmatrix}0\\1\\ \end{bmatrix}= e^\frac{i\pi}{4}|1\rangle $$Since QPE will give us $\theta$ where:$$ T|1\rangle = e^{2i\pi\theta}|1\rangle $$We expect to find:$$\theta = \frac{1}{8}$$In this example we will use three qubits and obtain an _exact_ result (not an estimation!) 2.1 Creating the Circuit Let's first prepare our environment:
###Code
#initialization
import matplotlib.pyplot as plt
import numpy as np
import math
# importing Qiskit
from qiskit import IBMQ, Aer, transpile, assemble
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
# import basic plot tools
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
###Markdown
Now, set up the quantum circuit. We will use four qubits -- qubits 0 to 2 as counting qubits, and qubit 3 as the eigenstate of the unitary operator ($T$). We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate:
###Code
qpe = QuantumCircuit(4, 3)
qpe.x(3)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next, we apply Hadamard gates to the counting qubits:
###Code
for qubit in range(3):
qpe.h(qubit)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next we perform the controlled unitary operations. **Remember:** Qiskit orders its qubits the opposite way round to the image above.
###Code
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe.cp(math.pi/4, counting_qubit, 3); # This is C-U
repetitions *= 2
qpe.draw()
###Output
_____no_output_____
###Markdown
We apply the inverse quantum Fourier transformation to convert the state of the counting register. Here we provide the code for $QFT^\dagger$:
###Code
def qft_dagger(qc, n):
"""n-qubit QFTdagger the first n qubits in circ"""
# Don't forget the Swaps!
for qubit in range(n//2):
qc.swap(qubit, n-qubit-1)
for j in range(n):
for m in range(j):
qc.cp(-math.pi/float(2**(j-m)), m, j)
qc.h(j)
###Output
_____no_output_____
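###Markdown
As a quick correctness check (a sketch), we can compare the unitary of `qft_dagger` against the inverse of the standard QFT matrix written in Qiskit's little-endian integer ordering; `Operator.equiv` ignores a possible global phase.
###Code
from qiskit.quantum_info import Operator
N = 2**3
omega = np.exp(2j*np.pi/N)
qft_matrix = np.array([[omega**(j*k) for k in range(N)] for j in range(N)]) / np.sqrt(N)
check = QuantumCircuit(3)
qft_dagger(check, 3)
print(Operator(check).equiv(Operator(qft_matrix.conj().T)))   # should report True if the conventions line up
###Output
_____no_output_____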
###Markdown
We then measure the counting register:
###Code
qpe.barrier()
# Apply inverse QFT
qft_dagger(qpe, 3)
# Measure
qpe.barrier()
for n in range(3):
qpe.measure(n,n)
qpe.draw()
###Output
_____no_output_____
###Markdown
2.2 Results
###Code
aer_sim = Aer.get_backend('aer_simulator')
shots = 2048
t_qpe = transpile(qpe, aer_sim)
qobj = assemble(t_qpe, shots=shots)
results = aer_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We see we get one result (`001`) with certainty, which translates to the decimal: `1`. We now need to divide our result (`1`) by $2^n$ to get $\theta$:$$ \theta = \frac{1}{2^3} = \frac{1}{8} $$This is exactly the result we expected! 3. Example: Getting More Precision 3.1 The Problem Instead of a $T$-gate, let’s use a gate with $\theta = \frac{1}{3}$. We set up our circuit as with the last example:
###Code
# Create and set up circuit
qpe2 = QuantumCircuit(4, 3)
# Apply H-Gates to counting qubits:
for qubit in range(3):
qpe2.h(qubit)
# Prepare our eigenstate |psi>:
qpe2.x(3)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe2.cp(angle, counting_qubit, 3);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe2, 3)
# Measure of course!
for n in range(3):
qpe2.measure(n,n)
qpe2.draw()
# Let's see the results!
aer_sim = Aer.get_backend('aer_simulator')
shots = 4096
t_qpe2 = transpile(qpe2, aer_sim)
qobj = assemble(t_qpe2, shots=shots)
results = aer_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We are expecting the result $\theta = 0.3333\dots$, and we see our most likely results are `010(bin) = 2(dec)` and `011(bin) = 3(dec)`. These two results would tell us that $\theta = 0.25$ (off by 25%) and $\theta = 0.375$ (off by 13%) respectively. The true value of $\theta$ lies between the values we can get from our counting bits, and this gives us uncertainty and imprecision. 3.2 The Solution To get more precision we simply add more counting qubits. We are going to add two more counting qubits:
###Code
# Create and set up circuit
qpe3 = QuantumCircuit(6, 5)
# Apply H-Gates to counting qubits:
for qubit in range(5):
qpe3.h(qubit)
# Prepare our eigenstate |psi>:
qpe3.x(5)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(5):
for i in range(repetitions):
qpe3.cp(angle, counting_qubit, 5);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe3, 5)
# Measure of course!
qpe3.barrier()
for n in range(5):
qpe3.measure(n,n)
qpe3.draw()
# Let's see the results!
aer_sim = Aer.get_backend('aer_simulator')
shots = 4096
t_qpe3 = transpile(qpe3, aer_sim)
qobj = assemble(t_qpe3, shots=shots)
results = aer_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
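###Markdown
The Measurement step above quoted a success probability of at least $4/\pi^2 \approx 40\%$ when $2^t\theta$ is not an integer. A small numerical look (a sketch) at $\theta = 1/3$ shows the best $t$-bit estimate is in fact measured with probability well above that bound, around $0.68$ for this $\theta$:
###Code
theta = 1/3
for t in range(3, 10):
    N = 2**t
    k = np.arange(N)
    best = round(N * theta)
    amp = np.sum(np.exp(-2j*np.pi*k*(best - N*theta)/N)) / N
    print(f"t = {t}: P(best estimate) = {abs(amp)**2:.3f}")
###Output
_____no_output_____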
###Markdown
The two most likely measurements are now `01011` (decimal 11) and `01010` (decimal 10). Measuring these results would tell us $\theta$ is:$$\theta = \frac{11}{2^5} = 0.344,\;\text{ or }\;\; \theta = \frac{10}{2^5} = 0.313$$These two results differ from $\frac{1}{3}$ by 3% and 6% respectively. A much better precision! 4. Experiment with Real Devices 4.1 Circuit from 2.1 We can run the circuit in section 2.1 on a real device, let's remind ourselves of the circuit:
###Code
qpe.draw()
IBMQ.load_account()
from qiskit.tools.monitor import job_monitor
provider = IBMQ.get_provider(hub='ibm-q')
santiago = provider.get_backend('ibmq_santiago')
# Run with 2048 shots
shots = 2048
t_qpe = transpile(qpe, santiago, optimization_level=3)
job = santiago.run(t_qpe, shots=shots)
job_monitor(job)
# get the results from the computation
results = job.result()
answer = results.get_counts(qpe)
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We can hopefully see that the most likely result is `001` which is the result we would expect from the simulator. Unlike the simulator, there is a probability of measuring something other than `001`, this is due to noise and gate errors in the quantum computer. 5. Exercises 1. Try the experiments above with different gates ($\text{CNOT}$, Controlled-$S$, Controlled-$T^\dagger$), what results do you expect? What results do you get?2. Try the experiment with a Controlled-$Y$-gate, do you get the result you expected? (Hint: Remember to make sure $|\psi\rangle$ is an eigenstate of $Y$!) 6. Looking Forward The quantum phase estimation algorithm may seem pointless, since we have to know $\theta$ to perform the controlled-$U$ operations on our quantum computer. We will see in later chapters that it is possible to create circuits for which we don’t know $\theta$, and for which learning theta can tell us something very useful (most famously how to factor a number!) 7. References [1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA. 8. Contributors 03/20/2020 — Hwajung Kang (@HwajungKang) — Fixed inconsistencies with qubit ordering
###Code
import qiskit.tools.jupyter
%qiskit_version_table
###Output
_____no_output_____
###Markdown
Quantum Phase Estimation Contents1. [Overview](overview) 1.1 [Intuition](intuition) 1.2 [Mathematical Basis](maths)2. [Example: T-gate](example_t_gate) 2.1 [Creating the Circuit](creating_the_circuit) 2.2 [Results](results) 3. [Getting More Precision](getting_more_precision) 3.1 [The Problem](the_problem) 3.2 [The Solution](the_solution) 4. [Experimenting on Real Devices](real_devices) 4.1 [With the Circuit from 2.1](circuit_2.1) 5. [Exercises](exercises) 6. [Looking Forward](looking_forward)7. [References](references)8. [Contributors](contributors) Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1. 1. Overview The general quantum circuit for phase estimation is shown below. The top register contains $t$ 'counting' qubits, and the bottom contains qubits in the state $|\psi\rangle$: 1.1 Intuition The quantum phase estimation algorithm uses phase kickback to write the phase of $U$ (in the Fourier basis) to the $t$ qubits in the counting register. We then use the inverse QFT to translate this from the Fourier basis into the computational basis, which we can measure.We remember (from the QFT chapter) that in the Fourier basis the topmost qubit completes one full rotation when counting between $0$ and $2^t$. To count to a number, $x$ between $0$ and $2^t$, we rotate this qubit by $\tfrac{x}{2^t}$ around the z-axis. For the next qubit we rotate by $\tfrac{2x}{2^t}$, then $\tfrac{4x}{2^t}$ for the third qubit.When we use a qubit to control the $U$-gate, the qubit will turn (due to kickback) proportionally to the phase $e^{2i\pi\theta}$. We can use successive $CU$-gates to repeat this rotation an appropriate number of times until we have encoded the phase theta as a number between $0$ and $2^t$ in the Fourier basis. Then we simply use $QFT^\dagger$ to convert this into the computational basis. 1.2 Mathematical Foundation As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:i. **Setup**: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form the counting register on which we will store the value $2^n\theta$: $$ \psi_0 = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$ ii. **Superposition**: Apply a $n$-bit Hadamard gate operation $H^{\otimes n}$ on the counting register: $$ \psi_1 = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$iii. **Controlled Unitary Operations**: We need to introduce the controlled unitary $C-U$ that applies the unitary operator $U$ on the target register only if its corresponding control bit is $|1\rangle$. 
Since $U$ is a unitary operator with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means: $$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$Applying all the $n$ controlled operations $C − U^{2^j}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:\begin{aligned}\psi_{2} & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\& = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle\end{aligned}where $k$ denotes the integer representation of n-bit binary numbers. iv. **Inverse Fourier Transform**: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on [Quantum Fourier Transform and its Qiskit Implementation](qft.ipynb). Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as$$QFT\vert x \rangle = \frac{1}{2^\frac{n}{2}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) $$Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step 2 above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the ancilla register. Doing so, we find$$\vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle$$ v. **Measurement**: The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring in the computational basis gives the phase in the ancilla register with high probability: $$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1]. 2. Example: T-gate Let’s take a gate we know well, the $T$-gate, and use Quantum Phase Estimation to estimate its phase. You will remember that the $T$-gate adds a phase of $e^\frac{i\pi}{4}$ to the state $|1\rangle$:$$ T|1\rangle = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{4}\\ \end{bmatrix}\begin{bmatrix}0\\1\\ \end{bmatrix}= e^\frac{i\pi}{4}|1\rangle $$Since QPE will give us $\theta$ where:$$ T|1\rangle = e^{2i\pi\theta}|1\rangle $$We expect to find:$$\theta = \frac{1}{8}$$In this example we will use three qubits and obtain an _exact_ result (not an estimation!) 2.1 Creating the Circuit Let's first prepare our environment:
###Code
#initialization
import matplotlib.pyplot as plt
import numpy as np
import math
# importing Qiskit
from qiskit import IBMQ, Aer
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
# import basic plot tools
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
###Markdown
Now, set up the quantum circuit. We will use four qubits -- qubits 0 to 2 as counting qubits, and qubit 3 as the eigenstate of the unitary operator ($T$). We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate:
###Code
qpe = QuantumCircuit(4, 3)
qpe.x(3)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next, we apply Hadamard gates to the counting qubits:
###Code
for qubit in range(3):
qpe.h(qubit)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next we perform the controlled unitary operations. **Remember:** Qiskit orders its qubits the opposite way round to the image above.
###Code
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe.cu1(math.pi/4, counting_qubit, 3); # This is C-U
repetitions *= 2
qpe.draw()
###Output
_____no_output_____
###Markdown
We apply the inverse quantum Fourier transformation to convert the state of the counting register. Here we provide the code for $QFT^\dagger$:
###Code
def qft_dagger(circ, n):
"""n-qubit QFTdagger the first n qubits in circ"""
# Don't forget the Swaps!
for qubit in range(n//2):
circ.swap(qubit, n-qubit-1)
for j in range(n):
for m in range(j):
circ.cu1(-math.pi/float(2**(j-m)), m, j)
circ.h(j)
###Output
_____no_output_____
###Markdown
We then measure the counting register:
###Code
qpe.barrier()
# Apply inverse QFT
qft_dagger(qpe, 3)
# Measure
qpe.barrier()
for n in range(3):
qpe.measure(n,n)
qpe.draw()
###Output
_____no_output_____
###Markdown
2.2 Results
###Code
backend = Aer.get_backend('qasm_simulator')
shots = 2048
results = execute(qpe, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We see we get one result (`001`) with certainty, which translates to the decimal: `1`. We now need to divide our result (`1`) by $2^n$ to get $\theta$:$$ \theta = \frac{1}{2^3} = \frac{1}{8} $$This is exactly the result we expected! 3. Example: Getting More Precision 3.1 The Problem Instead of a $T$-gate, let’s use a gate with $\theta = \frac{1}{3}$. We set up our circuit as with the last example:
###Code
# Create and set up circuit
qpe2 = QuantumCircuit(4, 3)
# Apply H-Gates to counting qubits:
for qubit in range(3):
qpe2.h(qubit)
# Prepare our eigenstate |psi>:
qpe2.x(3)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe2.cu1(angle, counting_qubit, 3);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe2, 3)
# Measure of course!
for n in range(3):
qpe2.measure(n,n)
qpe2.draw()
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe2, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We are expecting the result $\theta = 0.3333\dots$, and we see our most likely results are `010(bin) = 2(dec)` and `011(bin) = 3(dec)`. These two results would tell us that $\theta = 0.25$ (off by 25%) and $\theta = 0.375$ (off by 13%) respectively. The true value of $\theta$ lies between the values we can get from our counting bits, and this gives us uncertainty and imprecision. 3.2 The Solution To get more precision we simply add more counting qubits. We are going to add two more counting qubits:
###Code
# Create and set up circuit
qpe3 = QuantumCircuit(6, 5)
# Apply H-Gates to counting qubits:
for qubit in range(5):
qpe3.h(qubit)
# Prepare our eigenstate |psi>:
qpe3.x(5)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(5):
for i in range(repetitions):
qpe3.cu1(angle, counting_qubit, 5);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe3, 5)
# Measure of course!
qpe3.barrier()
for n in range(5):
qpe3.measure(n,n)
qpe3.draw()
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe3, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
The two most likely measurements are now `01011` (decimal 11) and `01010` (decimal 10). Measuring these results would tell us $\theta$ is:$$\theta = \frac{11}{2^5} = 0.344,\;\text{ or }\;\; \theta = \frac{10}{2^5} = 0.313$$These two results differ from $\frac{1}{3}$ by 3% and 6% respectively. A much better precision! 4. Experiment with Real Devices 4.1 Circuit from 2.1 We can run the circuit in section 2.1 on a real device, let's remind ourselves of the circuit:
###Code
qpe.draw()
# Load our saved IBMQ account and pick a real backend to run on
# (least_busy(provider.backends(...)) is an alternative way to choose one automatically)
IBMQ.load_account()
from qiskit.providers.ibmq import least_busy
from qiskit.tools.monitor import job_monitor
provider = IBMQ.get_provider(hub='ibm-q')
backend = provider.get_backend('ibmq_vigo')
# Run with 2048 shots
shots = 2048
job = execute(qpe, backend=backend, shots=shots, optimization_level=3)
job_monitor(job)
# get the results from the computation
results = job.result()
answer = results.get_counts(qpe)
plot_histogram(answer)
###Output
_____no_output_____
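###Markdown
One rough way to see how much noise the hardware adds (a sketch): compute the fraction of shots that returned the ideal answer `001`, using the `answer` counts dictionary from the job above.
###Code
correct = answer.get('001', 0)
total = sum(answer.values())
print(f"Shots giving '001': {correct}/{total} = {correct/total:.1%}")
###Output
_____no_output_____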
###Markdown
We can hopefully see that the most likely result is `001` which is the result we would expect from the simulator. Unlike the simulator, there is a probability of measuring something other than `001`, this is due to noise and gate errors in the quantum computer. 5. Exercises 1. Try the experiments above with different gates ($\text{CNOT}$, $S$, $T^\dagger$), what results do you expect? What results do you get?2. Try the experiment with a $Y$-gate, do you get the correct result? (Hint: Remember to make sure $|\psi\rangle$ is an eigenstate of $Y$!) 6. Looking Forward The quantum phase estimation algorithm may seem pointless, since we have to know $\theta$ to perform the controlled-$U$ operations on our quantum computer. We will see in later chapters that it is possible to create circuits for which we don’t know $\theta$, and for which learning theta can tell us something very useful (most famously how to factor a number!) 7. References [1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA. 8. Contributors 03/20/2020 — Hwajung Kang (@HwajungKang) — Fixed inconsistencies with qubit ordering
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Quantum Phase Estimation Contents1. [Overview](overview) 1.1 [Intuition](intuition) 1.2 [Mathematical Basis](maths)2. [Example: T-gate](example_t_gate) 2.1 [Creating the Circuit](creating_the_circuit) 2.2 [Results](results) 3. [Getting More Precision](getting_more_precision) 3.1 [The Problem](the_problem) 3.2 [The Solution](the_solution) 4. [Experimenting on Real Devices](real_devices) 4.1 [With the Circuit from 2.1](circuit_2.1) 5. [Exercises](exercises) 6. [Looking Forward](looking_forward)7. [References](references)8. [Contributors](contributors) Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1. 1. Overview The general quantum circuit for phase estimation is shown below. The top register contains $t$ 'counting' qubits, and the bottom contains qubits in the state $|\psi\rangle$: 1.1 Intuition The quantum phase estimation algorithm uses phase kickback to write the phase of $U$ (in the Fourier basis) to the $t$ qubits in the counting register. We then use the inverse QFT to translate this from the Fourier basis into the computational basis, which we can measure.We remember (from the QFT chapter) that in the Fourier basis the topmost qubit completes one full rotation when counting between $0$ and $2^t$. To count to a number, $x$ between $0$ and $2^t$, we rotate this qubit by $\tfrac{x}{2^t}$ around the z-axis. For the next qubit we rotate by $\tfrac{2x}{2^t}$, then $\tfrac{4x}{2^t}$ for the third qubit.When we use a qubit to control the $U$-gate, the qubit will turn (due to kickback) proportionally to the phase $e^{2i\pi\theta}$. We can use successive $CU$-gates to repeat this rotation an appropriate number of times until we have encoded the phase theta as a number between $0$ and $2^t$ in the Fourier basis. Then we simply use $QFT^\dagger$ to convert this into the computational basis. 1.2 Mathematical Foundation As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:i. **Setup**: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form the counting register on which we will store the value $2^n\theta$: $$ \psi_0 = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$ ii. **Superposition**: Apply a $n$-bit Hadamard gate operation $H^{\otimes n}$ on the counting register: $$ \psi_1 = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$iii. **Controlled Unitary Operations**: We need to introduce the controlled unitary $C-U$ that applies the unitary operator $U$ on the target register only if its corresponding control bit is $|1\rangle$. 
Since $U$ is a unitary operator with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means: $$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$Applying all the $n$ controlled operations $C − U^{2^j}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:\begin{aligned}\psi_{2} & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\& = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle\end{aligned}where $k$ denotes the integer representation of n-bit binary numbers. iv. **Inverse Fourier Transform**: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on [Quantum Fourier Transform and its Qiskit Implementation](qft.ipynb). Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as$$QFT\vert x \rangle = \frac{1}{2^\frac{n}{2}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) $$Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step 2 above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the ancilla register. Doing so, we find$$\vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle$$ v. **Measurement**: The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring in the computational basis gives the phase in the ancilla register with high probability: $$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1]. 2. Example: T-gate Let’s take a gate we know well, the $T$-gate, and use Quantum Phase Estimation to estimate its phase. You will remember that the $T$-gate adds a phase of $e^\frac{i\pi}{4}$ to the state $|1\rangle$:$$ T|1\rangle = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{4}\\ \end{bmatrix}\begin{bmatrix}0\\1\\ \end{bmatrix}= e^\frac{i\pi}{4}|1\rangle $$Since QPE will give us $\theta$ where:$$ T|1\rangle = e^{2i\pi\theta}|1\rangle $$We expect to find:$$\theta = \frac{1}{8}$$In this example we will use three qubits and obtain an _exact_ result (not an estimation!) 2.1 Creating the Circuit Let's first prepare our environment:
###Code
#initialization
import matplotlib.pyplot as plt
import numpy as np
import math
# importing Qiskit
from qiskit import IBMQ, Aer
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
# import basic plot tools
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
###Markdown
Now, set up the quantum circuit. We will use four qubits -- qubits 0 to 2 as counting qubits, and qubit 3 as the eigenstate of the unitary operator ($T$). We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate:
###Code
qpe = QuantumCircuit(4, 3)
qpe.x(3)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next, we apply Hadamard gates to the counting qubits:
###Code
for qubit in range(3):
qpe.h(qubit)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next we perform the controlled unitary operations. **Remember:** Qiskit orders its qubits the opposite way round to the image above.
###Code
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe.cp(math.pi/4, counting_qubit, 3); # This is C-U
repetitions *= 2
qpe.draw()
###Output
_____no_output_____
###Markdown
We apply the inverse quantum Fourier transformation to convert the state of the counting register. Here we provide the code for $QFT^\dagger$:
###Code
def qft_dagger(qc, n):
"""n-qubit QFTdagger the first n qubits in circ"""
# Don't forget the Swaps!
for qubit in range(n//2):
qc.swap(qubit, n-qubit-1)
for j in range(n):
for m in range(j):
qc.cp(-math.pi/float(2**(j-m)), m, j)
qc.h(j)
###Output
_____no_output_____
###Markdown
We then measure the counting register:
###Code
qpe.barrier()
# Apply inverse QFT
qft_dagger(qpe, 3)
# Measure
qpe.barrier()
for n in range(3):
qpe.measure(n,n)
qpe.draw()
###Output
_____no_output_____
###Markdown
2.2 Results
###Code
backend = Aer.get_backend('qasm_simulator')
shots = 2048
results = execute(qpe, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We see we get one result (`001`) with certainty, which translates to the decimal: `1`. We now need to divide our result (`1`) by $2^n$ to get $\theta$:$$ \theta = \frac{1}{2^3} = \frac{1}{8} $$This is exactly the result we expected! 3. Example: Getting More Precision 3.1 The Problem Instead of a $T$-gate, let’s use a gate with $\theta = \frac{1}{3}$. We set up our circuit as with the last example:
###Code
# Create and set up circuit
qpe2 = QuantumCircuit(4, 3)
# Apply H-Gates to counting qubits:
for qubit in range(3):
qpe2.h(qubit)
# Prepare our eigenstate |psi>:
qpe2.x(3)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
        qpe2.cp(angle, counting_qubit, 3);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe2, 3)
# Measure of course!
for n in range(3):
qpe2.measure(n,n)
qpe2.draw()
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe2, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We are expecting the result $\theta = 0.3333\dots$, and we see our most likely results are `010(bin) = 2(dec)` and `011(bin) = 3(dec)`. These two results would tell us that $\theta = 0.25$ (off by 25%) and $\theta = 0.375$ (off by 13%) respectively. The true value of $\theta$ lies between the values we can get from our counting bits, and this gives us uncertainty and imprecision. 3.2 The Solution To get more precision we simply add more counting qubits. We are going to add two more counting qubits:
###Code
# Create and set up circuit
qpe3 = QuantumCircuit(6, 5)
# Apply H-Gates to counting qubits:
for qubit in range(5):
qpe3.h(qubit)
# Prepare our eigenstate |psi>:
qpe3.x(5)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(5):
for i in range(repetitions):
        qpe3.cp(angle, counting_qubit, 5);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe3, 5)
# Measure of course!
qpe3.barrier()
for n in range(5):
qpe3.measure(n,n)
qpe3.draw()
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe3, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
The two most likely measurements are now `01011` (decimal 11) and `01010` (decimal 10). Measuring these results would tell us $\theta$ is:$$\theta = \frac{11}{2^5} = 0.344,\;\text{ or }\;\; \theta = \frac{10}{2^5} = 0.313$$These two results differ from $\frac{1}{3}$ by 3% and 6% respectively. A much better precision! 4. Experiment with Real Devices 4.1 Circuit from 2.1 We can run the circuit in section 2.1 on a real device, let's remind ourselves of the circuit:
###Code
qpe.draw()
IBMQ.load_account()
from qiskit.tools.monitor import job_monitor
provider = IBMQ.get_provider(hub='ibm-q')
backend = provider.get_backend('ibmq_santiago')
# Run with 2048 shots
shots = 2048
job = execute(qpe, backend=backend, shots=shots, optimization_level=3)
job_monitor(job)
# get the results from the computation
results = job.result()
answer = results.get_counts(qpe)
plot_histogram(answer)
###Output
_____no_output_____
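###Markdown
For a quick visual comparison (a sketch), we can re-run the same circuit on the noiseless `qasm_simulator` and overlay both distributions; `plot_histogram` accepts a list of counts dictionaries together with a legend.
###Code
# Re-run qpe on the ideal simulator and plot it next to the hardware counts in `answer`
sim_counts = execute(qpe, backend=Aer.get_backend('qasm_simulator'), shots=shots).result().get_counts()
plot_histogram([sim_counts, answer], legend=['qasm_simulator', 'ibmq_santiago'])
###Output
_____no_output_____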
###Markdown
We can hopefully see that the most likely result is `001` which is the result we would expect from the simulator. Unlike the simulator, there is a probability of measuring something other than `001`, this is due to noise and gate errors in the quantum computer. 5. Exercises 1. Try the experiments above with different gates ($\text{CNOT}$, Controlled-$S$, Controlled-$T^\dagger$), what results do you expect? What results do you get?2. Try the experiment with a Controlled-$Y$-gate, do you get the result you expected? (Hint: Remember to make sure $|\psi\rangle$ is an eigenstate of $Y$!) 6. Looking Forward The quantum phase estimation algorithm may seem pointless, since we have to know $\theta$ to perform the controlled-$U$ operations on our quantum computer. We will see in later chapters that it is possible to create circuits for which we don’t know $\theta$, and for which learning theta can tell us something very useful (most famously how to factor a number!) 7. References [1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA. 8. Contributors 03/20/2020 — Hwajung Kang (@HwajungKang) — Fixed inconsistencies with qubit ordering
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Quantum Phase Estimation Contents1. [Overview](overview) 1.1 [Intuition](intuition) 1.2 [Mathematical Basis](maths)2. [Example: T-gate](example_t_gate) 2.1 [Creating the Circuit](creating_the_circuit) 2.2 [Results](results) 3. [Getting More Precision](getting_more_precision) 3.1 [The Problem](the_problem) 3.2 [The Solution](the_solution) 4. [Experimenting on Real Devices](real_devices) 4.1 [With the Circuit from 2.1](circuit_2.1) 5. [Exercises](exercises) 6. [Looking Forward](looking_forward)7. [References](references)8. [Contributors](contributors) Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1. 1. Overview The general quantum circuit for phase estimation is shown below. The top register contains $t$ 'counting' qubits, and the bottom contains qubits in the state $|\psi\rangle$: 1.1 Intuition The quantum phase estimation algorithm uses phase kickback to write the phase of $U$ (in the Fourier basis) to the $t$ qubits in the counting register. We then use the inverse QFT to translate this from the Fourier basis into the computational basis, which we can measure.We remember (from the QFT chapter) that in the Fourier basis the topmost qubit completes one full rotation when counting between $0$ and $2^t$. To count to a number, $x$ between $0$ and $2^t$, we rotate this qubit by $\tfrac{x}{2^t}$ around the z-axis. For the next qubit we rotate by $\tfrac{2x}{2^t}$, then $\tfrac{4x}{2^t}$ for the third qubit.When we use a qubit to control the $U$-gate, the qubit will turn (due to kickback) proportionally to the phase $e^{2i\pi\theta}$. We can use successive $CU$-gates to repeat this rotation an appropriate number of times until we have encoded the phase theta as a number between $0$ and $2^t$ in the Fourier basis. Then we simply use $QFT^\dagger$ to convert this into the computational basis. 1.2 Mathematical Foundation As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:i. **Setup**: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form the counting register on which we will store the value $2^n\theta$: $$ \psi_0 = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$ ii. **Superposition**: Apply a $n$-bit Hadamard gate operation $H^{\otimes n}$ on the counting register: $$ \psi_1 = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$iii. **Controlled Unitary Operations**: We need to introduce the controlled unitary $C-U$ that applies the unitary operator $U$ on the target register only if its corresponding control bit is $|1\rangle$. 
Since $U$ is a unitary operator with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means: $$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$Applying all the $n$ controlled operations $C − U^{2^j}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:\begin{aligned}\psi_{2} & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\& = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle\end{aligned}where $k$ denotes the integer representation of n-bit binary numbers. iv. **Inverse Fourier Transform**: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on [Quantum Fourier Transform and its Qiskit Implementation](qft.ipynb). Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as$$QFT\vert x \rangle = \frac{1}{2^\frac{n}{2}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) $$Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step 2 above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the ancilla register. Doing so, we find$$\vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle$$ v. **Measurement**: The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring in the computational basis gives the phase in the ancilla register with high probability: $$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1]. 2. Example: T-gate Let’s take a gate we know well, the $T$-gate, and use Quantum Phase Estimation to estimate its phase. You will remember that the $T$-gate adds a phase of $e^\frac{i\pi}{4}$ to the state $|1\rangle$:$$ T|1\rangle = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{4}\\ \end{bmatrix}\begin{bmatrix}0\\1\\ \end{bmatrix}= e^\frac{i\pi}{4}|1\rangle $$Since QPE will give us $\theta$ where:$$ T|1\rangle = e^{2i\pi\theta}|1\rangle $$We expect to find:$$\theta = \frac{1}{8}$$In this example we will use three qubits and obtain an _exact_ result (not an estimation!) 2.1 Creating the Circuit Let's first prepare our environment:
###Code
#initialization
import matplotlib.pyplot as plt
import numpy as np
import math
# importing Qiskit
from qiskit import IBMQ, Aer
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
# import basic plot tools
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
###Markdown
Now, set up the quantum circuit. We will use four qubits -- qubits 0 to 2 as counting qubits, and qubit 3 as the eigenstate of the unitary operator ($T$). We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate:
###Code
qpe = QuantumCircuit(4, 3)
qpe.x(3)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next, we apply Hadamard gates to the counting qubits:
###Code
for qubit in range(3):
qpe.h(qubit)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next we perform the controlled unitary operations. **Remember:** Qiskit orders its qubits the opposite way round to the image above.
###Code
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe.cp(math.pi/4, counting_qubit, 3); # This is C-U
repetitions *= 2
qpe.draw()
###Output
_____no_output_____
###Markdown
We apply the inverse quantum Fourier transformation to convert the state of the counting register. Here we provide the code for $QFT^\dagger$:
###Code
def qft_dagger(qc, n):
"""n-qubit QFTdagger the first n qubits in circ"""
# Don't forget the Swaps!
for qubit in range(n//2):
qc.swap(qubit, n-qubit-1)
for j in range(n):
for m in range(j):
qc.cp(-math.pi/float(2**(j-m)), m, j)
qc.h(j)
###Output
_____no_output_____
###Markdown
We then measure the counting register:
###Code
qpe.barrier()
# Apply inverse QFT
qft_dagger(qpe, 3)
# Measure
qpe.barrier()
for n in range(3):
qpe.measure(n,n)
qpe.draw()
###Output
_____no_output_____
###Markdown
2.2 Results
###Code
backend = Aer.get_backend('qasm_simulator')
shots = 2048
results = execute(qpe, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
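###Markdown
Shot noise aside, we can also ask for the exact outcome probabilities (a sketch using `qiskit.quantum_info.Statevector`): copy the circuit, strip the final measurements, and look at the marginal distribution over the three counting qubits.
###Code
from qiskit.quantum_info import Statevector
exact = qpe.copy()
exact.remove_final_measurements()               # drop the measurements (and their clbits)
sv = Statevector.from_instruction(exact)
print(sv.probabilities_dict(qargs=[0, 1, 2]))   # expect essentially all of the weight on '001'
###Output
_____no_output_____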
###Markdown
We see we get one result (`001`) with certainty, which translates to the decimal: `1`. We now need to divide our result (`1`) by $2^n$ to get $\theta$:$$ \theta = \frac{1}{2^3} = \frac{1}{8} $$This is exactly the result we expected! 3. Example: Getting More Precision 3.1 The Problem Instead of a $T$-gate, let’s use a gate with $\theta = \frac{1}{3}$. We set up our circuit as with the last example:
###Code
# Create and set up circuit
qpe2 = QuantumCircuit(4, 3)
# Apply H-Gates to counting qubits:
for qubit in range(3):
qpe2.h(qubit)
# Prepare our eigenstate |psi>:
qpe2.x(3)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe2.cp(angle, counting_qubit, 3);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe2, 3)
# Measure of course!
for n in range(3):
qpe2.measure(n,n)
qpe2.draw()
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe2, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We are expecting the result $\theta = 0.3333\dots$, and we see our most likely results are `010(bin) = 2(dec)` and `011(bin) = 3(dec)`. These two results would tell us that $\theta = 0.25$ (off by 25%) and $\theta = 0.375$ (off by 13%) respectively. The true value of $\theta$ lies between the values we can get from our counting bits, and this gives us uncertainty and imprecision. 3.2 The Solution To get more precision we simply add more counting qubits. We are going to add two more counting qubits:
###Code
# Create and set up circuit
qpe3 = QuantumCircuit(6, 5)
# Apply H-Gates to counting qubits:
for qubit in range(5):
qpe3.h(qubit)
# Prepare our eigenstate |psi>:
qpe3.x(5)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(5):
for i in range(repetitions):
qpe3.cp(angle, counting_qubit, 5);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe3, 5)
# Measure of course!
qpe3.barrier()
for n in range(5):
qpe3.measure(n,n)
qpe3.draw()
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe3, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
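###Markdown
With five counting qubits the grid of representable phases is four times finer than with three (spacing $1/2^5 = 0.03125$). As a quick check, assuming the `answer` counts from the cell above, we can print the two leading estimates and their errors:
###Code
# Compare the two most frequent 5-bit outcomes against theta = 1/3.
true_theta = 1/3
n_count = 5
for bits, c in sorted(answer.items(), key=lambda kv: -kv[1])[:2]:
    est = int(bits, 2) / 2**n_count
    print(f"{bits} ({int(bits, 2)} dec): theta = {est:.4f}, error = {abs(est - true_theta):.4f}")
###Output
_____no_output_____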
###Markdown
The two most likely measurements are now `01011` (decimal 11) and `01010` (decimal 10). Measuring these results would tell us $\theta$ is:$$\theta = \frac{11}{2^5} = 0.344,\;\text{ or }\;\; \theta = \frac{10}{2^5} = 0.313$$These two results differ from $\frac{1}{3}$ by 3% and 6% respectively. A much better precision! 4. Experiment with Real Devices 4.1 Circuit from 2.1 We can run the circuit in section 2.1 on a real device, let's remind ourselves of the circuit:
###Code
qpe.draw()
IBMQ.load_account()
from qiskit.tools.monitor import job_monitor
provider = IBMQ.get_provider(hub='ibm-q')
backend = provider.get_backend('ibmq_santiago')
# Run with 2048 shots
shots = 2048
job = execute(qpe, backend=backend, shots=2048, optimization_level=3)
job_monitor(job)
# get the results from the computation
results = job.result()
answer = results.get_counts(qpe)
plot_histogram(answer)
###Output
_____no_output_____
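###Markdown
As a rough noise check (our own addition, assuming the `answer` counts returned by the job above), we can ask how often the hardware agreed with the ideal answer `001`:
###Code
# Fraction of hardware shots that returned the ideal result '001'.
success_rate = answer.get('001', 0) / sum(answer.values())
print(f"P('001') on hardware: {success_rate:.3f}")
###Output
_____no_output_____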
###Markdown
We can hopefully see that the most likely result is `001` which is the result we would expect from the simulator. Unlike the simulator, there is a probability of measuring something other than `001`, this is due to noise and gate errors in the quantum computer. 5. Exercises 1. Try the experiments above with different gates ($\text{CNOT}$, Controlled-$S$, Controlled-$T^\dagger$), what results do you expect? What results do you get?2. Try the experiment with a Controlled-$Y$-gate, do you get the result you expected? (Hint: Remember to make sure $|\psi\rangle$ is an eigenstate of $Y$!) 6. Looking Forward The quantum phase estimation algorithm may seem pointless, since we have to know $\theta$ to perform the controlled-$U$ operations on our quantum computer. We will see in later chapters that it is possible to create circuits for which we don’t know $\theta$, and for which learning theta can tell us something very useful (most famously how to factor a number!) 7. References [1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA. 8. Contributors 03/20/2020 — Hwajung Kang (@HwajungKang) — Fixed inconsistencies with qubit ordering
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Quantum Phase Estimation Contents1. [Overview](overview) 1.1 [Intuition](intuition) 1.2 [Mathematical Basis](maths)2. [Example: T-gate](example_t_gate) 2.1 [Creating the Circuit](creating_the_circuit) 2.2 [Results](results) 3. [Getting More Precision](getting_more_precision) 3.1 [The Problem](the_problem) 3.2 [The Solution](the_solution) 4. [Experimenting on Real Devices](real_devices) 4.1 [With the Circuit from 2.1](circuit_2.1) 5. [Exercises](exercises) 6. [Looking Forward](looking_forward)7. [References](references)8. [Contributors](contributors) Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1. 1. Overview The general quantum circuit for phase estimation is shown below. The top register contains $t$ 'counting' qubits, and the bottom contains qubits in the state $|\psi\rangle$: 1.1 Intuition The quantum phase estimation algorithm uses phase kickback to write the phase of $U$ (in the Fourier basis) to the $t$ qubits in the counting register. We then use the inverse QFT to translate this from the Fourier basis into the computational basis, which we can measure.We remember (from the QFT chapter) that in the Fourier basis the topmost qubit completes one full rotation when counting between $0$ and $2^t$. To count to a number, $x$ between $0$ and $2^t$, we rotate this qubit by $\tfrac{x}{2^t}$ around the z-axis. For the next qubit we rotate by $\tfrac{2x}{2^t}$, then $\tfrac{4x}{2^t}$ for the third qubit.When we use a qubit to control the $U$-gate, the qubit will turn (due to kickback) proportionally to the phase $e^{2i\pi\theta}$. We can use successive $CU$-gates to repeat this rotation an appropriate number of times until we have encoded the phase theta as a number between $0$ and $2^t$ in the Fourier basis. Then we simply use $QFT^\dagger$ to convert this into the computational basis. 1.2 Mathematical Foundation As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:i. **Setup**: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form the counting register on which we will store the value $2^n\theta$: $$ \psi_0 = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$ ii. **Superposition**: Apply a $n$-bit Hadamard gate operation $H^{\otimes n}$ on the counting register: $$ \psi_1 = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$iii. **Controlled Unitary Operations**: We need to introduce the controlled unitary $C-U$ that applies the unitary operator $U$ on the target register only if its corresponding control bit is $|1\rangle$. 
Since $U$ is a unitary operator with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means: $$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$Applying all the $n$ controlled operations $C − U^{2^j}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:\begin{aligned}\psi_{2} & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\& = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle\end{aligned}where $k$ denotes the integer representation of n-bit binary numbers. iv. **Inverse Fourier Transform**: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on [Quantum Fourier Transform and its Qiskit Implementation](qft.ipynb). Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as$$QFT\vert x \rangle = \frac{1}{2^\frac{n}{2}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) $$Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step 2 above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the ancilla register. Doing so, we find$$\vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle$$ v. **Measurement**: The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring in the computational basis gives the phase in the ancilla register with high probability: $$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1]. 2. Example: T-gate Let’s take a gate we know well, the $T$-gate, and use Quantum Phase Estimation to estimate its phase. You will remember that the $T$-gate adds a phase of $e^\frac{i\pi}{4}$ to the state $|1\rangle$:$$ T|1\rangle = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{4}\\ \end{bmatrix}\begin{bmatrix}0\\1\\ \end{bmatrix}= e^\frac{i\pi}{4}|1\rangle $$Since QPE will give us $\theta$ where:$$ T|1\rangle = e^{2i\pi\theta}|1\rangle $$We expect to find:$$\theta = \frac{1}{8}$$In this example we will use three qubits and obtain an _exact_ result (not an estimation!) 2.1 Creating the Circuit Let's first prepare our environment:
###Code
#initialization
import matplotlib.pyplot as plt
import numpy as np
import math
# importing Qiskit
from qiskit import IBMQ, Aer
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
# import basic plot tools
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
###Markdown
Now, set up the quantum circuit. We will use four qubits -- qubits 0 to 2 as counting qubits, and qubit 3 as the eigenstate of the unitary operator ($T$). We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate:
###Code
qpe = QuantumCircuit(4, 3)
qpe.x(3)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next, we apply Hadamard gates to the counting qubits:
###Code
for qubit in range(3):
qpe.h(qubit)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next we perform the controlled unitary operations. **Remember:** Qiskit orders its qubits the opposite way round to the image above.
###Code
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe.cu1(math.pi/4, counting_qubit, 3); # This is C-U
repetitions *= 2
qpe.draw()
###Output
_____no_output_____
###Markdown
We apply the inverse quantum Fourier transformation to convert the state of the counting register. Here we provide the code for $QFT^\dagger$:
###Code
def qft_dagger(circ, n):
"""n-qubit QFTdagger the first n qubits in circ"""
# Don't forget the Swaps!
for qubit in range(n//2):
circ.swap(qubit, n-qubit-1)
for j in range(n):
for m in range(j):
circ.cu1(-math.pi/float(2**(j-m)), m, j)
circ.h(j)
###Output
_____no_output_____
###Markdown
We then measure the counting register:
###Code
qpe.barrier()
# Apply inverse QFT
qft_dagger(qpe, 3)
# Measure
qpe.barrier()
for n in range(3):
qpe.measure(n,n)
qpe.draw()
###Output
_____no_output_____
###Markdown
2.2 Results
###Code
backend = Aer.get_backend('qasm_simulator')
shots = 2048
results = execute(qpe, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We see we get one result (`001`) with certainty, which translates to the decimal: `1`. We now need to divide our result (`1`) by $2^n$ to get $\theta$:$$ \theta = \frac{1}{2^3} = \frac{1}{8} $$This is exactly the result we expected! 3. Example: Getting More Precision 3.1 The Problem Instead of a $T$-gate, let’s use a gate with $\theta = \frac{1}{3}$. We set up our circuit as with the last example:
###Code
# Create and set up circuit
qpe2 = QuantumCircuit(4, 3)
# Apply H-Gates to counting qubits:
for qubit in range(3):
qpe2.h(qubit)
# Prepare our eigenstate |psi>:
qpe2.x(3)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe2.cu1(angle, counting_qubit, 3);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe2, 3)
# Measure of course!
for n in range(3):
qpe2.measure(n,n)
qpe2.draw()
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe2, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We are expecting the result $\theta = 0.3333\dots$, and we see our most likely results are `010(bin) = 2(dec)` and `011(bin) = 3(dec)`. These two results would tell us that $\theta = 0.25$ (off by 25%) and $\theta = 0.375$ (off by 13%) respectively. The true value of $\theta$ lies between the values we can get from our counting bits, and this gives us uncertainty and imprecision. 3.2 The Solution To get more precision we simply add more counting qubits. We are going to add two more counting qubits:
###Code
# Create and set up circuit
qpe3 = QuantumCircuit(6, 5)
# Apply H-Gates to counting qubits:
for qubit in range(5):
qpe3.h(qubit)
# Prepare our eigenstate |psi>:
qpe3.x(5)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(5):
for i in range(repetitions):
qpe3.cu1(angle, counting_qubit, 5);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe3, 5)
# Measure of course!
qpe3.barrier()
for n in range(5):
qpe3.measure(n,n)
qpe3.draw()
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe3, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
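###Markdown
The histogram above is already much sharper. As a general rule of thumb, Nielsen and Chuang [1] show that obtaining $\theta$ to $n$ bits of accuracy with success probability at least $1-\varepsilon$ requires $t = n + \lceil \log_2(2 + \tfrac{1}{2\varepsilon}) \rceil$ counting qubits. The helper below is a small sketch of that bound; the function name is our own, not part of the original notebook:
###Code
import math

def counting_qubits_needed(accuracy_bits, epsilon):
    """Counting qubits for `accuracy_bits` bits of accuracy with
    failure probability at most `epsilon` (bound from [1])."""
    return accuracy_bits + math.ceil(math.log2(2 + 1 / (2 * epsilon)))

# e.g. 5 accurate bits of theta with at least 90% success probability:
print(counting_qubits_needed(5, 0.1))
###Output
_____no_output_____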
###Markdown
The two most likely measurements are now `01011` (decimal 11) and `01010` (decimal 10). Measuring these results would tell us $\theta$ is:$$\theta = \frac{11}{2^5} = 0.344,\;\text{ or }\;\; \theta = \frac{10}{2^5} = 0.313$$These two results differ from $\frac{1}{3}$ by 3% and 6% respectively. A much better precision! 4. Experiment with Real Devices 4.1 Circuit from 2.1 We can run the circuit in section 2.1 on a real device, let's remind ourselves of the circuit:
###Code
qpe.draw()
# Load our saved IBMQ accounts and get the least busy backend device with less than or equal to n qubits
IBMQ.load_account()
from qiskit.providers.ibmq import least_busy
from qiskit.tools.monitor import job_monitor
provider = IBMQ.get_provider(hub='ibm-q')
backend = provider.get_backend('ibmq_vigo')
# Run with 2048 shots
shots = 2048
job = execute(qpe, backend=backend, shots=2048, optimization_level=3)
job_monitor(job)
# get the results from the computation
results = job.result()
answer = results.get_counts(qpe)
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We can hopefully see that the most likely result is `001` which is the result we would expect from the simulator. Unlike the simulator, there is a probability of measuring something other than `001`, this is due to noise and gate errors in the quantum computer. 5. Exercises 1. Try the experiments above with different gates ($\text{CNOT}$, $S$, $T^\dagger$), what results do you expect? What results do you get?2. Try the experiment with a $Y$-gate, do you get the correct result? (Hint: Remember to make sure $|\psi\rangle$ is an eigenstate of $Y$!) 6. Looking Forward The quantum phase estimation algorithm may seem pointless, since we have to know $\theta$ to perform the controlled-$U$ operations on our quantum computer. We will see in later chapters that it is possible to create circuits for which we don’t know $\theta$, and for which learning theta can tell us something very useful (most famously how to factor a number!) 7. References [1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA. 8. Contributors 03/20/2020 — Hwajung Kang (@HwajungKang) — Fixed inconsistencies with qubit ordering
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Quantum Phase Estimation Contents1. [Overview](overview) 1.1 [Intuition](intuition) 1.2 [Mathematical Basis](maths)2. [Example: T-gate](example_t_gate) 2.1 [Creating the Circuit](creating_the_circuit) 2.2 [Results](results) 3. [Getting More Precision](getting_more_precision) 3.1 [The Problem](the_problem) 3.2 [The Solution](the_solution) 4. [Experimenting on Real Devices](real_devices) 4.1 [With the Circuit from 2.1](circuit_2.1) 5. [Exercises](exercises) 6. [Looking Forward](looking_forward)7. [References](references)8. [Contributors](contributors) Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1. 1. Overview The general quantum circuit for phase estimation is shown below. The top register contains $t$ 'counting' qubits, and the bottom contains qubits in the state $|\psi\rangle$: 1.1 Intuition The quantum phase estimation algorithm uses phase kickback to write the phase of $U$ (in the Fourier basis) to the $t$ qubits in the counting register. We then use the inverse QFT to translate this from the Fourier basis into the computational basis, which we can measure.We remember (from the QFT chapter) that in the Fourier basis the topmost qubit completes one full rotation when counting between $0$ and $2^t$. To count to a number, $x$ between $0$ and $2^t$, we rotate this qubit by $\tfrac{x}{2^t}$ around the z-axis. For the next qubit we rotate by $\tfrac{2x}{2^t}$, then $\tfrac{4x}{2^t}$ for the third qubit.When we use a qubit to control the $U$-gate, the qubit will turn (due to kickback) proportionally to the phase $e^{2i\pi\theta}$. We can use successive $CU$-gates to repeat this rotation an appropriate number of times until we have encoded the phase theta as a number between $0$ and $2^t$ in the Fourier basis. Then we simply use $QFT^\dagger$ to convert this into the computational basis. 1.2 Mathematical Foundation As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:i. **Setup**: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form the counting register on which we will store the value $2^n\theta$: $$ |\psi_0\rangle = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$ ii. **Superposition**: Apply a $n$-bit Hadamard gate operation $H^{\otimes n}$ on the counting register: $$ |\psi_1\rangle = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$iii. **Controlled Unitary Operations**: We need to introduce the controlled unitary $CU$ that applies the unitary operator $U$ on the target register only if its corresponding control bit is $|1\rangle$. 
Since $U$ is a unitary operator with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means: $$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$Applying all the $n$ controlled operations $CU^{2^j}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:\begin{aligned}|\psi_{2}\rangle & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\& = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle\end{aligned}where $k$ denotes the integer representation of n-bit binary numbers. iv. **Inverse Fourier Transform**: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on [Quantum Fourier Transform and its Qiskit Implementation](https://qiskit.org/textbook/ch-algorithms/quantum-fourier-transform.html). Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as$$QFT\vert x \rangle = \frac{1}{2^\frac{n}{2}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) $$Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step 2 above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the auxiliary register. Doing so, we find$$\vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle$$ v. **Measurement**: The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring in the computational basis gives the phase in the auxiliary register with high probability: $$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1]. 2. Example: T-gate Let’s take a gate we know well, the $T$-gate, and use Quantum Phase Estimation to estimate its phase. You will remember that the $T$-gate adds a phase of $e^\frac{i\pi}{4}$ to the state $|1\rangle$:$$ T|1\rangle = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{4}\\ \end{bmatrix}\begin{bmatrix}0\\1\\ \end{bmatrix}= e^\frac{i\pi}{4}|1\rangle $$Since QPE will give us $\theta$ where:$$ T|1\rangle = e^{2i\pi\theta}|1\rangle $$We expect to find:$$\theta = \frac{1}{8}$$In this example we will use three qubits and obtain an _exact_ result (not an estimation!) 2.1 Creating the Circuit Let's first prepare our environment:
###Code
#initialization
import matplotlib.pyplot as plt
import numpy as np
import math
# importing Qiskit
from qiskit import IBMQ, Aer, transpile, assemble
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
# import basic plot tools
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
###Markdown
Now, set up the quantum circuit. We will use four qubits -- qubits 0 to 2 as counting qubits, and qubit 3 as the eigenstate of the unitary operator ($T$). We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate:
###Code
qpe = QuantumCircuit(4, 3)
qpe.x(3)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next, we apply Hadamard gates to the counting qubits:
###Code
for qubit in range(3):
qpe.h(qubit)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next we perform the controlled unitary operations. **Remember:** Qiskit orders its qubits the opposite way round to the image above.
###Code
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe.cp(math.pi/4, counting_qubit, 3); # This is CU
repetitions *= 2
qpe.draw()
###Output
_____no_output_____
###Markdown
We apply the inverse quantum Fourier transformation to convert the state of the counting register. Here we provide the code for $QFT^\dagger$:
###Code
def qft_dagger(qc, n):
"""n-qubit QFTdagger the first n qubits in circ"""
# Don't forget the Swaps!
for qubit in range(n//2):
qc.swap(qubit, n-qubit-1)
for j in range(n):
for m in range(j):
qc.cp(-math.pi/float(2**(j-m)), m, j)
qc.h(j)
###Output
_____no_output_____
###Markdown
We then measure the counting register:
###Code
qpe.barrier()
# Apply inverse QFT
qft_dagger(qpe, 3)
# Measure
qpe.barrier()
for n in range(3):
qpe.measure(n,n)
qpe.draw()
###Output
_____no_output_____
###Markdown
2.2 Results
###Code
aer_sim = Aer.get_backend('aer_simulator')
shots = 2048
t_qpe = transpile(qpe, aer_sim)
qobj = assemble(t_qpe, shots=shots)
results = aer_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We see we get one result (`001`) with certainty, which translates to the decimal: `1`. We now need to divide our result (`1`) by $2^n$ to get $\theta$:$$ \theta = \frac{1}{2^3} = \frac{1}{8} $$This is exactly the result we expected! 3. Example: Getting More Precision 3.1 The Problem Instead of a $T$-gate, let’s use a gate with $\theta = \frac{1}{3}$. We set up our circuit as with the last example:
###Code
# Create and set up circuit
qpe2 = QuantumCircuit(4, 3)
# Apply H-Gates to counting qubits:
for qubit in range(3):
qpe2.h(qubit)
# Prepare our eigenstate |psi>:
qpe2.x(3)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe2.cp(angle, counting_qubit, 3);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe2, 3)
# Measure of course!
for n in range(3):
qpe2.measure(n,n)
qpe2.draw()
# Let's see the results!
aer_sim = Aer.get_backend('aer_simulator')
shots = 4096
t_qpe2 = transpile(qpe2, aer_sim)
qobj = assemble(t_qpe2, shots=shots)
results = aer_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
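###Markdown
As an optional cross-check (our own addition, not part of the original notebook), the ideal QPE outcome distribution has a closed form, $P(x) = \frac{\sin^2(\pi N \delta)}{N^2 \sin^2(\pi \delta)}$ with $\delta = \theta - x/N$ and $N = 2^n$ (with $P(x) = 1$ in the limit $\delta \to 0$). Assuming the `answer` counts from the previous cell, we can compare the simulated frequencies against this formula:
###Code
import numpy as np

# Exact QPE outcome distribution for theta = 1/3 and a 3-qubit counting register.
theta, n_count = 1/3, 3
N = 2**n_count
deltas = theta - np.arange(N) / N     # theta is not on the grid, so no delta is exactly 0
probs = np.sin(np.pi * N * deltas)**2 / (N * np.sin(np.pi * deltas))**2
total = sum(answer.values())
for x in range(N):
    bits = format(x, f'0{n_count}b')
    print(f"{bits}: ideal {probs[x]:.3f}   measured {answer.get(bits, 0) / total:.3f}")
###Output
_____no_output_____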
###Markdown
We are expecting the result $\theta = 0.3333\dots$, and we see our most likely results are `010(bin) = 2(dec)` and `011(bin) = 3(dec)`. These two results would tell us that $\theta = 0.25$ (off by 25%) and $\theta = 0.375$ (off by 13%) respectively. The true value of $\theta$ lies between the values we can get from our counting bits, and this gives us uncertainty and imprecision. 3.2 The Solution To get more precision we simply add more counting qubits. We are going to add two more counting qubits:
###Code
# Create and set up circuit
qpe3 = QuantumCircuit(6, 5)
# Apply H-Gates to counting qubits:
for qubit in range(5):
qpe3.h(qubit)
# Prepare our eigenstate |psi>:
qpe3.x(5)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(5):
for i in range(repetitions):
qpe3.cp(angle, counting_qubit, 5);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe3, 5)
# Measure of course!
qpe3.barrier()
for n in range(5):
qpe3.measure(n,n)
qpe3.draw()
# Let's see the results!
aer_sim = Aer.get_backend('aer_simulator')
shots = 4096
t_qpe3 = transpile(qpe3, aer_sim)
qobj = assemble(t_qpe3, shots=shots)
results = aer_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
The two most likely measurements are now `01011` (decimal 11) and `01010` (decimal 10). Measuring these results would tell us $\theta$ is:$$\theta = \frac{11}{2^5} = 0.344,\;\text{ or }\;\; \theta = \frac{10}{2^5} = 0.313$$These two results differ from $\frac{1}{3}$ by 3% and 6% respectively. A much better precision! 4. Experiment with Real Devices 4.1 Circuit from 2.1 We can run the circuit in section 2.1 on a real device, let's remind ourselves of the circuit:
###Code
qpe.draw()
IBMQ.load_account()
from qiskit.tools.monitor import job_monitor
provider = IBMQ.get_provider(hub='ibm-q')
santiago = provider.get_backend('ibmq_santiago')
# Run with 2048 shots
shots = 2048
t_qpe = transpile(qpe, santiago, optimization_level=3)
job = santiago.run(t_qpe, shots=shots)
job_monitor(job)
# get the results from the computation
results = job.result()
answer = results.get_counts(qpe)
plot_histogram(answer)
###Output
_____no_output_____
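###Markdown
As a rough, illustrative noise check (our own addition, assuming the `answer` counts from the job above), we can rank the hardware outcomes and see how much probability mass lands on `001` and its nearest neighbours on the 3-bit grid:
###Code
# Rank the hardware outcomes and report the probability mass near the ideal answer.
total = sum(answer.values())
for bits, c in sorted(answer.items(), key=lambda kv: -kv[1])[:4]:
    print(f"{bits}: {c / total:.3f}")
near_ideal = sum(answer.get(format(x, '03b'), 0) for x in (0, 1, 2)) / total
print(f"P(outcome within one grid point of '001'): {near_ideal:.3f}")
###Output
_____no_output_____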
###Markdown
We can hopefully see that the most likely result is `001` which is the result we would expect from the simulator. Unlike the simulator, there is a probability of measuring something other than `001`, this is due to noise and gate errors in the quantum computer. 5. Exercises 1. Try the experiments above with different gates ($\text{CNOT}$, Controlled-$S$, Controlled-$T^\dagger$), what results do you expect? What results do you get?2. Try the experiment with a Controlled-$Y$-gate, do you get the result you expected? (Hint: Remember to make sure $|\psi\rangle$ is an eigenstate of $Y$!) 6. Looking Forward The quantum phase estimation algorithm may seem pointless, since we have to know $\theta$ to perform the controlled-$U$ operations on our quantum computer. We will see in later chapters that it is possible to create circuits for which we don’t know $\theta$, and for which learning theta can tell us something very useful (most famously how to factor a number!) 7. References [1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA. 8. Contributors 03/20/2020 — Hwajung Kang (@HwajungKang) — Fixed inconsistencies with qubit ordering
###Code
import qiskit.tools.jupyter
%qiskit_version_table
###Output
_____no_output_____
###Markdown
Quantum Phase Estimation Contents1. [Overview](overview) 1.1 [Intuition](intuition) 1.2 [Mathematical Basis](maths)2. [Example: T-gate](example_t_gate) 2.1 [Creating the Circuit](creating_the_circuit) 2.2 [Results](results) 3. [Getting More Precision](getting_more_precision) 3.1 [The Problem](the_problem) 3.2 [The Solution](the_solution) 4. [Experimenting on Real Devices](real_devices) 4.1 [With the Circuit from 2.1](circuit_2.1) 5. [Exercises](exercises) 6. [Looking Forward](looking_forward)7. [References](references)8. [Contributors](contributors) Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1. 1. Overview The general quantum circuit for phase estimation is shown below. The top register contains $t$ 'counting' qubits, and the bottom contains qubits in the state $|\psi\rangle$: 1.1 Intuition The quantum phase estimation algorithm uses phase kickback to write the phase of $U$ (in the Fourier basis) to the $t$ qubits in the counting register. We then use the inverse QFT to translate this from the Fourier basis into the computational basis, which we can measure.We remember (from the QFT chapter) that in the Fourier basis the topmost qubit completes one full rotation when counting between $0$ and $2^t$. To count to a number, $x$ between $0$ and $2^t$, we rotate this qubit by $\tfrac{x}{2^t}$ around the z-axis. For the next qubit we rotate by $\tfrac{2x}{2^t}$, then $\tfrac{4x}{2^t}$ for the third qubit.When we use a qubit to control the $U$-gate, the qubit will turn (due to kickback) proportionally to the phase $e^{2i\pi\theta}$. We can use successive $CU$-gates to repeat this rotation an appropriate number of times until we have encoded the phase theta as a number between $0$ and $2^t$ in the Fourier basis. Then we simply use $QFT^\dagger$ to convert this into the computational basis. 1.2 Mathematical Basis As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:0. **Setup**: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form the counting register on which we will store the value $2^n\theta$: $$ \psi_0 = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$ 1. **Superposition**: Apply a $n$-bit Hadamard gate operation $H^{\otimes n}$ on the counting register: $$ \psi_1 = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$2. **Controlled Unitary Operations**: We need to introduce the controlled unitary $C-U$ that applies the unitary operator $U$ on the target register only if its corresponding control bit is $|1\rangle$. 
Since $U$ is a unitary operatory with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means: $$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$Applying all the $n$ controlled operations $C − U^{2^j}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:\begin{aligned}\psi_{2} & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\& = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle\end{aligned}where $k$ denotes the integer representation of n-bit binary numbers. 3. **Inverse Fourier Transform**: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on [Quantum Fourier Transform and its Qiskit Implementation](qft.ipynb). Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as$$QFT\vert x \rangle = \frac{1}{2^\frac{n}{2}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) $$Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step 2 above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the ancilla register. Doing so, we find$$\vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle$$ 4. **Measurement**: The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring in the computational basis gives the phase in the ancilla register with high probability: $$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1]. 2. Example: T-gate Let’s take a gate we know well, the $T$-gate, and use Quantum Phase Estimation to estimate its phase. You will remember that the $T$-gate adds a phase of $e^\frac{i\pi}{4}$ to the state $|1\rangle$:$$ T|1\rangle = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{4}\\ \end{bmatrix}\begin{bmatrix}0\\1\\ \end{bmatrix}= e^\frac{i\pi}{4}|1\rangle $$Since QPE will give us $\theta$ where:$$ T|1\rangle = e^{2i\pi\theta}|1\rangle $$We expect to find:$$\theta = \frac{1}{8}$$In this example we will use three qubits and obtain an _exact_ result (not an estimation!) 2.1 Creating the Circuit Let's first prepare our environment:
###Code
#initialization
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'svg' # Makes the images look nice
import numpy as np
import math
# importing Qiskit
from qiskit import IBMQ, Aer
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
# import basic plot tools
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
###Markdown
Now, set up the quantum circuit. We will use four qubits -- qubits 0 to 2 as counting qubits, and qubit 3 as the eigenstate of the unitary operator ($T$). We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate:
###Code
qpe = QuantumCircuit(4, 3)
qpe.x(3)
qpe.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Next, we apply Hadamard gates to the counting qubits:
###Code
for qubit in range(3):
qpe.h(qubit)
qpe.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Next we perform the controlled unitary operations:
###Code
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe.cu1(math.pi/4, counting_qubit, 3); # This is C-U
repetitions *= 2
qpe.draw(output='mpl')
###Output
_____no_output_____
###Markdown
We apply the inverse quantum Fourier transformation to convert the state of the counting register. Here we provide the code for $QFT^\dagger$:
###Code
def qft_dagger(circ, n):
"""n-qubit QFTdagger the first n qubits in circ"""
# Don't forget the Swaps!
for qubit in range(n//2):
circ.swap(qubit, n-qubit-1)
for j in range(n):
for m in range(j):
circ.cu1(-math.pi/float(2**(j-m)), m, j)
circ.h(j)
###Output
_____no_output_____
###Markdown
We then measure the counting register:
###Code
qpe.barrier()
# Apply inverse QFT
qft_dagger(qpe, 3)
# Measure
qpe.barrier()
for n in range(3):
qpe.measure(n,n)
%config InlineBackend.figure_format = 'png' # stops the images getting too big
qpe.draw(output="mpl")
###Output
_____no_output_____
###Markdown
2.2 Results
###Code
backend = Aer.get_backend('qasm_simulator')
shots = 2048
results = execute(qpe, backend=backend, shots=shots).result()
answer = results.get_counts()
%config InlineBackend.figure_format = 'svg' # Image formatting again
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We see we get one result (`001`) with certainty, which translates to the decimal: `1`. We now need to divide our result (`1`) by $2^n$ to get $\theta$:$$ \theta = \frac{1}{2^3} = \frac{1}{8} $$This is exactly the result we expected! 3. Example: Getting More Precision 3.1 The Problem Instead of a $T$-gate, let’s use a gate with $\theta = \frac{1}{3}$. We set up our circuit as with the last example:
###Code
# Create and set up circuit
qpe2 = QuantumCircuit(4, 3)
# Apply H-Gates to counting qubits:
for qubit in range(3):
qpe2.h(qubit)
# Prepare our eigenstate |psi>:
qpe2.x(3)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe2.cu1(angle, counting_qubit, 3);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe2, 3)
# Measure of course!
for n in range(3):
qpe2.measure(n,n)
%config InlineBackend.figure_format = 'png' # stops the images getting too big
qpe2.draw(output='mpl')
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe2, backend=backend, shots=shots).result()
answer = results.get_counts()
%config InlineBackend.figure_format = 'svg'
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We are expecting the result $\theta = 0.3333\dots$, and we see our most likely results are `010(bin) = 2(dec)` and `011(bin) = 3(dec)`. These two results would tell us that $\theta = 0.25$ (off by 25%) and $\theta = 0.375$ (off by 13%) respectively. The true value of $\theta$ lies between the values we can get from our counting bits, and this gives us uncertainty and imprecision. 3.2 The Solution To get more precision we simply add more counting qubits. We are going to add two more counting qubits:
###Code
# Create and set up circuit
qpe3 = QuantumCircuit(6, 5)
# Apply H-Gates to counting qubits:
for qubit in range(5):
qpe3.h(qubit)
# Prepare our eigenstate |psi>:
qpe3.x(5)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(5):
for i in range(repetitions):
qpe3.cu1(angle, counting_qubit, 5);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe3, 5)
# Measure of course!
for n in range(5):
qpe3.measure(n,n)
%config InlineBackend.figure_format = 'png'
qpe3.draw(output='mpl')
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe3, backend=backend, shots=shots).result()
answer = results.get_counts()
%config InlineBackend.figure_format = 'svg'
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
The two most likely measurements are now `01011` (decimal 11) and `01010` (decimal 10). Measuring these results would tell us $\theta$ is:$$\theta = \frac{11}{2^5} = 0.344,\;\text{ or }\;\; \theta = \frac{10}{2^5} = 0.313$$These two results differ from $\frac{1}{3}$ by 3% and 6% respectively. A much better precision! 4. Experiment with Real Devices 4.1 Circuit from 2.1 We can run the circuit in section 2.1 on a real device, let's remind ourselves of the circuit:
###Code
%config InlineBackend.figure_format = 'png'
qpe.draw(output='mpl')
# Load our saved IBMQ accounts and get the least busy backend device with less than or equal to n qubits
IBMQ.load_account()
from qiskit.providers.ibmq import least_busy
from qiskit.tools.monitor import job_monitor
provider = IBMQ.get_provider(hub='ibm-q')
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 4 and not x.configuration().simulator and x.status().operational==True))
print("least busy backend: ", backend)
# Run with 2048 shots
shots = 2048
job = execute(qpe, backend=backend, shots=2048, optimization_level=3)
job_monitor(job)
# get the results from the computation
results = job.result()
answer = results.get_counts(qpe)
%config InlineBackend.figure_format = 'svg'
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We can hopefully see that the most likely result is `001` which is the result we would expect from the simulator. Unlike the simulator, there is a probability of measuring something other than `001`, this is due to noise and gate errors in the quantum computer. 5. Exercises 1. Try the experiments above with different gates ($\text{CNOT}$, $S$, $T^\dagger$), what results do you expect? What results do you get?2. Try the experiment with a $Y$-gate, do you get the correct result? (Hint: Remember to make sure $|\psi\rangle$ is an eigenstate of $Y$!) 6. Looking Forward The quantum phase estimation algorithm may seem pointless, since we have to know $\theta$ to perform the controlled-$U$ operations on our quantum computer. We will see in later chapters that it is possible to create circuits for which we don’t know $\theta$, and for which learning theta can tell us something very useful (most famously how to factor a number!) 7. References [1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA. 8. Contributors 03/20/2020 — Hwajung Kang (@HwajungKang) — Fixed inconsistencies with qubit ordering
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Quantum Phase Estimation Contents1. [Overview](overview) 1.1 [Intuition](intuition) 1.2 [Mathematical Basis](maths)2. [Example: T-gate](example_t_gate) 2.1 [Creating the Circuit](creating_the_circuit) 2.2 [Results](results) 3. [Getting More Precision](getting_more_precision) 3.1 [The Problem](the_problem) 3.2 [The Solution](the_solution) 4. [Experimenting on Real Devices](real_devices) 4.1 [With the Circuit from 2.1](circuit_2.1) 5. [Exercises](exercises) 6. [Looking Forward](looking_forward)7. [References](references)8. [Contributors](contributors) Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1. 1. Overview The general quantum circuit for phase estimation is shown below. The top register contains $t$ 'counting' qubits, and the bottom contains qubits in the state $|\psi\rangle$: 1.1 Intuition The quantum phase estimation algorithm uses phase kickback to write the phase of $U$ (in the Fourier basis) to the $t$ qubits in the counting register. We then use the inverse QFT to translate this from the Fourier basis into the computational basis, which we can measure.We remember (from the QFT chapter) that in the Fourier basis the topmost qubit completes one full rotation when counting between $0$ and $2^t$. To count to a number, $x$ between $0$ and $2^t$, we rotate this qubit by $\tfrac{x}{2^t}$ around the z-axis. For the next qubit we rotate by $\tfrac{2x}{2^t}$, then $\tfrac{4x}{2^t}$ for the third qubit.When we use a qubit to control the $U$-gate, the qubit will turn (due to kickback) proportionally to the phase $e^{2i\pi\theta}$. We can use successive $CU$-gates to repeat this rotation an appropriate number of times until we have encoded the phase theta as a number between $0$ and $2^t$ in the Fourier basis. Then we simply use $QFT^\dagger$ to convert this into the computational basis. 1.2 Mathematical Basis As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:i. **Setup**: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form the counting register on which we will store the value $2^n\theta$: $$ \psi_0 = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$ ii. **Superposition**: Apply an $n$-bit Hadamard gate operation $H^{\otimes n}$ on the counting register: $$ \psi_1 = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$iii. **Controlled Unitary Operations**: We need to introduce the controlled unitary $C-U$ that applies the unitary operator $U$ on the target register only if its corresponding control bit is $|1\rangle$. Since $U$ is a unitary operator with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means: $$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$Applying all the $n$ controlled operations $C − U^{2^j}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:\begin{aligned}\psi_{2} & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\& = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle\end{aligned}where $k$ denotes the integer representation of n-bit binary numbers. iv. **Inverse Fourier Transform**: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on [Quantum Fourier Transform and its Qiskit Implementation](qft.ipynb). Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as$$QFT\vert x \rangle = \frac{1}{2^\frac{n}{2}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) $$Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step 2 above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the auxiliary register. Doing so, we find$$\vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle$$ v. **Measurement**: The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring in the computational basis gives the phase in the auxiliary register with high probability: $$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1]. 2. Example: T-gate Let’s take a gate we know well, the $T$-gate, and use Quantum Phase Estimation to estimate its phase. You will remember that the $T$-gate adds a phase of $e^\frac{i\pi}{4}$ to the state $|1\rangle$:$$ T|1\rangle = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{4}\\ \end{bmatrix}\begin{bmatrix}0\\1\\ \end{bmatrix}= e^\frac{i\pi}{4}|1\rangle $$Since QPE will give us $\theta$ where:$$ T|1\rangle = e^{2i\pi\theta}|1\rangle $$We expect to find:$$\theta = \frac{1}{8}$$In this example we will use three qubits and obtain an _exact_ result (not an estimation!) 2.1 Creating the Circuit Let's first prepare our environment:
###Code
#initialization
import matplotlib.pyplot as plt
import numpy as np
import math
# importing Qiskit
from qiskit import IBMQ, Aer
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
# import basic plot tools
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
###Markdown
Now, set up the quantum circuit. We will use four qubits -- qubits 0 to 2 as counting qubits, and qubit 3 as the eigenstate of the unitary operator ($T$). We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate:
###Code
qpe = QuantumCircuit(4, 3)
qpe.x(3)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next, we apply Hadamard gates to the counting qubits:
###Code
for qubit in range(3):
qpe.h(qubit)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next we perform the controlled unitary operations. **Remember:** Qiskit orders its qubits the opposite way round to the image above.
###Code
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe.cu1(math.pi/4, counting_qubit, 3); # This is C-U
repetitions *= 2
qpe.draw()
###Output
_____no_output_____
###Markdown
We apply the inverse quantum Fourier transformation to convert the state of the counting register. Here we provide the code for $QFT^\dagger$:
###Code
def qft_dagger(circ, n):
"""n-qubit QFTdagger the first n qubits in circ"""
# Don't forget the Swaps!
for qubit in range(n//2):
circ.swap(qubit, n-qubit-1)
for j in range(n):
for m in range(j):
circ.cu1(-math.pi/float(2**(j-m)), m, j)
circ.h(j)
###Output
_____no_output_____
###Markdown
We then measure the counting register:
###Code
qpe.barrier()
# Apply inverse QFT
qft_dagger(qpe, 3)
# Measure
qpe.barrier()
for n in range(3):
qpe.measure(n,n)
qpe.draw()
###Output
_____no_output_____
###Markdown
2.2 Results
###Code
backend = Aer.get_backend('qasm_simulator')
shots = 2048
results = execute(qpe, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We see we get one result (`001`) with certainty, which translates to the decimal: `1`. We now need to divide our result (`1`) by $2^n$ to get $\theta$:$$ \theta = \frac{1}{2^3} = \frac{1}{8} $$This is exactly the result we expected! 3. Example: Getting More Precision 3.1 The Problem Instead of a $T$-gate, let’s use a gate with $\theta = \frac{1}{3}$. We set up our circuit as with the last example:
###Code
# Create and set up circuit
qpe2 = QuantumCircuit(4, 3)
# Apply H-Gates to counting qubits:
for qubit in range(3):
qpe2.h(qubit)
# Prepare our eigenstate |psi>:
qpe2.x(3)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe2.cu1(angle, counting_qubit, 3);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe2, 3)
# Measure of course!
for n in range(3):
qpe2.measure(n,n)
qpe2.draw()
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe2, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We are expecting the result $\theta = 0.3333\dots$, and we see our most likely results are `010(bin) = 2(dec)` and `011(bin) = 3(dec)`. These two results would tell us that $\theta = 0.25$ (off by 25%) and $\theta = 0.375$ (off by 13%) respectively. The true value of $\theta$ lies between the values we can get from our counting bits, and this gives us uncertainty and imprecision. 3.2 The Solution To get more precision we simply add more counting qubits. We are going to add two more counting qubits:
###Code
# Create and set up circuit
qpe3 = QuantumCircuit(6, 5)
# Apply H-Gates to counting qubits:
for qubit in range(5):
qpe3.h(qubit)
# Prepare our eigenstate |psi>:
qpe3.x(5)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(5):
for i in range(repetitions):
qpe3.cu1(angle, counting_qubit, 5);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe3, 5)
# Measure of course!
qpe3.barrier()
for n in range(5):
qpe3.measure(n,n)
qpe3.draw()
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe3, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
The two most likely measurements are now `01011` (decimal 11) and `01010` (decimal 10). Measuring these results would tell us $\theta$ is:$$\theta = \frac{11}{2^5} = 0.344,\;\text{ or }\;\; \theta = \frac{10}{2^5} = 0.313$$These two results differ from $\frac{1}{3}$ by 3% and 6% respectively. A much better precision! 4. Experiment with Real Devices 4.1 Circuit from 2.1 We can run the circuit in section 2.1 on a real device, let's remind ourselves of the circuit:
###Code
qpe.draw()
# Load our saved IBMQ accounts and get the least busy backend device with less than or equal to n qubits
IBMQ.load_account()
from qiskit.providers.ibmq import least_busy
from qiskit.tools.monitor import job_monitor
provider = IBMQ.get_provider(hub='ibm-q')
backend = provider.get_backend('ibmq_vigo')
# Run with 2048 shots
shots = 2048
job = execute(qpe, backend=backend, shots=2048, optimization_level=3)
job_monitor(job)
# get the results from the computation
results = job.result()
answer = results.get_counts(qpe)
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We can hopefully see that the most likely result is `001`, which is the result we would expect from the simulator. Unlike the simulator, there is a probability of measuring something other than `001`; this is due to noise and gate errors in the quantum computer. 5. Exercises 1. Try the experiments above with different gates ($\text{CNOT}$, $S$, $T^\dagger$), what results do you expect? What results do you get?2. Try the experiment with a $Y$-gate, do you get the correct result? (Hint: Remember to make sure $|\psi\rangle$ is an eigenstate of $Y$!) 6. Looking Forward The quantum phase estimation algorithm may seem pointless, since we have to know $\theta$ to perform the controlled-$U$ operations on our quantum computer. We will see in later chapters that it is possible to create circuits for which we don’t know $\theta$, and for which learning $\theta$ can tell us something very useful (most famously how to factor a number!) 7. References [1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA. 8. Contributors 03/20/2020 — Hwajung Kang (@HwajungKang) — Fixed inconsistencies with qubit ordering
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Quantum Phase Estimation Contents1. [Overview](overview) 1.1 [Intuition](intuition) 1.2 [Mathematical Basis](maths)2. [Example: T-gate](example_t_gate) 2.1 [Creating the Circuit](creating_the_circuit) 2.2 [Results](results) 3. [Getting More Precision](getting_more_precision) 3.1 [The Problem](the_problem) 3.2 [The Solution](the_solution) 4. [Experimenting on Real Devices](real_devices) 4.1 [With the Circuit from 2.1](circuit_2.1) 5. [Exercises](exercises) 6. [Looking Forward](looking_forward)7. [References](references)8. [Contributors](contributors) Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1. 1. Overview The general quantum circuit for phase estimation is shown below. The top register contains $t$ 'counting' qubits, and the bottom contains qubits in the state $|\psi\rangle$: 1.1 Intuition The quantum phase estimation algorithm uses phase kickback to write the phase of $U$ (in the Fourier basis) to the $t$ qubits in the counting register. We then use the inverse QFT to translate this from the Fourier basis into the computational basis, which we can measure.We remember (from the QFT chapter) that in the Fourier basis the topmost qubit completes one full rotation when counting between $0$ and $2^t$. To count to a number, $x$ between $0$ and $2^t$, we rotate this qubit by $\tfrac{x}{2^t}$ around the z-axis. For the next qubit we rotate by $\tfrac{2x}{2^t}$, then $\tfrac{4x}{2^t}$ for the third qubit.When we use a qubit to control the $U$-gate, the qubit will turn (due to kickback) proportionally to the phase $e^{2i\pi\theta}$. We can use successive $CU$-gates to repeat this rotation an appropriate number of times until we have encoded the phase theta as a number between $0$ and $2^t$ in the Fourier basis. Then we simply use $QFT^\dagger$ to convert this into the computational basis. 1.2 Mathematical Foundation As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:i. **Setup**: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form the counting register on which we will store the value $2^n\theta$: $$ \psi_0 = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$ ii. **Superposition**: Apply a $n$-bit Hadamard gate operation $H^{\otimes n}$ on the counting register: $$ \psi_1 = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$iii. **Controlled Unitary Operations**: We need to introduce the controlled unitary $C-U$ that applies the unitary operator $U$ on the target register only if its corresponding control bit is $|1\rangle$. 
Since $U$ is a unitary operatory with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means: $$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$Applying all the $n$ controlled operations $C − U^{2^j}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:\begin{aligned}\psi_{2} & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\& = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle\end{aligned}where $k$ denotes the integer representation of n-bit binary numbers. iv. **Inverse Fourier Transform**: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on [Quantum Fourier Transform and its Qiskit Implementation](qft.ipynb). Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as$$QFT\vert x \rangle = \frac{1}{2^\frac{n}{2}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) $$Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step 2 above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the auxiliary register. Doing so, we find$$\vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle$$ v. **Measurement**: The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring in the computational basis gives the phase in the auxiliary register with high probability: $$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1]. 2. Example: T-gate Let’s take a gate we know well, the $T$-gate, and use Quantum Phase Estimation to estimate its phase. You will remember that the $T$-gate adds a phase of $e^\frac{i\pi}{4}$ to the state $|1\rangle$:$$ T|1\rangle = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{4}\\ \end{bmatrix}\begin{bmatrix}0\\1\\ \end{bmatrix}= e^\frac{i\pi}{4}|1\rangle $$Since QPE will give us $\theta$ where:$$ T|1\rangle = e^{2i\pi\theta}|1\rangle $$We expect to find:$$\theta = \frac{1}{8}$$In this example we will use three qubits and obtain an _exact_ result (not an estimation!) 2.1 Creating the Circuit Let's first prepare our environment:
###Code
#initialization
import matplotlib.pyplot as plt
import numpy as np
import math
# importing Qiskit
from qiskit import IBMQ, Aer, transpile, assemble
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
# import basic plot tools
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
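###Markdown
Before building the circuit, we can do a quick classical sanity check (a small numpy sketch, not part of the algorithm itself): the $|1\rangle$ eigenvalue of the $T$-gate is $e^{i\pi/4}$, so the phase we expect QPE to report is $\theta = 1/8$.
###Code
import numpy as np
# Classical check: read the phase of the |1> eigenvalue of T and divide by 2*pi.
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])
theta_expected = np.angle(T[1, 1]) / (2 * np.pi)
print(theta_expected)   # expected value: 0.125
###Output
_____no_output_____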
###Markdown
Now, set up the quantum circuit. We will use four qubits -- qubits 0 to 2 as counting qubits, and qubit 3 as the eigenstate of the unitary operator ($T$). We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate:
###Code
qpe = QuantumCircuit(4, 3)
qpe.x(3)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next, we apply Hadamard gates to the counting qubits:
###Code
for qubit in range(3):
qpe.h(qubit)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next we perform the controlled unitary operations. **Remember:** Qiskit orders its qubits the opposite way round to the image above.
###Code
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe.cp(math.pi/4, counting_qubit, 3); # This is C-U
repetitions *= 2
qpe.draw()
###Output
_____no_output_____
###Markdown
We apply the inverse quantum Fourier transformation to convert the state of the counting register. Here we provide the code for $QFT^\dagger$:
###Code
def qft_dagger(qc, n):
"""n-qubit QFTdagger the first n qubits in circ"""
# Don't forget the Swaps!
for qubit in range(n//2):
qc.swap(qubit, n-qubit-1)
for j in range(n):
for m in range(j):
qc.cp(-math.pi/float(2**(j-m)), m, j)
qc.h(j)
###Output
_____no_output_____
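###Markdown
As an aside, recent Qiskit versions also ship a ready-made QFT circuit in `qiskit.circuit.library`. The cell below is only a sketch of how the hand-written `qft_dagger` could be swapped for the library version; it assumes the library circuit follows the same qubit-ordering convention as this chapter, and it is not used in the rest of the notebook.
###Code
from qiskit.circuit.library import QFT
# Hypothetical alternative: copy the circuit built so far and attach the library's
# inverse QFT to the three counting qubits instead of calling qft_dagger.
qpe_alt = qpe.copy()
qpe_alt = qpe_alt.compose(QFT(3, inverse=True), qubits=[0, 1, 2])
qpe_alt.draw()
###Output
_____no_output_____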
###Markdown
We then measure the counting register:
###Code
qpe.barrier()
# Apply inverse QFT
qft_dagger(qpe, 3)
# Measure
qpe.barrier()
for n in range(3):
qpe.measure(n,n)
qpe.draw()
###Output
_____no_output_____
###Markdown
2.2 Results
###Code
qasm_sim = Aer.get_backend('qasm_simulator')
shots = 2048
t_qpe = transpile(qpe, qasm_sim)
qobj = assemble(t_qpe, shots=shots)
results = qasm_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
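###Markdown
We can also read the result programmatically from the counts dictionary. This is a minimal sketch, assuming `answer` is the counts returned by the previous cell and that 3 counting qubits were used; `most_likely` and `theta_estimate` are just illustrative names.
###Code
# Take the most frequent bitstring, read it as an integer, and divide by 2**n.
n_counting = 3
most_likely = max(answer, key=answer.get)            # e.g. '001'
theta_estimate = int(most_likely, 2) / 2**n_counting
print(most_likely, "->", theta_estimate)
###Output
_____no_output_____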
###Markdown
We see we get one result (`001`) with certainty, which translates to the decimal: `1`. We now need to divide our result (`1`) by $2^n$ to get $\theta$:$$ \theta = \frac{1}{2^3} = \frac{1}{8} $$This is exactly the result we expected! 3. Example: Getting More Precision 3.1 The Problem Instead of a $T$-gate, let’s use a gate with $\theta = \frac{1}{3}$. We set up our circuit as with the last example:
###Code
# Create and set up circuit
qpe2 = QuantumCircuit(4, 3)
# Apply H-Gates to counting qubits:
for qubit in range(3):
qpe2.h(qubit)
# Prepare our eigenstate |psi>:
qpe2.x(3)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe2.cp(angle, counting_qubit, 3);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe2, 3)
# Measure of course!
for n in range(3):
qpe2.measure(n,n)
qpe2.draw()
# Let's see the results!
qasm_sim = Aer.get_backend('qasm_simulator')
shots = 4096
t_qpe2 = transpile(qpe2, qasm_sim)
qobj = assemble(t_qpe2, shots=shots)
results = qasm_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
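###Markdown
Before reading off the results, it helps to list every phase a 3-bit counting register can represent and its distance from $\theta = 1/3$. This is plain Python arithmetic, independent of the quantum circuit.
###Code
# All phases representable by 3 counting qubits, and their distance from 1/3.
theta = 1/3
for x in range(2**3):
    approx = x / 2**3
    print(f"{x:03b}  {approx:.3f}  error = {abs(approx - theta):.3f}")
###Output
_____no_output_____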
###Markdown
We are expecting the result $\theta = 0.3333\dots$, and we see our most likely results are `010(bin) = 2(dec)` and `011(bin) = 3(dec)`. These two results would tell us that $\theta = 0.25$ (off by 25%) and $\theta = 0.375$ (off by 13%) respectively. The true value of $\theta$ lies between the values we can get from our counting bits, and this gives us uncertainty and imprecision. 3.2 The Solution To get more precision we simply add more counting qubits. We are going to add two more counting qubits:
###Code
# Create and set up circuit
qpe3 = QuantumCircuit(6, 5)
# Apply H-Gates to counting qubits:
for qubit in range(5):
qpe3.h(qubit)
# Prepare our eigenstate |psi>:
qpe3.x(5)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(5):
for i in range(repetitions):
qpe3.cp(angle, counting_qubit, 5);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe3, 5)
# Measure of course!
qpe3.barrier()
for n in range(5):
qpe3.measure(n,n)
qpe3.draw()
# Let's see the results!
qasm_sim = Aer.get_backend('qasm_simulator')
shots = 4096
t_qpe3 = transpile(qpe3, qasm_sim)
qobj = assemble(t_qpe3, shots=shots)
results = qasm_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
The two most likely measurements are now `01011` (decimal 11) and `01010` (decimal 10). Measuring these results would tell us $\theta$ is:$$\theta = \frac{11}{2^5} = 0.344,\;\text{ or }\;\; \theta = \frac{10}{2^5} = 0.313$$These two results differ from $\frac{1}{3}$ by 3% and 6% respectively. A much better precision! 4. Experiment with Real Devices 4.1 Circuit from 2.1 We can run the circuit in section 2.1 on a real device, let's remind ourselves of the circuit:
###Code
qpe.draw()
IBMQ.load_account()
from qiskit.tools.monitor import job_monitor
provider = IBMQ.get_provider(hub='ibm-q')
santiago = provider.get_backend('ibmq_santiago')
# Run with 2048 shots
shots = 2048
t_qpe = transpile(qpe, santiago, optimization_level=3)
qobj = assemble(t_qpe, shots=shots)
job = santiago.run(qobj)
job_monitor(job)
# get the results from the computation
results = job.result()
answer = results.get_counts(qpe)
plot_histogram(answer)
###Output
_____no_output_____
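###Markdown
One crude way to quantify the hardware noise is the fraction of shots that landed on the ideal outcome. A minimal sketch, assuming `answer` holds the device counts from the cell above.
###Code
# Fraction of shots on the real device that returned the ideal answer '001'.
ideal = '001'
total_shots = sum(answer.values())
print(answer.get(ideal, 0) / total_shots)
###Output
_____no_output_____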
###Markdown
We can hopefully see that the most likely result is `001`, which is the result we would expect from the simulator. Unlike the simulator, there is a probability of measuring something other than `001`; this is due to noise and gate errors in the quantum computer. 5. Exercises 1. Try the experiments above with different gates ($\text{CNOT}$, Controlled-$S$, Controlled-$T^\dagger$). What results do you expect? What results do you get? 2. Try the experiment with a Controlled-$Y$-gate. Do you get the result you expected? (Hint: Remember to make sure $|\psi\rangle$ is an eigenstate of $Y$!) 6. Looking Forward The quantum phase estimation algorithm may seem pointless, since we have to know $\theta$ to perform the controlled-$U$ operations on our quantum computer. We will see in later chapters that it is possible to create circuits for which we don’t know $\theta$, and for which learning $\theta$ can tell us something very useful (most famously, how to factor a number!). 7. References [1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA. 8. Contributors 03/20/2020 — Hwajung Kang (@HwajungKang) — Fixed inconsistencies with qubit ordering
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Quantum Phase Estimation Contents1. [Overview](overview) 1.1 [Intuition](intuition) 1.2 [Mathematical Basis](maths)2. [Example: T-gate](example_t_gate) 2.1 [Creating the Circuit](creating_the_circuit) 2.2 [Results](results) 3. [Getting More Precision](getting_more_precision) 3.1 [The Problem](the_problem) 3.2 [The Solution](the_solution) 4. [Experimenting on Real Devices](real_devices) 4.1 [With the Circuit from 2.1](circuit_2.1) 5. [Exercises](exercises) 6. [Looking Forward](looking_forward)7. [References](references)8. [Contributors](contributors) Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1. 1. Overview The general quantum circuit for phase estimation is shown below. The top register contains $t$ 'counting' qubits, and the bottom contains qubits in the state $|\psi\rangle$: 1.1 Intuition The quantum phase estimation algorithm uses phase kickback to write the phase of $U$ (in the Fourier basis) to the $t$ qubits in the counting register. We then use the inverse QFT to translate this from the Fourier basis into the computational basis, which we can measure.We remember (from the QFT chapter) that in the Fourier basis the topmost qubit completes one full rotation when counting between $0$ and $2^t$. To count to a number, $x$ between $0$ and $2^t$, we rotate this qubit by $\tfrac{x}{2^t}$ around the z-axis. For the next qubit we rotate by $\tfrac{2x}{2^t}$, then $\tfrac{4x}{2^t}$ for the third qubit.When we use a qubit to control the $U$-gate, the qubit will turn (due to kickback) proportionally to the phase $e^{2i\pi\theta}$. We can use successive $CU$-gates to repeat this rotation an appropriate number of times until we have encoded the phase theta as a number between $0$ and $2^t$ in the Fourier basis. Then we simply use $QFT^\dagger$ to convert this into the computational basis. 1.2 Mathematical Basis As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:0. **Setup**: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form the counting register on which we will store the value $2^n\theta$: $$ \psi_0 = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$ 1. **Superposition**: Apply a $n$-bit Hadamard gate operation $H^{\otimes n}$ on the counting register: $$ \psi_1 = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$2. **Controlled Unitary Operations**: We need to introduce the controlled unitary $C-U$ that applies the unitary operator $U$ on the target register only if its corresponding control bit is $|1\rangle$. 
Since $U$ is a unitary operatory with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means: $$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$Applying all the $n$ controlled operations $C − U^{2^j}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:\begin{aligned}\psi_{2} & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\& = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle\end{aligned}where $k$ denotes the integer representation of n-bit binary numbers. 3. **Inverse Fourier Transform**: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on [Quantum Fourier Transform and its Qiskit Implementation](qft.ipynb). Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as$$QFT\vert x \rangle = \frac{1}{2^\frac{n}{2}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) $$Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step 2 above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the ancilla register. Doing so, we find$$\vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle$$ 4. **Measurement**: The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring in the computational basis gives the phase in the ancilla register with high probability: $$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1]. 2. Example: T-gate Let’s take a gate we know well, the $T$-gate, and use Quantum Phase Estimation to estimate its phase. You will remember that the $T$-gate adds a phase of $e^\frac{i\pi}{4}$ to the state $|1\rangle$:$$ T|1\rangle = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{4}\\ \end{bmatrix}\begin{bmatrix}0\\1\\ \end{bmatrix}= e^\frac{i\pi}{4}|1\rangle $$Since QPE will give us $\theta$ where:$$ T|1\rangle = e^{2i\pi\theta}|1\rangle $$We expect to find:$$\theta = \frac{1}{8}$$In this example we will use three qubits and obtain an _exact_ result (not an estimation!) 2.1 Creating the Circuit Let's first prepare our environment:
###Code
#initialization
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'svg' # Makes the images look nice
import numpy as np
import math
# importing Qiskit
from qiskit import IBMQ, Aer
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
# import basic plot tools
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
###Markdown
Now, set up the quantum circuit. We will use four qubits -- qubits 0 to 2 as counting qubits, and qubit 3 as the eigenstate of the unitary operator ($T$). We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate:
###Code
qpe = QuantumCircuit(4, 3)
qpe.x(3)
qpe.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Next, we apply Hadamard gates to the counting qubits:
###Code
for qubit in range(3):
qpe.h(qubit)
qpe.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Next we perform the controlled unitary operations:
###Code
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe.cu1(math.pi/4, counting_qubit, 3); # This is C-U
repetitions *= 2
qpe.draw(output='mpl')
###Output
_____no_output_____
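###Markdown
Each counting qubit has now picked up a relative phase of $e^{2\pi i\theta 2^j}$ through kickback. The cell below is a purely classical numpy check of those phases for $\theta = 1/8$, assuming the qubit-$j$-controls-$U^{2^j}$ layout used above.
###Code
import numpy as np
# Kickback phase on each counting qubit for theta = 1/8.
theta = 1/8
for j in range(3):
    phase = np.exp(2j * np.pi * theta * 2**j)
    print("counting qubit", j, "phase:", np.round(phase, 3))
###Output
_____no_output_____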
###Markdown
We apply the inverse quantum Fourier transformation to convert the state of the counting register. Here we provide the code for $QFT^\dagger$:
###Code
def qft_dagger(circ, n):
"""n-qubit QFTdagger the first n qubits in circ"""
# Don't forget the Swaps!
for qubit in range(n//2):
circ.swap(qubit, n-qubit-1)
for j in range(0,n):
for m in range(j):
circ.cu1(-math.pi/float(2**(j-m)), m, j)
circ.h(j)
###Output
_____no_output_____
###Markdown
We then measure the counting register. The swaps at the start of `qft_dagger` already take care of the qubit ordering, so each counting qubit is simply measured into the classical bit with the same index:
###Code
qpe.barrier()
# Apply inverse QFT
qft_dagger(qpe, 3)
# Measure
qpe.barrier()
for n in range(3):
qpe.measure(n,n)
%config InlineBackend.figure_format = 'png' # stops the images getting too big
qpe.draw(output="mpl")
###Output
_____no_output_____
###Markdown
2.2 Results
###Code
backend = Aer.get_backend('qasm_simulator')
shots = 2048
results = execute(qpe, backend=backend, shots=shots).result()
answer = results.get_counts()
%config InlineBackend.figure_format = 'svg' # Image formatting again
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We see we get one result (`001`) with certainty, which translates to the decimal: `1`. We now need to divide our result (`1`) by $2^n$ to get $\theta$:$$ \theta = \frac{1}{2^3} = \frac{1}{8} $$This is exactly the result we expected! 3. Example: Getting More Precision 3.1 The Problem Instead of a $T$-gate, let’s use a gate with $\theta = \frac{1}{3}$. We set up our circuit as with the last example:
###Code
# Create and set up circuit
qpe2 = QuantumCircuit(4, 3)
# Apply H-Gates to counting qubits:
for qubit in range(3):
qpe2.h(qubit)
# Prepare our eigenstate |psi>:
qpe2.x(3)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe2.cu1(angle, counting_qubit, 3);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe2, 3)
# Measure of course!
for n in range(3):
qpe2.measure(n,n)
%config InlineBackend.figure_format = 'png' # stops the images getting too big
qpe2.draw(output='mpl')
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe2, backend=backend, shots=shots).result()
answer = results.get_counts()
%config InlineBackend.figure_format = 'svg'
plot_histogram(answer)
###Output
_____no_output_____
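###Markdown
We can also compare the measured histogram against the ideal distribution implied by the expression for $\vert\psi_3\rangle$ in section 1.2, $P(x) = \left|\frac{1}{2^n}\sum_{k} e^{2\pi i k(\theta - x/2^n)}\right|^2$. The cell below is a numpy sketch for $\theta = 1/3$ and $n = 3$; it should peak at the bitstrings closest to $2^n\theta$.
###Code
import numpy as np
# Ideal QPE outcome probabilities for theta = 1/3 with 3 counting qubits.
n, theta = 3, 1/3
k = np.arange(2**n)
for x in range(2**n):
    amplitude = np.sum(np.exp(2j * np.pi * k * (theta - x / 2**n))) / 2**n
    print(f"{x:03b}  P = {abs(amplitude)**2:.3f}")
###Output
_____no_output_____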
###Markdown
We are expecting the result $\theta = 0.3333\dots$, and we see our most likely results are `010(bin) = 2(dec)` and `011(bin) = 3(dec)`. These two results would tell us that $\theta = 0.25$ (off by 25%) and $\theta = 0.375$ (off by 13%) respectively. The true value of $\theta$ lies between the values we can get from our counting bits, and this gives us uncertainty and imprecision. 3.2 The Solution To get more precision we simply add more counting qubits. We are going to add two more counting qubits:
###Code
# Create and set up circuit
qpe3 = QuantumCircuit(6, 5)
# Apply H-Gates to counting qubits:
for qubit in range(5):
qpe3.h(qubit)
# Prepare our eigenstate |psi>:
qpe3.x(5)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(5):
for i in range(repetitions):
qpe3.cu1(angle, counting_qubit, 5);
    repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe3, 5)
# Measure of course!
for n in range(5):
qpe3.measure(n,n)
%config InlineBackend.figure_format = 'png'
qpe3.draw(output='mpl')
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe3, backend=backend, shots=shots).result()
answer = results.get_counts()
%config InlineBackend.figure_format = 'svg'
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
The two most likely measurements are now `01011` (decimal 11) and `01010` (decimal 10). Measuring these results would tell us $\theta$ is:$$\theta = \frac{11}{2^5} = 0.344,\;\text{ or }\;\; \theta = \frac{10}{2^5} = 0.313$$These two results differ from $\frac{1}{3}$ by 3% and 6% respectively. A much better precision! 4. Experiment with Real Devices 4.1 Circuit from 2.1 We can run the circuit in section 2.1 on a real device, let's remind ourselves of the circuit:
###Code
%config InlineBackend.figure_format = 'png'
qpe.draw(output='mpl')
# Load our saved IBMQ accounts and get the least busy backend device with at least 4 qubits
IBMQ.load_account()
from qiskit.providers.ibmq import least_busy
from qiskit.tools.monitor import job_monitor
provider = IBMQ.get_provider(hub='ibm-q')
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 4 and not x.configuration().simulator and x.status().operational==True))
print("least busy backend: ", backend)
# Run with 2048 shots
shots = 2048
job = execute(qpe, backend=backend, shots=shots, optimization_level=3)
job_monitor(job)
# get the results from the computation
results = job.result()
answer = results.get_counts(qpe)
%config InlineBackend.figure_format = 'svg'
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We can hopefully see that the most likely result is `001`, which is the result we would expect from the simulator. Unlike the simulator, there is a probability of measuring something other than `001`; this is due to noise and gate errors in the quantum computer. 5. Exercises 1. Try the experiments above with different gates ($\text{CNOT}$, $S$, $T^\dagger$). What results do you expect? What results do you get? 2. Try the experiment with a $Y$-gate. Do you get the correct result? (Hint: Remember to make sure $|\psi\rangle$ is an eigenstate of $Y$!) 6. Looking Forward The quantum phase estimation algorithm may seem pointless, since we have to know $\theta$ to perform the controlled-$U$ operations on our quantum computer. We will see in later chapters that it is possible to create circuits for which we don’t know $\theta$, and for which learning $\theta$ can tell us something very useful (most famously, how to factor a number!). 7. References [1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA. 8. Contributors 03/20/2020 — Hwajung Kang (@HwajungKang) — Fixed inconsistencies with qubit ordering
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Quantum Phase Estimation Contents1. [Overview](overview) 1.1 [Intuition](intuition) 1.2 [Mathematical Basis](maths)2. [Example: T-gate](example_t_gate) 2.1 [Creating the Circuit](creating_the_circuit) 2.2 [Results](results) 3. [Getting More Precision](getting_more_precision) 3.1 [The Problem](the_problem) 3.2 [The Solution](the_solution) 4. [Experimenting on Real Devices](real_devices) 4.1 [With the Circuit from 2.1](circuit_2.1) 5. [Exercises](exercises) 6. [Looking Forward](looking_forward)7. [References](references)8. [Contributors](contributors) Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1. 1. Overview The general quantum circuit for phase estimation is shown below. The top register contains $t$ 'counting' qubits, and the bottom contains qubits in the state $|\psi\rangle$: 1.1 Intuition The quantum phase estimation algorithm uses phase kickback to write the phase of $U$ (in the Fourier basis) to the $t$ qubits in the counting register. We then use the inverse QFT to translate this from the Fourier basis into the computational basis, which we can measure.We remember (from the QFT chapter) that in the Fourier basis the topmost qubit completes one full rotation when counting between $0$ and $2^t$. To count to a number, $x$ between $0$ and $2^t$, we rotate this qubit by $\tfrac{x}{2^t}$ around the z-axis. For the next qubit we rotate by $\tfrac{2x}{2^t}$, then $\tfrac{4x}{2^t}$ for the third qubit.When we use a qubit to control the $U$-gate, the qubit will turn (due to kickback) proportionally to the phase $e^{2i\pi\theta}$. We can use successive $CU$-gates to repeat this rotation an appropriate number of times until we have encoded the phase theta as a number between $0$ and $2^t$ in the Fourier basis. Then we simply use $QFT^\dagger$ to convert this into the computational basis. 1.2 Mathematical Basis As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:i. **Setup**: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form the counting register on which we will store the value $2^n\theta$: $$ \psi_0 = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$ ii. **Superposition**: Apply a $n$-bit Hadamard gate operation $H^{\otimes n}$ on the counting register: $$ \psi_1 = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$iii. **Controlled Unitary Operations**: We need to introduce the controlled unitary $C-U$ that applies the unitary operator $U$ on the target register only if its corresponding control bit is $|1\rangle$. 
Since $U$ is a unitary operatory with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means: $$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$Applying all the $n$ controlled operations $C − U^{2^j}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:\begin{aligned}\psi_{2} & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\& = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle\end{aligned}where $k$ denotes the integer representation of n-bit binary numbers. iv. **Inverse Fourier Transform**: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on [Quantum Fourier Transform and its Qiskit Implementation](qft.ipynb). Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as$$QFT\vert x \rangle = \frac{1}{2^\frac{n}{2}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) $$Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step 2 above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the ancilla register. Doing so, we find$$\vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle$$ v. **Measurement**: The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring in the computational basis gives the phase in the ancilla register with high probability: $$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1]. 2. Example: T-gate Let’s take a gate we know well, the $T$-gate, and use Quantum Phase Estimation to estimate its phase. You will remember that the $T$-gate adds a phase of $e^\frac{i\pi}{4}$ to the state $|1\rangle$:$$ T|1\rangle = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{4}\\ \end{bmatrix}\begin{bmatrix}0\\1\\ \end{bmatrix}= e^\frac{i\pi}{4}|1\rangle $$Since QPE will give us $\theta$ where:$$ T|1\rangle = e^{2i\pi\theta}|1\rangle $$We expect to find:$$\theta = \frac{1}{8}$$In this example we will use three qubits and obtain an _exact_ result (not an estimation!) 2.1 Creating the Circuit Let's first prepare our environment:
###Code
#initialization
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'svg' # Makes the images look nice
import numpy as np
import math
# importing Qiskit
from qiskit import IBMQ, Aer
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
# import basic plot tools
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
###Markdown
Now, set up the quantum circuit. We will use four qubits -- qubits 0 to 2 as counting qubits, and qubit 3 as the eigenstate of the unitary operator ($T$). We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate:
###Code
qpe = QuantumCircuit(4, 3)
qpe.x(3)
qpe.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Next, we apply Hadamard gates to the counting qubits:
###Code
for qubit in range(3):
qpe.h(qubit)
qpe.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Next we perform the controlled unitary operations. **Remember:** Qiskit orders its qubits the opposite way round to the image above.
###Code
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe.cu1(math.pi/4, counting_qubit, 3); # This is C-U
repetitions *= 2
qpe.draw(output='mpl')
###Output
_____no_output_____
###Markdown
We apply the inverse quantum Fourier transformation to convert the state of the counting register. Here we provide the code for $QFT^\dagger$:
###Code
def qft_dagger(circ, n):
"""n-qubit QFTdagger the first n qubits in circ"""
# Don't forget the Swaps!
for qubit in range(n//2):
circ.swap(qubit, n-qubit-1)
for j in range(n):
for m in range(j):
circ.cu1(-math.pi/float(2**(j-m)), m, j)
circ.h(j)
###Output
_____no_output_____
###Markdown
We then measure the counting register. The swaps at the start of `qft_dagger` already take care of the qubit ordering, so each counting qubit is simply measured into the classical bit with the same index:
###Code
qpe.barrier()
# Apply inverse QFT
qft_dagger(qpe, 3)
# Measure
qpe.barrier()
for n in range(3):
qpe.measure(n,n)
qpe.draw(output="mpl")
###Output
_____no_output_____
###Markdown
2.2 Results
###Code
backend = Aer.get_backend('qasm_simulator')
shots = 2048
results = execute(qpe, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We see we get one result (`001`) with certainty, which translates to the decimal: `1`. We now need to divide our result (`1`) by $2^n$ to get $\theta$:$$ \theta = \frac{1}{2^3} = \frac{1}{8} $$This is exactly the result we expected! 3. Example: Getting More Precision 3.1 The Problem Instead of a $T$-gate, let’s use a gate with $\theta = \frac{1}{3}$. We set up our circuit as with the last example:
###Code
# Create and set up circuit
qpe2 = QuantumCircuit(4, 3)
# Apply H-Gates to counting qubits:
for qubit in range(3):
qpe2.h(qubit)
# Prepare our eigenstate |psi>:
qpe2.x(3)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe2.cu1(angle, counting_qubit, 3);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe2, 3)
# Measure of course!
for n in range(3):
qpe2.measure(n,n)
qpe2.draw(output='mpl')
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe2, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
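###Markdown
We are about to build a third, larger version of this circuit, so it is worth sketching a small helper that constructs it for any controlled-phase angle and any number of counting qubits. This is only a sketch: `build_qpe` is a hypothetical name, and it reuses the `cu1`-based construction and the `qft_dagger` function defined above.
###Code
def build_qpe(angle, n_counting):
    """Hypothetical helper: QPE circuit for a controlled-phase gate of the given angle,
    with n_counting counting qubits and the eigenstate qubit prepared in |1>."""
    qc = QuantumCircuit(n_counting + 1, n_counting)
    qc.x(n_counting)                           # prepare |psi> = |1>
    for qubit in range(n_counting):            # superposition on the counting register
        qc.h(qubit)
    repetitions = 1
    for counting_qubit in range(n_counting):   # qubit j controls U^(2^j)
        for _ in range(repetitions):
            qc.cu1(angle, counting_qubit, n_counting)
        repetitions *= 2
    qft_dagger(qc, n_counting)                 # inverse QFT on the counting register
    for n in range(n_counting):
        qc.measure(n, n)
    return qc

# For example, the 3-qubit theta = 1/3 circuit from above:
qpe2_check = build_qpe(2*math.pi/3, 3)
qpe2_check.draw(output='mpl')
###Output
_____no_output_____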
###Markdown
We are expecting the result $\theta = 0.3333\dots$, and we see our most likely results are `010(bin) = 2(dec)` and `011(bin) = 3(dec)`. These two results would tell us that $\theta = 0.25$ (off by 25%) and $\theta = 0.375$ (off by 13%) respectively. The true value of $\theta$ lies between the values we can get from our counting bits, and this gives us uncertainty and imprecision. 3.2 The Solution To get more precision we simply add more counting qubits. We are going to add two more counting qubits:
###Code
# Create and set up circuit
qpe3 = QuantumCircuit(6, 5)
# Apply H-Gates to counting qubits:
for qubit in range(5):
qpe3.h(qubit)
# Prepare our eigenstate |psi>:
qpe3.x(5)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(5):
for i in range(repetitions):
qpe3.cu1(angle, counting_qubit, 5);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe3, 5)
# Measure of course!
for n in range(5):
qpe3.measure(n,n)
qpe3.draw(output='mpl')
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe3, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
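###Markdown
With $t$ counting qubits the representable phases form a grid of spacing $1/2^t$, so the best available estimate is never more than $1/2^{t+1}$ away from the true $\theta$ (ignoring the failure probability discussed in [1]). A quick sketch of how the nearest approximation of $1/3$ improves as $t$ grows:
###Code
# Nearest representable phase for t counting qubits, and its distance from 1/3.
theta = 1/3
for t in range(3, 9):
    nearest = round(theta * 2**t) / 2**t
    print(t, "counting qubits ->", nearest, " error =", abs(nearest - theta))
###Output
_____no_output_____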
###Markdown
The two most likely measurements are now `01011` (decimal 11) and `01010` (decimal 10). Measuring these results would tell us $\theta$ is:$$\theta = \frac{11}{2^5} = 0.344,\;\text{ or }\;\; \theta = \frac{10}{2^5} = 0.313$$These two results differ from $\frac{1}{3}$ by 3% and 6% respectively. A much better precision! 4. Experiment with Real Devices 4.1 Circuit from 2.1 We can run the circuit in section 2.1 on a real device, let's remind ourselves of the circuit:
###Code
qpe.draw(output='mpl')
# Load our saved IBMQ accounts and get the least busy backend device with at least 4 qubits
IBMQ.load_account()
from qiskit.providers.ibmq import least_busy
from qiskit.tools.monitor import job_monitor
provider = IBMQ.get_provider(hub='ibm-q')
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 4 and not x.configuration().simulator and x.status().operational==True))
print("least busy backend: ", backend)
# Run with 2048 shots
shots = 2048
job = execute(qpe, backend=backend, shots=shots, optimization_level=3)
job_monitor(job)
# get the results from the computation
results = job.result()
answer = results.get_counts(qpe)
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We can hopefully see that the most likely result is `001`, which is the result we would expect from the simulator. Unlike the simulator, there is a probability of measuring something other than `001`; this is due to noise and gate errors in the quantum computer. 5. Exercises 1. Try the experiments above with different gates ($\text{CNOT}$, $S$, $T^\dagger$). What results do you expect? What results do you get? 2. Try the experiment with a $Y$-gate. Do you get the correct result? (Hint: Remember to make sure $|\psi\rangle$ is an eigenstate of $Y$!) 6. Looking Forward The quantum phase estimation algorithm may seem pointless, since we have to know $\theta$ to perform the controlled-$U$ operations on our quantum computer. We will see in later chapters that it is possible to create circuits for which we don’t know $\theta$, and for which learning $\theta$ can tell us something very useful (most famously, how to factor a number!). 7. References [1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA. 8. Contributors 03/20/2020 — Hwajung Kang (@HwajungKang) — Fixed inconsistencies with qubit ordering
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Quantum Phase Estimation Contents1. [Overview](overview) 1.1 [Intuition](intuition) 1.2 [Mathematical Basis](maths)2. [Example: T-gate](example_t_gate) 2.1 [Creating the Circuit](creating_the_circuit) 2.2 [Results](results) 3. [Getting More Precision](getting_more_precision) 3.1 [The Problem](the_problem) 3.2 [The Solution](the_solution) 4. [Experimenting on Real Devices](real_devices) 4.1 [With the Circuit from 2.1](circuit_2.1) 5. [Exercises](exercises) 6. [Looking Forward](looking_forward)7. [References](references)8. [Contributors](contributors) Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1. 1. Overview The general quantum circuit for phase estimation is shown below. The top register contains $t$ 'counting' qubits, and the bottom contains qubits in the state $|\psi\rangle$: 1.1 Intuition The quantum phase estimation algorithm uses phase kickback to write the phase of $U$ (in the Fourier basis) to the $t$ qubits in the counting register. We then use the inverse QFT to translate this from the Fourier basis into the computational basis, which we can measure.We remember (from the QFT chapter) that in the Fourier basis the topmost qubit completes one full rotation when counting between $0$ and $2^t$. To count to a number, $x$ between $0$ and $2^t$, we rotate this qubit by $\tfrac{x}{2^t}$ around the z-axis. For the next qubit we rotate by $\tfrac{2x}{2^t}$, then $\tfrac{4x}{2^t}$ for the third qubit.When we use a qubit to control the $U$-gate, the qubit will turn (due to kickback) proportionally to the phase $e^{2i\pi\theta}$. We can use successive $CU$-gates to repeat this rotation an appropriate number of times until we have encoded the phase theta as a number between $0$ and $2^t$ in the Fourier basis. Then we simply use $QFT^\dagger$ to convert this into the computational basis. 1.2 Mathematical Basis As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:0. **Setup**: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form the counting register on which we will store the value $2^n\theta$: $$ \psi_0 = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$ 1. **Superposition**: Apply a $n$-bit Hadamard gate operation $H^{\otimes n}$ on the counting register: $$ \psi_1 = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$2. **Controlled Unitary Operations**: We need to introduce the controlled unitary $C-U$ that applies the unitary operator $U$ on the target register only if its corresponding control bit is $|1\rangle$. 
Since $U$ is a unitary operatory with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means: $$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$Applying all the $n$ controlled operations $C − U^{2^j}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:\begin{aligned}\psi_{2} & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\& = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle\end{aligned}where $k$ denotes the integer representation of n-bit binary numbers. 3. **Inverse Fourier Transform**: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on [Quantum Fourier Transform and its Qiskit Implementation](qft.ipynb). Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as$$QFT\vert x \rangle = \frac{1}{2^\frac{n}{2}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) $$Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step 2 above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the ancilla register. Doing so, we find$$\vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle$$ 4. **Measurement**: The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring in the computational basis gives the phase in the ancilla register with high probability: $$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1]. 2. Example: T-gate Let’s take a gate we know well, the $T$-gate, and use Quantum Phase Estimation to estimate its phase. You will remember that the $T$-gate adds a phase of $e^\frac{i\pi}{4}$ to the state $|1\rangle$:$$ T|1\rangle = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{4}\\ \end{bmatrix}\begin{bmatrix}0\\1\\ \end{bmatrix}= e^\frac{i\pi}{4}|1\rangle $$Since QPE will give us $\theta$ where:$$ T|1\rangle = e^{2i\pi\theta}|1\rangle $$We expect to find:$$\theta = \frac{1}{8}$$In this example we will use three qubits and obtain an _exact_ result (not an estimation!) 2.1 Creating the Circuit Let's first prepare our environment:
###Code
#initialization
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'svg' # Makes the images look nice
import numpy as np
import math
# importing Qiskit
from qiskit import IBMQ, Aer
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
# import basic plot tools
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
###Markdown
Now, set up the quantum circuit. We will use four qubits -- qubits 0 to 2 as counting qubits, and qubit 3 as the eigenstate of the unitary operator ($T$). We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate:
###Code
qpe = QuantumCircuit(4, 3)
qpe.x(3)
qpe.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Next, we apply Hadamard gates to the counting qubits:
###Code
for qubit in range(3):
qpe.h(qubit)
qpe.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Next we perform the controlled unitary operations. **Remember:** Qiskit orders its qubits the opposite way round to the image above.
###Code
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe.cu1(math.pi/4, counting_qubit, 3); # This is C-U
repetitions *= 2
qpe.draw(output='mpl')
###Output
_____no_output_____
###Markdown
We apply the inverse quantum Fourier transformation to convert the state of the counting register. Here we provide the code for $QFT^\dagger$:
###Code
def qft_dagger(circ, n):
"""n-qubit QFTdagger the first n qubits in circ"""
# Don't forget the Swaps!
for qubit in range(n//2):
circ.swap(qubit, n-qubit-1)
for j in range(n):
for m in range(j):
circ.cu1(-math.pi/float(2**(j-m)), m, j)
circ.h(j)
###Output
_____no_output_____
###Markdown
We then measure the counting register. The swaps at the start of `qft_dagger` already take care of the qubit ordering, so each counting qubit is simply measured into the classical bit with the same index:
###Code
qpe.barrier()
# Apply inverse QFT
qft_dagger(qpe, 3)
# Measure
qpe.barrier()
for n in range(3):
qpe.measure(n,n)
qpe.draw(output="mpl")
###Output
_____no_output_____
###Markdown
2.2 Results
###Code
backend = Aer.get_backend('qasm_simulator')
shots = 2048
results = execute(qpe, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
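###Markdown
Another way to convince ourselves that the pre-measurement state really is $\vert 2^n\theta\rangle\otimes\vert\psi\rangle$ is to rebuild the circuit without measurements and inspect the statevector directly. The cell below is a sketch using Aer's statevector simulator and the same `cu1`/`qft_dagger` construction as above; `sv_qpe` is just an illustrative name.
###Code
# Rebuild the T-gate QPE circuit without measurements and look at the statevector.
sv_qpe = QuantumCircuit(4)
sv_qpe.x(3)
for qubit in range(3):
    sv_qpe.h(qubit)
repetitions = 1
for counting_qubit in range(3):
    for _ in range(repetitions):
        sv_qpe.cu1(math.pi/4, counting_qubit, 3)
    repetitions *= 2
qft_dagger(sv_qpe, 3)
sv_backend = Aer.get_backend('statevector_simulator')
state = execute(sv_qpe, backend=sv_backend).result().get_statevector()
# Print the basis states with non-negligible amplitude (Qiskit orders bits as q3 q2 q1 q0,
# so the counting result '001' should appear in the three rightmost positions).
for idx, amp in enumerate(state):
    if abs(amp) > 1e-6:
        print(format(idx, '04b'), np.round(amp, 3))
###Output
_____no_output_____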
###Markdown
We see we get one result (`001`) with certainty, which translates to the decimal: `1`. We now need to divide our result (`1`) by $2^n$ to get $\theta$:$$ \theta = \frac{1}{2^3} = \frac{1}{8} $$This is exactly the result we expected! 3. Example: Getting More Precision 3.1 The Problem Instead of a $T$-gate, let’s use a gate with $\theta = \frac{1}{3}$. We set up our circuit as with the last example:
###Code
# Create and set up circuit
qpe2 = QuantumCircuit(4, 3)
# Apply H-Gates to counting qubits:
for qubit in range(3):
qpe2.h(qubit)
# Prepare our eigenstate |psi>:
qpe2.x(3)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe2.cu1(angle, counting_qubit, 3);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe2, 3)
# Measure of course!
for n in range(3):
qpe2.measure(n,n)
qpe2.draw(output='mpl')
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe2, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We are expecting the result $\theta = 0.3333\dots$, and we see our most likely results are `010(bin) = 2(dec)` and `011(bin) = 3(dec)`. These two results would tell us that $\theta = 0.25$ (off by 25%) and $\theta = 0.375$ (off by 13%) respectively. The true value of $\theta$ lies between the values we can get from our counting bits, and this gives us uncertainty and imprecision. 3.2 The Solution To get more precision we simply add more counting qubits. We are going to add two more counting qubits:
###Code
# Create and set up circuit
qpe3 = QuantumCircuit(6, 5)
# Apply H-Gates to counting qubits:
for qubit in range(5):
qpe3.h(qubit)
# Prepare our eigenstate |psi>:
qpe3.x(5)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(5):
for i in range(repetitions):
qpe3.cu1(angle, counting_qubit, 5);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe3, 5)
# Measure of course!
for n in range(5):
qpe3.measure(n,n)
qpe3.draw(output='mpl')
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe3, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
The two most likely measurements are now `01011` (decimal 11) and `01010` (decimal 10). Measuring these results would tell us $\theta$ is:$$\theta = \frac{11}{2^5} = 0.344,\;\text{ or }\;\; \theta = \frac{10}{2^5} = 0.313$$These two results differ from $\frac{1}{3}$ by 3% and 6% respectively. A much better precision! 4. Experiment with Real Devices 4.1 Circuit from 2.1 We can run the circuit in section 2.1 on a real device, let's remind ourselves of the circuit:
###Code
qpe.draw(output='mpl')
# Load our saved IBMQ accounts and get the least busy backend device with at least 4 qubits
IBMQ.load_account()
from qiskit.providers.ibmq import least_busy
from qiskit.tools.monitor import job_monitor
provider = IBMQ.get_provider(hub='ibm-q-internal')
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 4 and not x.configuration().simulator and x.status().operational==True))
print("least busy backend: ", backend)
# Run with 2048 shots
shots = 2048
job = execute(qpe, backend=backend, shots=shots, optimization_level=3)
job_monitor(job)
# get the results from the computation
results = job.result()
answer = results.get_counts(qpe)
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We can hopefully see that the most likely result is `001`, which is the result we would expect from the simulator. Unlike the simulator, there is a probability of measuring something other than `001`; this is due to noise and gate errors in the quantum computer. 5. Exercises 1. Try the experiments above with different gates ($\text{CNOT}$, $S$, $T^\dagger$). What results do you expect? What results do you get? 2. Try the experiment with a $Y$-gate. Do you get the correct result? (Hint: Remember to make sure $|\psi\rangle$ is an eigenstate of $Y$!) 6. Looking Forward The quantum phase estimation algorithm may seem pointless, since we have to know $\theta$ to perform the controlled-$U$ operations on our quantum computer. We will see in later chapters that it is possible to create circuits for which we don’t know $\theta$, and for which learning $\theta$ can tell us something very useful (most famously, how to factor a number!). 7. References [1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA. 8. Contributors 03/20/2020 — Hwajung Kang (@HwajungKang) — Fixed inconsistencies with qubit ordering
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Quantum Phase Estimation Contents1. [Overview](overview) 1.1 [Intuition](intuition) 1.2 [Mathematical Basis](maths)2. [Example: T-gate](example_t_gate) 2.1 [Creating the Circuit](creating_the_circuit) 2.2 [Results](results) 3. [Getting More Precision](getting_more_precision) 3.1 [The Problem](the_problem) 3.2 [The Solution](the_solution) 4. [Experimenting on Real Devices](real_devices) 4.1 [With the Circuit from 2.1](circuit_2.1) 5. [Exercises](exercises) 6. [Looking Forward](looking_forward)7. [References](references)8. [Contributors](contributors) Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1. 1. Overview The general quantum circuit for phase estimation is shown below. The top register contains $t$ 'counting' qubits, and the bottom contains qubits in the state $|\psi\rangle$: 1.1 Intuition The quantum phase estimation algorithm uses phase kickback to write the phase of $U$ (in the Fourier basis) to the $t$ qubits in the counting register. We then use the inverse QFT to translate this from the Fourier basis into the computational basis, which we can measure.We remember (from the QFT chapter) that in the Fourier basis the topmost qubit completes one full rotation when counting between $0$ and $2^t$. To count to a number, $x$ between $0$ and $2^t$, we rotate this qubit by $\tfrac{x}{2^t}$ around the z-axis. For the next qubit we rotate by $\tfrac{2x}{2^t}$, then $\tfrac{4x}{2^t}$ for the third qubit.When we use a qubit to control the $U$-gate, the qubit will turn (due to kickback) proportionally to the phase $e^{2i\pi\theta}$. We can use successive $CU$-gates to repeat this rotation an appropriate number of times until we have encoded the phase theta as a number between $0$ and $2^t$ in the Fourier basis. Then we simply use $QFT^\dagger$ to convert this into the computational basis. 1.2 Mathematical Foundation As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:i. **Setup**: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form the counting register on which we will store the value $2^n\theta$: $$ |\psi_0\rangle = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$ ii. **Superposition**: Apply a $n$-bit Hadamard gate operation $H^{\otimes n}$ on the counting register: $$ |\psi_1\rangle = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$iii. **Controlled Unitary Operations**: We need to introduce the controlled unitary $CU$ that applies the unitary operator $U$ on the target register only if its corresponding control bit is $|1\rangle$. 
Since $U$ is a unitary operator with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means: $$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$Applying all the $n$ controlled operations $CU^{2^j}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:\begin{aligned}|\psi_{2}\rangle & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\& = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle\end{aligned}where $k$ denotes the integer representation of n-bit binary numbers. iv. **Inverse Fourier Transform**: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on [Quantum Fourier Transform and its Qiskit Implementation](https://qiskit.org/textbook/ch-algorithms/quantum-fourier-transform.html). Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as$$QFT\vert x \rangle = \frac{1}{2^\frac{n}{2}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) $$Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step 2 above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the auxiliary register. Doing so, we find$$\vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle$$ v. **Measurement**: The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring in the computational basis gives the phase in the auxiliary register with high probability: $$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1]. 2. Example: T-gate Let’s take a gate we know well, the $T$-gate, and use Quantum Phase Estimation to estimate its phase. You will remember that the $T$-gate adds a phase of $e^\frac{i\pi}{4}$ to the state $|1\rangle$:$$ T|1\rangle = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{4}\\ \end{bmatrix}\begin{bmatrix}0\\1\\ \end{bmatrix}= e^\frac{i\pi}{4}|1\rangle $$Since QPE will give us $\theta$ where:$$ T|1\rangle = e^{2i\pi\theta}|1\rangle $$We expect to find:$$\theta = \frac{1}{8}$$In this example we will use three qubits and obtain an _exact_ result (not an estimation!) 2.1 Creating the Circuit Let's first prepare our environment:
###Code
#initialization
import matplotlib.pyplot as plt
import numpy as np
import math
# importing Qiskit
from qiskit import IBMQ, Aer, transpile, assemble
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
# import basic plot tools
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
###Markdown
Now, set up the quantum circuit. We will use four qubits -- qubits 0 to 2 as counting qubits, and qubit 3 as the eigenstate of the unitary operator ($T$). We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate:
###Code
qpe = QuantumCircuit(4, 3)
qpe.x(3)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next, we apply Hadamard gates to the counting qubits:
###Code
for qubit in range(3):
qpe.h(qubit)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next we perform the controlled unitary operations. **Remember:** Qiskit orders its qubits the opposite way round to the image above.
###Code
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe.cp(math.pi/4, counting_qubit, 3); # This is CU
repetitions *= 2
qpe.draw()
###Output
_____no_output_____
###Markdown
We apply the inverse quantum Fourier transformation to convert the state of the counting register. Here we provide the code for $QFT^\dagger$:
###Code
def qft_dagger(qc, n):
"""n-qubit QFTdagger the first n qubits in circ"""
# Don't forget the Swaps!
for qubit in range(n//2):
qc.swap(qubit, n-qubit-1)
for j in range(n):
for m in range(j):
qc.cp(-math.pi/float(2**(j-m)), m, j)
qc.h(j)
###Output
_____no_output_____
###Markdown
We then measure the counting register:
###Code
qpe.barrier()
# Apply inverse QFT
qft_dagger(qpe, 3)
# Measure
qpe.barrier()
for n in range(3):
qpe.measure(n,n)
qpe.draw()
###Output
_____no_output_____
###Markdown
2.2 Results
###Code
aer_sim = Aer.get_backend('aer_simulator')
shots = 2048
t_qpe = transpile(qpe, aer_sim)
qobj = assemble(t_qpe, shots=shots)
results = aer_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
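###Markdown
Before reading the phase off the histogram by hand, here is a small sanity-check cell (a minimal sketch, assuming the `answer` counts dictionary produced by the cell above): it takes the most frequent bitstring and divides its integer value by $2^n$, with $n = 3$ counting qubits.
###Code
# A minimal sketch: convert the most frequent measured bitstring into a phase estimate.
# Assumes `answer` is the counts dictionary returned by the simulation cell above.
most_likely = max(answer, key=answer.get)      # e.g. '001'
theta_estimate = int(most_likely, 2) / 2**3    # divide by 2^n, here n = 3 counting qubits
print(most_likely, theta_estimate)
###Output
_____no_output_____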
###Markdown
We see we get one result (`001`) with certainty, which translates to the decimal: `1`. We now need to divide our result (`1`) by $2^n$ to get $\theta$:$$ \theta = \frac{1}{2^3} = \frac{1}{8} $$This is exactly the result we expected! 3. Example: Getting More Precision 3.1 The Problem Instead of a $T$-gate, let’s use a gate with $\theta = \frac{1}{3}$. We set up our circuit as with the last example:
###Code
# Create and set up circuit
qpe2 = QuantumCircuit(4, 3)
# Apply H-Gates to counting qubits:
for qubit in range(3):
qpe2.h(qubit)
# Prepare our eigenstate |psi>:
qpe2.x(3)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe2.cp(angle, counting_qubit, 3);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe2, 3)
# Measure of course!
for n in range(3):
qpe2.measure(n,n)
qpe2.draw()
# Let's see the results!
aer_sim = Aer.get_backend('aer_simulator')
shots = 4096
t_qpe2 = transpile(qpe2, aer_sim)
qobj = assemble(t_qpe2, shots=shots)
results = aer_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
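###Markdown
The spread we see in the histogram is not simulator noise; it follows directly from the expression for $\vert\psi_3\rangle$ in section 1.2. Here is a plain-numpy sketch (independent of the circuit above) of the ideal outcome probabilities for $\theta = 1/3$ and $n = 3$ counting qubits:
###Code
# Ideal measurement probabilities predicted by the |psi_3> expression in section 1.2.
import numpy as np
n, theta = 3, 1/3
N = 2**n
k = np.arange(N)
# Amplitude of measuring |x> after the inverse QFT
amps = np.array([np.exp(2j*np.pi*k*(theta - x/N)).sum() / N for x in range(N)])
print(np.round(np.abs(amps)**2, 3))   # largest weights at x = 3 and x = 2, i.e. theta ~ 0.375 and 0.25
###Output
_____no_output_____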
###Markdown
We are expecting the result $\theta = 0.3333\dots$, and we see our most likely results are `010(bin) = 2(dec)` and `011(bin) = 3(dec)`. These two results would tell us that $\theta = 0.25$ (off by 25%) and $\theta = 0.375$ (off by 13%) respectively. The true value of $\theta$ lies between the values we can get from our counting bits, and this gives us uncertainty and imprecision. 3.2 The Solution To get more precision we simply add more counting qubits. We are going to add two more counting qubits:
###Code
# Create and set up circuit
qpe3 = QuantumCircuit(6, 5)
# Apply H-Gates to counting qubits:
for qubit in range(5):
qpe3.h(qubit)
# Prepare our eigenstate |psi>:
qpe3.x(5)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(5):
for i in range(repetitions):
qpe3.cp(angle, counting_qubit, 5);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe3, 5)
# Measure of course!
qpe3.barrier()
for n in range(5):
qpe3.measure(n,n)
qpe3.draw()
# Let's see the results!
aer_sim = Aer.get_backend('aer_simulator')
shots = 4096
t_qpe3 = transpile(qpe3, aer_sim)
qobj = assemble(t_qpe3, shots=shots)
results = aer_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
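###Markdown
To see how quickly the best available estimate approaches $\theta = 1/3$ as counting qubits are added, here is a short plain-Python sketch of the rounding error $\lvert\mathrm{round}(2^n\theta)/2^n - \theta\rvert$ for a few values of $n$:
###Code
# Best-case estimation error when theta = 1/3 is rounded to the nearest multiple of 1/2^n.
theta = 1/3
for n in range(3, 9):
    best = round(theta * 2**n) / 2**n
    print(n, best, abs(best - theta))
###Output
_____no_output_____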
###Markdown
The two most likely measurements are now `01011` (decimal 11) and `01010` (decimal 10). Measuring these results would tell us $\theta$ is:$$\theta = \frac{11}{2^5} = 0.344,\;\text{ or }\;\; \theta = \frac{10}{2^5} = 0.313$$These two results differ from $\frac{1}{3}$ by 3% and 6% respectively. A much better precision! 4. Experiment with Real Devices 4.1 Circuit from 2.1 We can run the circuit in section 2.1 on a real device, let's remind ourselves of the circuit:
###Code
qpe.draw()
IBMQ.load_account()
from qiskit.tools.monitor import job_monitor
provider = IBMQ.get_provider(hub='ibm-q')
santiago = provider.get_backend('ibmq_santiago')
# Run with 2048 shots
shots = 2048
t_qpe = transpile(qpe, santiago, optimization_level=3)
job = santiago.run(t_qpe, shots=shots)
job_monitor(job)
# get the results from the computation
results = job.result()
answer = results.get_counts(qpe)
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We can hopefully see that the most likely result is `001` which is the result we would expect from the simulator. Unlike the simulator, there is a probability of measuring something other than `001`, this is due to noise and gate errors in the quantum computer. 5. Exercises 1. Try the experiments above with different gates ($\text{CNOT}$, Controlled-$S$, Controlled-$T^\dagger$), what results do you expect? What results do you get?2. Try the experiment with a Controlled-$Y$-gate, do you get the result you expected? (Hint: Remember to make sure $|\psi\rangle$ is an eigenstate of $Y$!) 6. Looking Forward The quantum phase estimation algorithm may seem pointless, since we have to know $\theta$ to perform the controlled-$U$ operations on our quantum computer. We will see in later chapters that it is possible to create circuits for which we don’t know $\theta$, and for which learning theta can tell us something very useful (most famously how to factor a number!) 7. References [1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA. 8. Contributors 03/20/2020 — Hwajung Kang (@HwajungKang) — Fixed inconsistencies with qubit ordering
###Code
import qiskit.tools.jupyter
%qiskit_version_table
###Output
_____no_output_____
###Markdown
Quantum Phase Estimation Contents1. [Overview](overview) 1.1 [Intuition](intuition) 1.2 [Mathematical Basis](maths)2. [Example: T-gate](example_t_gate) 2.1 [Creating the Circuit](creating_the_circuit) 2.2 [Results](results) 3. [Getting More Precision](getting_more_precision) 3.1 [The Problem](the_problem) 3.2 [The Solution](the_solution) 4. [Experimenting on Real Devices](real_devices) 4.1 [With the Circuit from 2.1](circuit_2.1) 5. [Exercises](exercises) 6. [Looking Forward](looking_forward)7. [References](references)8. [Contributors](contributors) Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1. 1. Overview The general quantum circuit for phase estimation is shown below. The top register contains $t$ 'counting' qubits, and the bottom contains qubits in the state $|\psi\rangle$: 1.1 Intuition The quantum phase estimation algorithm uses phase kickback to write the phase of $U$ (in the Fourier basis) to the $t$ qubits in the counting register. We then use the inverse QFT to translate this from the Fourier basis into the computational basis, which we can measure.We remember (from the QFT chapter) that in the Fourier basis the topmost qubit completes one full rotation when counting between $0$ and $2^t$. To count to a number, $x$ between $0$ and $2^t$, we rotate this qubit by $\tfrac{x}{2^t}$ around the z-axis. For the next qubit we rotate by $\tfrac{2x}{2^t}$, then $\tfrac{4x}{2^t}$ for the third qubit.When we use a qubit to control the $U$-gate, the qubit will turn (due to kickback) proportionally to the phase $e^{2i\pi\theta}$. We can use successive $CU$-gates to repeat this rotation an appropriate number of times until we have encoded the phase theta as a number between $0$ and $2^t$ in the Fourier basis. Then we simply use $QFT^\dagger$ to convert this into the computational basis. 1.2 Mathematical Foundation As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:i. **Setup**: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form the counting register on which we will store the value $2^n\theta$: $$ |\psi_0\rangle = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$ ii. **Superposition**: Apply a $n$-bit Hadamard gate operation $H^{\otimes n}$ on the counting register: $$ |\psi_1\rangle = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$iii. **Controlled Unitary Operations**: We need to introduce the controlled unitary $C-U$ that applies the unitary operator $U$ on the target register only if its corresponding control bit is $|1\rangle$. 
Since $U$ is a unitary operator with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means: $$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$Applying all the $n$ controlled operations $C − U^{2^j}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:\begin{aligned}|\psi_{2}\rangle & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\& = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle\end{aligned}where $k$ denotes the integer representation of n-bit binary numbers. iv. **Inverse Fourier Transform**: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on [Quantum Fourier Transform and its Qiskit Implementation](https://qiskit.org/textbook/ch-algorithms/quantum-fourier-transform.html). Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as$$QFT\vert x \rangle = \frac{1}{2^\frac{n}{2}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) $$Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step 2 above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the auxiliary register. Doing so, we find$$\vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle$$ v. **Measurement**: The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring in the computational basis gives the phase in the auxiliary register with high probability: $$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1]. 2. Example: T-gate Let’s take a gate we know well, the $T$-gate, and use Quantum Phase Estimation to estimate its phase. You will remember that the $T$-gate adds a phase of $e^\frac{i\pi}{4}$ to the state $|1\rangle$:$$ T|1\rangle = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{4}\\ \end{bmatrix}\begin{bmatrix}0\\1\\ \end{bmatrix}= e^\frac{i\pi}{4}|1\rangle $$Since QPE will give us $\theta$ where:$$ T|1\rangle = e^{2i\pi\theta}|1\rangle $$We expect to find:$$\theta = \frac{1}{8}$$In this example we will use three qubits and obtain an _exact_ result (not an estimation!) 2.1 Creating the Circuit Let's first prepare our environment:
###Code
#initialization
import matplotlib.pyplot as plt
import numpy as np
import math
# importing Qiskit
from qiskit import IBMQ, Aer, transpile, assemble
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
# import basic plot tools
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
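###Markdown
As a quick numerical check of the claim above that $T|1\rangle = e^{\frac{i\pi}{4}}|1\rangle$ (and hence $\theta = 1/8$), here is a minimal numpy sketch:
###Code
# Minimal numpy check: T|1> = e^{i*pi/4}|1>, so theta = 1/8 in U|psi> = e^{2*pi*i*theta}|psi>.
import numpy as np
T = np.array([[1, 0], [0, np.exp(1j*np.pi/4)]])
one = np.array([0, 1])
phase = (T @ one)[1]                    # amplitude picked up by |1>
print(np.angle(phase) / (2*np.pi))      # 0.125 = 1/8
###Output
_____no_output_____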
###Markdown
Now, set up the quantum circuit. We will use four qubits -- qubits 0 to 2 as counting qubits, and qubit 3 as the eigenstate of the unitary operator ($T$). We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate:
###Code
qpe = QuantumCircuit(4, 3)
qpe.x(3)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next, we apply Hadamard gates to the counting qubits:
###Code
for qubit in range(3):
qpe.h(qubit)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next we perform the controlled unitary operations. **Remember:** Qiskit orders its qubits the opposite way round to the image above.
###Code
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe.cp(math.pi/4, counting_qubit, 3); # This is C-U
repetitions *= 2
qpe.draw()
###Output
_____no_output_____
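###Markdown
The `repetitions *= 2` pattern above works because applying $U$ $2^j$ times multiplies the accumulated phase by $2^j$ (the relation $U^{2^{j}}|\psi\rangle = e^{2\pi i 2^{j}\theta}|\psi\rangle$ from section 1.2). A small numpy sketch of this for $U = T$:
###Code
# Sketch: repeated application of U = T doubles the phase written onto the control qubit.
import numpy as np
T = np.array([[1, 0], [0, np.exp(1j*np.pi/4)]])
one = np.array([0, 1])
for j in range(3):
    Uj = np.linalg.matrix_power(T, 2**j)
    theta_j = np.angle((Uj @ one)[1]) / (2*np.pi)
    print(j, theta_j % 1)   # 0.125, 0.25, 0.5 -> 2^j * theta (mod 1)
###Output
_____no_output_____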
###Markdown
We apply the inverse quantum Fourier transformation to convert the state of the counting register. Here we provide the code for $QFT^\dagger$:
###Code
def qft_dagger(qc, n):
"""n-qubit QFTdagger the first n qubits in circ"""
# Don't forget the Swaps!
for qubit in range(n//2):
qc.swap(qubit, n-qubit-1)
for j in range(n):
for m in range(j):
qc.cp(-math.pi/float(2**(j-m)), m, j)
qc.h(j)
###Output
_____no_output_____
###Markdown
We then measure the counting register:
###Code
qpe.barrier()
# Apply inverse QFT
qft_dagger(qpe, 3)
# Measure
qpe.barrier()
for n in range(3):
qpe.measure(n,n)
qpe.draw()
###Output
_____no_output_____
###Markdown
2.2 Results
###Code
aer_sim = Aer.get_backend('aer_simulator')
shots = 2048
t_qpe = transpile(qpe, aer_sim)
qobj = assemble(t_qpe, shots=shots)
results = aer_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We see we get one result (`001`) with certainty, which translates to the decimal: `1`. We now need to divide our result (`1`) by $2^n$ to get $\theta$:$$ \theta = \frac{1}{2^3} = \frac{1}{8} $$This is exactly the result we expected! 3. Example: Getting More Precision 3.1 The Problem Instead of a $T$-gate, let’s use a gate with $\theta = \frac{1}{3}$. We set up our circuit as with the last example:
###Code
# Create and set up circuit
qpe2 = QuantumCircuit(4, 3)
# Apply H-Gates to counting qubits:
for qubit in range(3):
qpe2.h(qubit)
# Prepare our eigenstate |psi>:
qpe2.x(3)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe2.cp(angle, counting_qubit, 3);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe2, 3)
# Measure of course!
for n in range(3):
qpe2.measure(n,n)
qpe2.draw()
# Let's see the results!
aer_sim = Aer.get_backend('aer_simulator')
shots = 4096
t_qpe2 = transpile(qpe2, aer_sim)
qobj = assemble(t_qpe2, shots=shots)
results = aer_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We are expecting the result $\theta = 0.3333\dots$, and we see our most likely results are `010(bin) = 2(dec)` and `011(bin) = 3(dec)`. These two results would tell us that $\theta = 0.25$ (off by 25%) and $\theta = 0.375$ (off by 13%) respectively. The true value of $\theta$ lies between the values we can get from our counting bits, and this gives us uncertainty and imprecision. 3.2 The Solution To get more precision we simply add more counting qubits. We are going to add two more counting qubits:
###Code
# Create and set up circuit
qpe3 = QuantumCircuit(6, 5)
# Apply H-Gates to counting qubits:
for qubit in range(5):
qpe3.h(qubit)
# Prepare our eigenstate |psi>:
qpe3.x(5)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(5):
for i in range(repetitions):
qpe3.cp(angle, counting_qubit, 5);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe3, 5)
# Measure of course!
qpe3.barrier()
for n in range(5):
qpe3.measure(n,n)
qpe3.draw()
# Let's see the results!
aer_sim = Aer.get_backend('aer_simulator')
shots = 4096
t_qpe3 = transpile(qpe3, aer_sim)
qobj = assemble(t_qpe3, shots=shots)
results = aer_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
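###Markdown
The percentages discussed below come from comparing the two candidate estimates against the true value of $\theta$; a one-line plain-Python check:
###Code
# Relative error of the two most likely estimates with 5 counting qubits.
theta = 1/3
for x in (11, 10):
    est = x / 2**5
    print(x, est, abs(est - theta) / theta)   # ~3% and ~6%
###Output
_____no_output_____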
###Markdown
The two most likely measurements are now `01011` (decimal 11) and `01010` (decimal 10). Measuring these results would tell us $\theta$ is:$$\theta = \frac{11}{2^5} = 0.344,\;\text{ or }\;\; \theta = \frac{10}{2^5} = 0.313$$These two results differ from $\frac{1}{3}$ by 3% and 6% respectively. A much better precision! 4. Experiment with Real Devices 4.1 Circuit from 2.1 We can run the circuit in section 2.1 on a real device, let's remind ourselves of the circuit:
###Code
qpe.draw()
IBMQ.load_account()
from qiskit.tools.monitor import job_monitor
provider = IBMQ.get_provider(hub='ibm-q')
santiago = provider.get_backend('ibmq_santiago')
# Run with 2048 shots
shots = 2048
t_qpe = transpile(qpe, santiago, optimization_level=3)
job = santiago.run(t_qpe, shots=shots)
job_monitor(job)
# get the results from the computation
results = job.result()
answer = results.get_counts(qpe)
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We can hopefully see that the most likely result is `001` which is the result we would expect from the simulator. Unlike the simulator, there is a probability of measuring something other than `001`, this is due to noise and gate errors in the quantum computer. 5. Exercises 1. Try the experiments above with different gates ($\text{CNOT}$, Controlled-$S$, Controlled-$T^\dagger$), what results do you expect? What results do you get?2. Try the experiment with a Controlled-$Y$-gate, do you get the result you expected? (Hint: Remember to make sure $|\psi\rangle$ is an eigenstate of $Y$!) 6. Looking Forward The quantum phase estimation algorithm may seem pointless, since we have to know $\theta$ to perform the controlled-$U$ operations on our quantum computer. We will see in later chapters that it is possible to create circuits for which we don’t know $\theta$, and for which learning theta can tell us something very useful (most famously how to factor a number!) 7. References [1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA. 8. Contributors 03/20/2020 — Hwajung Kang (@HwajungKang) — Fixed inconsistencies with qubit ordering
###Code
import qiskit.tools.jupyter
%qiskit_version_table
###Output
_____no_output_____
###Markdown
Quantum Phase Estimation Contents1. [Overview](overview) 1.1 [Intuition](intuition) 1.2 [Mathematical Basis](maths)2. [Example: T-gate](example_t_gate) 2.1 [Creating the Circuit](creating_the_circuit) 2.2 [Results](results) 3. [Getting More Precision](getting_more_precision) 3.1 [The Problem](the_problem) 3.2 [The Solution](the_solution) 4. [Experimenting on Real Devices](real_devices) 4.1 [With the Circuit from 2.1](circuit_2.1) 5. [Exercises](exercises) 6. [Looking Forward](looking_forward)7. [References](references)8. [Contributors](contributors) Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1. 1. Overview The general quantum circuit for phase estimation is shown below. The top register contains $t$ 'counting' qubits, and the bottom contains qubits in the state $|\psi\rangle$: 1.1 Intuition The quantum phase estimation algorithm uses phase kickback to write the phase of $U$ (in the Fourier basis) to the $t$ qubits in the counting register. We then use the inverse QFT to translate this from the Fourier basis into the computational basis, which we can measure.We remember (from the QFT chapter) that in the Fourier basis the topmost qubit completes one full rotation when counting between $0$ and $2^t$. To count to a number, $x$ between $0$ and $2^t$, we rotate this qubit by $\tfrac{x}{2^t}$ around the z-axis. For the next qubit we rotate by $\tfrac{2x}{2^t}$, then $\tfrac{4x}{2^t}$ for the third qubit.When we use a qubit to control the $U$-gate, the qubit will turn (due to kickback) proportionally to the phase $e^{2i\pi\theta}$. We can use successive $CU$-gates to repeat this rotation an appropriate number of times until we have encoded the phase theta as a number between $0$ and $2^t$ in the Fourier basis. Then we simply use $QFT^\dagger$ to convert this into the computational basis. 1.2 Mathematical Basis As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:0. **Setup**: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form the counting register on which we will store the value $2^n\theta$: $$ \psi_0 = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$ 1. **Superposition**: Apply a $n$-bit Hadamard gate operation $H^{\otimes n}$ on the counting register: $$ \psi_1 = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$2. **Controlled Unitary Operations**: We need to introduce the controlled unitary $C-U$ that applies the unitary operator $U$ on the target register only if its corresponding control bit is $|1\rangle$. 
Since $U$ is a unitary operator with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means: $$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$Applying all the $n$ controlled operations $CU^{2^j}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:\begin{aligned}\psi_{2} & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\& = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle\end{aligned}where $k$ denotes the integer representation of n-bit binary numbers. 3. **Inverse Fourier Transform**: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on [Quantum Fourier Transform and its Qiskit Implementation](qft.ipynb). Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as$$QFT\vert x \rangle = \frac{1}{2^\frac{n}{2}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) $$Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step 2 above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the ancilla register. Doing so, we find$$\vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle$$ 4. **Measurement**: The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring in the computational basis gives the phase in the ancilla register with high probability: $$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1]. 2. Example: T-gate Let’s take a gate we know well, the $T$-gate, and use Quantum Phase Estimation to estimate its phase. You will remember that the $T$-gate adds a phase of $e^\frac{i\pi}{4}$ to the state $|1\rangle$:$$ T|1\rangle = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{4}\\ \end{bmatrix}\begin{bmatrix}0\\1\\ \end{bmatrix}= e^\frac{i\pi}{4}|1\rangle $$Since QPE will give us $\theta$ where:$$ T|1\rangle = e^{2i\pi\theta}|1\rangle $$We expect to find:$$\theta = \frac{1}{8}$$In this example we will use three qubits and obtain an _exact_ result (not an estimation!) 2.1 Creating the Circuit Let's first prepare our environment:
###Code
#initialization
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'svg' # Makes the images look nice
import numpy as np
import math
# importing Qiskit
from qiskit import IBMQ, Aer
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
# import basic plot tools
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
###Markdown
Now, set up the quantum circuit. We will use four qubits -- qubits 0 to 2 as counting qubits, and qubit 3 as the eigenstate of the unitary operator ($T$). We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate:
###Code
qpe = QuantumCircuit(4, 3)
qpe.x(3)
qpe.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Next, we apply Hadamard gates to the counting qubits:
###Code
for qubit in range(3):
qpe.h(qubit)
qpe.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Next we perform the controlled unitary operations. **Remember:** Qiskit orders its qubits the opposite way round to the image above.
###Code
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe.cu1(math.pi/4, counting_qubit, 3); # This is C-U
repetitions *= 2
qpe.draw(output='mpl')
###Output
_____no_output_____
###Markdown
We apply the inverse quantum Fourier transformation to convert the state of the counting register. Here we provide the code for $QFT^\dagger$:
###Code
def qft_dagger(circ, n):
"""n-qubit QFTdagger the first n qubits in circ"""
# Don't forget the Swaps!
for qubit in range(n//2):
circ.swap(qubit, n-qubit-1)
for j in range(n):
for m in range(j):
circ.cu1(-math.pi/float(2**(j-m)), m, j)
circ.h(j)
###Output
_____no_output_____
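###Markdown
As a sketch of what this inverse QFT does, independent of the circuit above: in plain numpy, it maps the phase-encoded state $\frac{1}{\sqrt{N}}\sum_k e^{2\pi i\theta k}|k\rangle$ onto a peak at $x = N\theta$. For $\theta = 1/8$ and $n = 3$ the peak is exact:
###Code
# Plain-numpy sketch of the inverse QFT acting on the phase-encoded state from section 1.2.
import numpy as np
n, theta = 3, 1/8
N = 2**n
k = np.arange(N)
state = np.exp(2j*np.pi*theta*k) / np.sqrt(N)        # counting register before QFT-dagger
F = np.exp(2j*np.pi*np.outer(k, k)/N) / np.sqrt(N)   # QFT matrix
out = F.conj().T @ state                             # apply the inverse QFT
print(np.round(np.abs(out)**2, 3))                   # all probability on index 1 = N * theta
###Output
_____no_output_____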
###Markdown
We then measure the counting register. The swaps inside `qft_dagger` have already restored the usual qubit ordering, so each counting qubit is measured straight into the classical bit with the same index:
###Code
qpe.barrier()
# Apply inverse QFT
qft_dagger(qpe, 3)
# Measure
qpe.barrier()
for n in range(3):
qpe.measure(n,n)
qpe.draw(output="mpl")
###Output
_____no_output_____
###Markdown
2.2 Results
###Code
backend = Aer.get_backend('qasm_simulator')
shots = 2048
results = execute(qpe, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We see we get one result (`001`) with certainty, which translates to the decimal: `1`. We now need to divide our result (`1`) by $2^n$ to get $\theta$:$$ \theta = \frac{1}{2^3} = \frac{1}{8} $$This is exactly the result we expected! 3. Example: Getting More Precision 3.1 The Problem Instead of a $T$-gate, let’s use a gate with $\theta = \frac{1}{3}$. We set up our circuit as with the last example:
###Code
# Create and set up circuit
qpe2 = QuantumCircuit(4, 3)
# Apply H-Gates to counting qubits:
for qubit in range(3):
qpe2.h(qubit)
# Prepare our eigenstate |psi>:
qpe2.x(3)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe2.cu1(angle, counting_qubit, 3);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe2, 3)
# Measure of course!
for n in range(3):
qpe2.measure(n,n)
qpe2.draw(output='mpl')
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe2, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We are expecting the result $\theta = 0.3333\dots$, and we see our most likely results are `010(bin) = 2(dec)` and `011(bin) = 3(dec)`. These two results would tell us that $\theta = 0.25$ (off by 25%) and $\theta = 0.375$ (off by 13%) respectively. The true value of $\theta$ lies between the values we can get from our counting bits, and this gives us uncertainty and imprecision. 3.2 The Solution To get more precision we simply add more counting qubits. We are going to add two more counting qubits:
###Code
# Create and set up circuit
qpe3 = QuantumCircuit(6, 5)
# Apply H-Gates to counting qubits:
for qubit in range(5):
qpe3.h(qubit)
# Prepare our eigenstate |psi>:
qpe3.x(5)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(5):
for i in range(repetitions):
qpe3.cu1(angle, counting_qubit, 5);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe3, 5)
# Measure of course!
for n in range(5):
qpe3.measure(n,n)
qpe3.draw(output='mpl')
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe3, backend=backend, shots=shots).result()
answer = results.get_counts()
%config InlineBackend.figure_format = 'svg'
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
The two most likely measurements are now `01011` (decimal 11) and `01010` (decimal 10). Measuring these results would tell us $\theta$ is:$$\theta = \frac{11}{2^5} = 0.344,\;\text{ or }\;\; \theta = \frac{10}{2^5} = 0.313$$These two results differ from $\frac{1}{3}$ by 3% and 6% respectively. A much better precision! 4. Experiment with Real Devices 4.1 Circuit from 2.1 We can run the circuit in section 2.1 on a real device, let's remind ourselves of the circuit:
###Code
qpe.draw(output='mpl')
# Load our saved IBMQ account and get the least busy real backend with at least 4 qubits
IBMQ.load_account()
from qiskit.providers.ibmq import least_busy
from qiskit.tools.monitor import job_monitor
provider = IBMQ.get_provider(hub='ibm-q')
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 4 and not x.configuration().simulator and x.status().operational==True))
print("least busy backend: ", backend)
# Run with 2048 shots
shots = 2048
job = execute(qpe, backend=backend, shots=shots, optimization_level=3)
job_monitor(job)
# get the results from the computation
results = job.result()
answer = results.get_counts(qpe)
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We can hopefully see that the most likely result is `001` which is the result we would expect from the simulator. Unlike the simulator, there is a probability of measuring something other than `001`, this is due to noise and gate errors in the quantum computer. 5. Exercises 1. Try the experiments above with different gates ($\text{CNOT}$, $S$, $T^\dagger$), what results do you expect? What results do you get?2. Try the experiment with a $Y$-gate, do you get the correct result? (Hint: Remember to make sure $|\psi\rangle$ is an eigenstate of $Y$!) 6. Looking Forward The quantum phase estimation algorithm may seem pointless, since we have to know $\theta$ to perform the controlled-$U$ operations on our quantum computer. We will see in later chapters that it is possible to create circuits for which we don’t know $\theta$, and for which learning theta can tell us something very useful (most famously how to factor a number!) 7. References [1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA. 8. Contributors 03/20/2020 — Hwajung Kang (@HwajungKang) — Fixed inconsistencies with qubit ordering
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Quantum Phase Estimation Contents1. [Overview](overview) 1.1 [Intuition](intuition) 1.2 [Mathematical Basis](maths)2. [Example: T-gate](example_t_gate) 2.1 [Creating the Circuit](creating_the_circuit) 2.2 [Results](results) 3. [Getting More Precision](getting_more_precision) 3.1 [The Problem](the_problem) 3.2 [The Solution](the_solution) 4. [Experimenting on Real Devices](real_devices) 4.1 [With the Circuit from 2.1](circuit_2.1) 5. [Exercises](exercises) 6. [Looking Forward](looking_forward)7. [References](references)8. [Contributors](contributors) Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1. 1. Overview The general quantum circuit for phase estimation is shown below. The top register contains $t$ 'counting' qubits, and the bottom contains qubits in the state $|\psi\rangle$: 1.1 Intuition The quantum phase estimation algorithm uses phase kickback to write the phase of $U$ (in the Fourier basis) to the $t$ qubits in the counting register. We then use the inverse QFT to translate this from the Fourier basis into the computational basis, which we can measure.We remember (from the QFT chapter) that in the Fourier basis the topmost qubit completes one full rotation when counting between $0$ and $2^t$. To count to a number, $x$ between $0$ and $2^t$, we rotate this qubit by $\tfrac{x}{2^t}$ around the z-axis. For the next qubit we rotate by $\tfrac{2x}{2^t}$, then $\tfrac{4x}{2^t}$ for the third qubit.When we use a qubit to control the $U$-gate, the qubit will turn (due to kickback) proportionally to the phase $e^{2i\pi\theta}$. We can use successive $CU$-gates to repeat this rotation an appropriate number of times until we have encoded the phase theta as a number between $0$ and $2^t$ in the Fourier basis. Then we simply use $QFT^\dagger$ to convert this into the computational basis. 1.2 Mathematical Foundation As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:i. **Setup**: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form the counting register on which we will store the value $2^n\theta$: $$ \psi_0 = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$ ii. **Superposition**: Apply a $n$-bit Hadamard gate operation $H^{\otimes n}$ on the counting register: $$ \psi_1 = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$iii. **Controlled Unitary Operations**: We need to introduce the controlled unitary $C-U$ that applies the unitary operator $U$ on the target register only if its corresponding control bit is $|1\rangle$. 
Since $U$ is a unitary operator with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means: $$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$Applying all the $n$ controlled operations $C − U^{2^j}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:\begin{aligned}\psi_{2} & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\& = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle\end{aligned}where $k$ denotes the integer representation of n-bit binary numbers. iv. **Inverse Fourier Transform**: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on [Quantum Fourier Transform and its Qiskit Implementation](https://qiskit.org/textbook/ch-algorithms/quantum-fourier-transform.html). Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as$$QFT\vert x \rangle = \frac{1}{2^\frac{n}{2}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) $$Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step 2 above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the auxiliary register. Doing so, we find$$\vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle$$ v. **Measurement**: The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring in the computational basis gives the phase in the auxiliary register with high probability: $$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1]. 2. Example: T-gate Let’s take a gate we know well, the $T$-gate, and use Quantum Phase Estimation to estimate its phase. You will remember that the $T$-gate adds a phase of $e^\frac{i\pi}{4}$ to the state $|1\rangle$:$$ T|1\rangle = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{4}\\ \end{bmatrix}\begin{bmatrix}0\\1\\ \end{bmatrix}= e^\frac{i\pi}{4}|1\rangle $$Since QPE will give us $\theta$ where:$$ T|1\rangle = e^{2i\pi\theta}|1\rangle $$We expect to find:$$\theta = \frac{1}{8}$$In this example we will use three qubits and obtain an _exact_ result (not an estimation!) 2.1 Creating the Circuit Let's first prepare our environment:
###Code
#initialization
import matplotlib.pyplot as plt
import numpy as np
import math
# importing Qiskit
from qiskit import IBMQ, Aer, transpile, assemble
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
# import basic plot tools
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
###Markdown
Now, set up the quantum circuit. We will use four qubits -- qubits 0 to 2 as counting qubits, and qubit 3 as the eigenstate of the unitary operator ($T$). We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate:
###Code
qpe = QuantumCircuit(4, 3)
qpe.x(3)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next, we apply Hadamard gates to the counting qubits:
###Code
for qubit in range(3):
qpe.h(qubit)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next we perform the controlled unitary operations. **Remember:** Qiskit orders its qubits the opposite way round to the image above.
###Code
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe.cp(math.pi/4, counting_qubit, 3); # This is C-U
repetitions *= 2
qpe.draw()
###Output
_____no_output_____
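###Markdown
To see the kickback at work on its own, here is a minimal two-qubit sketch (assuming `qiskit.quantum_info.Statevector` is available; it is not part of the QPE circuit above): with the target prepared in $|1\rangle$, a single controlled-phase writes $e^{\frac{i\pi}{4}}$ onto the $|1\rangle$ amplitude of the control qubit.
###Code
# Minimal phase-kickback sketch, separate from the QPE circuit above.
import math
from qiskit.quantum_info import Statevector
kick = QuantumCircuit(2)
kick.x(1)                   # target |psi> = |1>
kick.h(0)                   # control in superposition
kick.cp(math.pi/4, 0, 1)    # one controlled-U with U = T
print(Statevector.from_instruction(kick))
# The control ends up in (|0> + e^{i*pi/4}|1>)/sqrt(2); the target stays |1>.
###Output
_____no_output_____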
###Markdown
We apply the inverse quantum Fourier transformation to convert the state of the counting register. Here we provide the code for $QFT^\dagger$:
###Code
def qft_dagger(qc, n):
"""n-qubit QFTdagger the first n qubits in circ"""
# Don't forget the Swaps!
for qubit in range(n//2):
qc.swap(qubit, n-qubit-1)
for j in range(n):
for m in range(j):
qc.cp(-math.pi/float(2**(j-m)), m, j)
qc.h(j)
###Output
_____no_output_____
###Markdown
We then measure the counting register:
###Code
qpe.barrier()
# Apply inverse QFT
qft_dagger(qpe, 3)
# Measure
qpe.barrier()
for n in range(3):
qpe.measure(n,n)
qpe.draw()
###Output
_____no_output_____
###Markdown
2.2 Results
###Code
qasm_sim = Aer.get_backend('qasm_simulator')
shots = 2048
t_qpe = transpile(qpe, qasm_sim)
qobj = assemble(t_qpe, shots=shots)
results = qasm_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We see we get one result (`001`) with certainty, which translates to the decimal: `1`. We now need to divide our result (`1`) by $2^n$ to get $\theta$:$$ \theta = \frac{1}{2^3} = \frac{1}{8} $$This is exactly the result we expected! 3. Example: Getting More Precision 3.1 The Problem Instead of a $T$-gate, let’s use a gate with $\theta = \frac{1}{3}$. We set up our circuit as with the last example:
###Code
# Create and set up circuit
qpe2 = QuantumCircuit(4, 3)
# Apply H-Gates to counting qubits:
for qubit in range(3):
qpe2.h(qubit)
# Prepare our eigenstate |psi>:
qpe2.x(3)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe2.cp(angle, counting_qubit, 3);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe2, 3)
# Measure of course!
for n in range(3):
qpe2.measure(n,n)
qpe2.draw()
# Let's see the results!
qasm_sim = Aer.get_backend('qasm_simulator')
shots = 4096
t_qpe2 = transpile(qpe2, qasm_sim)
qobj = assemble(t_qpe2, shots=shots)
results = qasm_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We are expecting the result $\theta = 0.3333\dots$, and we see our most likely results are `010(bin) = 2(dec)` and `011(bin) = 3(dec)`. These two results would tell us that $\theta = 0.25$ (off by 25%) and $\theta = 0.375$ (off by 13%) respectively. The true value of $\theta$ lies between the values we can get from our counting bits, and this gives us uncertainty and imprecision. 3.2 The Solution To get more precision we simply add more counting qubits. We are going to add two more counting qubits:
###Code
# Create and set up circuit
qpe3 = QuantumCircuit(6, 5)
# Apply H-Gates to counting qubits:
for qubit in range(5):
qpe3.h(qubit)
# Prepare our eigenstate |psi>:
qpe3.x(5)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(5):
for i in range(repetitions):
qpe3.cp(angle, counting_qubit, 5);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe3, 5)
# Measure of course!
qpe3.barrier()
for n in range(5):
qpe3.measure(n,n)
qpe3.draw()
# Let's see the results!
qasm_sim = Aer.get_backend('qasm_simulator')
shots = 4096
t_qpe3 = transpile(qpe3, qasm_sim)
qobj = assemble(t_qpe3, shots=shots)
results = qasm_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
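###Markdown
How many counting qubits are enough in general? A commonly quoted rule of thumb from [1] is $t = n + \lceil\log_2(2 + \frac{1}{2\epsilon})\rceil$ counting qubits to obtain $n$ accurate bits of the phase with failure probability at most $\epsilon$. A small helper (a sketch; the formula is taken from [1] rather than derived here):
###Code
# Rule-of-thumb sizing of the counting register (formula from [1]).
import math
def counting_qubits(n_bits, eps):
    return n_bits + math.ceil(math.log2(2 + 1/(2*eps)))
print(counting_qubits(3, 0.1))   # e.g. 6 counting qubits for 3 accurate bits with >= 90% success
###Output
_____no_output_____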
###Markdown
The two most likely measurements are now `01011` (decimal 11) and `01010` (decimal 10). Measuring these results would tell us $\theta$ is:$$\theta = \frac{11}{2^5} = 0.344,\;\text{ or }\;\; \theta = \frac{10}{2^5} = 0.313$$These two results differ from $\frac{1}{3}$ by 3% and 6% respectively. A much better precision! 4. Experiment with Real Devices 4.1 Circuit from 2.1 We can run the circuit in section 2.1 on a real device, let's remind ourselves of the circuit:
###Code
qpe.draw()
IBMQ.load_account()
from qiskit.tools.monitor import job_monitor
provider = IBMQ.get_provider(hub='ibm-q')
santiago = provider.get_backend('ibmq_santiago')
# Run with 2048 shots
shots = 2048
t_qpe = transpile(qpe, santiago, optimization_level=3)
qobj = assemble(t_qpe, shots=shots)
job = santiago.run(qobj)
job_monitor(job)
# get the results from the computation
results = job.result()
answer = results.get_counts(qpe)
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We can hopefully see that the most likely result is `001` which is the result we would expect from the simulator. Unlike the simulator, there is a probability of measuring something other than `001`, this is due to noise and gate errors in the quantum computer. 5. Exercises 1. Try the experiments above with different gates ($\text{CNOT}$, Controlled-$S$, Controlled-$T^\dagger$), what results do you expect? What results do you get?2. Try the experiment with a Controlled-$Y$-gate, do you get the result you expected? (Hint: Remember to make sure $|\psi\rangle$ is an eigenstate of $Y$!) 6. Looking Forward The quantum phase estimation algorithm may seem pointless, since we have to know $\theta$ to perform the controlled-$U$ operations on our quantum computer. We will see in later chapters that it is possible to create circuits for which we don’t know $\theta$, and for which learning theta can tell us something very useful (most famously how to factor a number!) 7. References [1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA. 8. Contributors 03/20/2020 — Hwajung Kang (@HwajungKang) — Fixed inconsistencies with qubit ordering
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Quantum Phase Estimation Contents1. [Overview](overview) 1.1 [Intuition](intuition) 1.2 [Mathematical Basis](maths)2. [Example: T-gate](example_t_gate) 2.1 [Creating the Circuit](creating_the_circuit) 2.2 [Results](results) 3. [Getting More Precision](getting_more_precision) 3.1 [The Problem](the_problem) 3.2 [The Solution](the_solution) 4. [Experimenting on Real Devices](real_devices) 4.1 [With the Circuit from 2.1](circuit_2.1) 5. [Exercises](exercises) 6. [Looking Forward](looking_forward)7. [References](references)8. [Contributors](contributors) Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1. 1. Overview The general quantum circuit for phase estimation is shown below. The top register contains $t$ 'counting' qubits, and the bottom contains qubits in the state $|\psi\rangle$: 1.1 Intuition The quantum phase estimation algorithm uses phase kickback to write the phase of $U$ (in the Fourier basis) to the $t$ qubits in the counting register. We then use the inverse QFT to translate this from the Fourier basis into the computational basis, which we can measure.We remember (from the QFT chapter) that in the Fourier basis the topmost qubit completes one full rotation when counting between $0$ and $2^t$. To count to a number, $x$ between $0$ and $2^t$, we rotate this qubit by $\tfrac{x}{2^t}$ around the z-axis. For the next qubit we rotate by $\tfrac{2x}{2^t}$, then $\tfrac{4x}{2^t}$ for the third qubit.When we use a qubit to control the $U$-gate, the qubit will turn (due to kickback) proportionally to the phase $e^{2i\pi\theta}$. We can use successive $CU$-gates to repeat this rotation an appropriate number of times until we have encoded the phase theta as a number between $0$ and $2^t$ in the Fourier basis. Then we simply use $QFT^\dagger$ to convert this into the computational basis. 1.2 Mathematical Basis As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:0. **Setup**: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form the counting register on which we will store the value $2^n\theta$: $$ \psi_0 = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$ 1. **Superposition**: Apply a $n$-bit Hadamard gate operation $H^{\otimes n}$ on the counting register: $$ \psi_1 = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$2. **Controlled Unitary Operations**: We need to introduce the controlled unitary $C-U$ that applies the unitary operator $U$ on the target register only if its corresponding control bit is $|1\rangle$. 
Since $U$ is a unitary operator with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means: $$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$Applying all the $n$ controlled operations $CU^{2^j}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:\begin{aligned}\psi_{2} & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\& = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle\end{aligned}where $k$ denotes the integer representation of n-bit binary numbers. 3. **Inverse Fourier Transform**: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on [Quantum Fourier Transform and its Qiskit Implementation](qft.ipynb). Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as$$QFT\vert x \rangle = \frac{1}{2^\frac{n}{2}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) $$Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step 2 above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the ancilla register. Doing so, we find$$\vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle$$ 4. **Measurement**: The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring in the computational basis gives the phase in the ancilla register with high probability: $$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1]. 2. Example: T-gate Let’s take a gate we know well, the $T$-gate, and use Quantum Phase Estimation to estimate its phase. You will remember that the $T$-gate adds a phase of $e^\frac{i\pi}{4}$ to the state $|1\rangle$:$$ T|1\rangle = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{4}\\ \end{bmatrix}\begin{bmatrix}0\\1\\ \end{bmatrix}= e^\frac{i\pi}{4}|1\rangle $$Since QPE will give us $\theta$ where:$$ T|1\rangle = e^{2i\pi\theta}|1\rangle $$We expect to find:$$\theta = \frac{1}{8}$$In this example we will use three qubits and obtain an _exact_ result (not an estimation!) 2.1 Creating the Circuit Let's first prepare our environment:
###Code
#initialization
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'svg' # Makes the images look nice
import numpy as np
import math
# importing Qiskit
from qiskit import IBMQ, Aer
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
# import basic plot tools
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
###Markdown
Now, set up the quantum circuit. We will use four qubits -- qubits 0 to 2 as counting qubits, and qubit 3 as the eigenstate of the unitary operator ($T$). We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate:
###Code
qpe = QuantumCircuit(4, 3)
qpe.x(3)
qpe.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Next, we apply Hadamard gates to the counting qubits:
###Code
for qubit in range(3):
qpe.h(qubit)
qpe.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Next we perform the controlled unitary operations. **Remember:** Qiskit orders its qubits the opposite way round to the image above.
###Code
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe.cu1(math.pi/4, counting_qubit, 3); # This is C-U
repetitions *= 2
qpe.draw(output='mpl')
###Output
_____no_output_____
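###Markdown
The angle $\pi/4$ passed to `cu1` above is simply $2\pi\theta$ with $\theta = 1/8$. More generally, the controlled rotation angle for any single-qubit phase gate is $2\pi\theta$; here is a short sketch of the values for a few standard gates, which all follow from the same eigenvalue relation as the $T$-gate:
###Code
# Controlled-phase angles 2*pi*theta for a few standard phase gates,
# with theta defined by U|1> = e^{2*pi*i*theta}|1>.
import math
for name, theta in [("T", 1/8), ("S", 1/4), ("Z", 1/2), ("T^dagger", -1/8)]:
    print(name, theta, 2*math.pi*theta)
###Output
_____no_output_____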
###Markdown
We apply the inverse quantum Fourier transformation to convert the state of the counting register. Here we provide the code for $QFT^\dagger$:
###Code
def qft_dagger(circ, n):
"""n-qubit QFTdagger the first n qubits in circ"""
# Don't forget the Swaps!
for qubit in range(n//2):
circ.swap(qubit, n-qubit-1)
for j in range(n):
for m in range(j):
circ.cu1(-math.pi/float(2**(j-m)), m, j)
circ.h(j)
###Output
_____no_output_____
###Markdown
We then measure the counting register. The swaps inside `qft_dagger` have already restored the usual qubit ordering, so each counting qubit is measured straight into the classical bit with the same index:
###Code
qpe.barrier()
# Apply inverse QFT
qft_dagger(qpe, 3)
# Measure
qpe.barrier()
for n in range(3):
qpe.measure(n,n)
qpe.draw(output="mpl")
###Output
_____no_output_____
###Markdown
2.2 Results
###Code
backend = Aer.get_backend('qasm_simulator')
shots = 2048
results = execute(qpe, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
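###Markdown
 Before interpreting the histogram, here is a quick sanity check with plain NumPy (independent of the circuit; it only assumes the `np` import from above): reading the eigenvalue of the $T$ matrix on $|1\rangle$ gives the phase we expect the circuit to report.
###Code
# Sanity check: the eigenvalue of T on |1> is e^{i*pi/4}, so theta = 1/8
T = np.array([[1, 0], [0, np.exp(1j*np.pi/4)]])
eigval = T[1, 1]                       # eigenvalue associated with |1>
theta = np.angle(eigval) / (2*np.pi)   # T|1> = e^{2*pi*i*theta}|1>
print(theta)                           # 0.125
###Output
_____no_output_____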
###Markdown
We see we get one result (`001`) with certainty, which translates to the decimal: `1`. We now need to divide our result (`1`) by $2^n$ to get $\theta$:$$ \theta = \frac{1}{2^3} = \frac{1}{8} $$This is exactly the result we expected! 3. Example: Getting More Precision 3.1 The Problem Instead of a $T$-gate, let’s use a gate with $\theta = \frac{1}{3}$. We set up our circuit as with the last example:
###Code
# Create and set up circuit
qpe2 = QuantumCircuit(4, 3)
# Apply H-Gates to counting qubits:
for qubit in range(3):
qpe2.h(qubit)
# Prepare our eigenstate |psi>:
qpe2.x(3)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe2.cu1(angle, counting_qubit, 3);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe2, 3)
# Measure of course!
for n in range(3):
qpe2.measure(n,n)
qpe2.draw(output='mpl')
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe2, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We are expecting the result $\theta = 0.3333\dots$, and we see our most likely results are `010(bin) = 2(dec)` and `011(bin) = 3(dec)`. These two results would tell us that $\theta = 0.25$ (off by 25%) and $\theta = 0.375$ (off by 13%) respectively. The true value of $\theta$ lies between the values we can get from our counting bits, and this gives us uncertainty and imprecision. 3.2 The Solution To get more precision we simply add more counting qubits. We are going to add two more counting qubits:
###Code
# Create and set up circuit
qpe3 = QuantumCircuit(6, 5)
# Apply H-Gates to counting qubits:
for qubit in range(5):
qpe3.h(qubit)
# Prepare our eigenstate |psi>:
qpe3.x(5)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(5):
for i in range(repetitions):
qpe3.cu1(angle, counting_qubit, 5);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe3, 5)
# Measure of course!
for n in range(5):
qpe3.measure(n,n)
qpe3.draw(output='mpl')
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe3, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
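###Markdown
 We can also turn the measured bitstrings into phase estimates programmatically. This is a small helper sketch that assumes the `answer` counts dictionary from the previous cell is still available:
###Code
# Convert the two most frequent bitstrings into estimates of theta (assumes `answer` from above)
n_counting = 5
top_two = sorted(answer.items(), key=lambda kv: kv[1], reverse=True)[:2]
for bitstring, count in top_two:
    print(bitstring, count, int(bitstring, 2) / 2**n_counting)
###Output
_____no_output_____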
###Markdown
The two most likely measurements are now `01011` (decimal 11) and `01010` (decimal 10). Measuring these results would tell us $\theta$ is:$$\theta = \frac{11}{2^5} = 0.344,\;\text{ or }\;\; \theta = \frac{10}{2^5} = 0.313$$These two results differ from $\frac{1}{3}$ by 3% and 6% respectively. A much better precision! 4. Experiment with Real Devices 4.1 Circuit from 2.1 We can run the circuit in section 2.1 on a real device, let's remind ourselves of the circuit:
###Code
qpe.draw(output='mpl')
# Load our saved IBMQ account and get the least busy backend device with at least 4 qubits
IBMQ.load_account()
from qiskit.providers.ibmq import least_busy
from qiskit.tools.monitor import job_monitor
provider = IBMQ.get_provider(hub='ibm-q')
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 4 and not x.configuration().simulator and x.status().operational==True))
print("least busy backend: ", backend)
# Run with 2048 shots
shots = 2048
job = execute(qpe, backend=backend, shots=2048, optimization_level=3)
job_monitor(job)
# get the results from the computation
results = job.result()
answer = results.get_counts(qpe)
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We can hopefully see that the most likely result is `001` which is the result we would expect from the simulator. Unlike the simulator, there is a probability of measuring something other than `001`, this is due to noise and gate errors in the quantum computer. 5. Exercises 1. Try the experiments above with different gates ($\text{CNOT}$, $S$, $T^\dagger$), what results do you expect? What results do you get?2. Try the experiment with a $Y$-gate, do you get the correct result? (Hint: Remember to make sure $|\psi\rangle$ is an eigenstate of $Y$!) 6. Looking Forward The quantum phase estimation algorithm may seem pointless, since we have to know $\theta$ to perform the controlled-$U$ operations on our quantum computer. We will see in later chapters that it is possible to create circuits for which we don’t know $\theta$, and for which learning theta can tell us something very useful (most famously how to factor a number!) 7. References [1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA. 8. Contributors 03/20/2020 — Hwajung Kang (@HwajungKang) — Fixed inconsistencies with qubit ordering
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Quantum Phase Estimation Contents1. [Overview](overview) 1.1 [Intuition](intuition) 1.2 [Mathematical Basis](maths)2. [Example: T-gate](example_t_gate) 2.1 [Creating the Circuit](creating_the_circuit) 2.2 [Results](results) 3. [Getting More Precision](getting_more_precision) 3.1 [The Problem](the_problem) 3.2 [The Solution](the_solution) 4. [Experimenting on Real Devices](real_devices) 4.1 [With the Circuit from 2.1](circuit_2.1) 5. [Exercises](exercises) 6. [Looking Forward](looking_forward)7. [References](references)8. [Contributors](contributors) Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1. 1. Overview The general quantum circuit for phase estimation is shown below. The top register contains $t$ 'counting' qubits, and the bottom contains qubits in the state $|\psi\rangle$: 1.1 Intuition The quantum phase estimation algorithm uses phase kickback to write the phase of $U$ (in the Fourier basis) to the $t$ qubits in the counting register. We then use the inverse QFT to translate this from the Fourier basis into the computational basis, which we can measure.We remember (from the QFT chapter) that in the Fourier basis the topmost qubit completes one full rotation when counting between $0$ and $2^t$. To count to a number, $x$ between $0$ and $2^t$, we rotate this qubit by $\tfrac{x}{2^t}$ around the z-axis. For the next qubit we rotate by $\tfrac{2x}{2^t}$, then $\tfrac{4x}{2^t}$ for the third qubit.When we use a qubit to control the $U$-gate, the qubit will turn (due to kickback) proportionally to the phase $e^{2i\pi\theta}$. We can use successive $CU$-gates to repeat this rotation an appropriate number of times until we have encoded the phase theta as a number between $0$ and $2^t$ in the Fourier basis. Then we simply use $QFT^\dagger$ to convert this into the computational basis. 1.2 Mathematical Foundation As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:i. **Setup**: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form the counting register on which we will store the value $2^n\theta$: $$ \psi_0 = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$ ii. **Superposition**: Apply a $n$-bit Hadamard gate operation $H^{\otimes n}$ on the counting register: $$ \psi_1 = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$iii. **Controlled Unitary Operations**: We need to introduce the controlled unitary $C-U$ that applies the unitary operator $U$ on the target register only if its corresponding control bit is $|1\rangle$. 
Since $U$ is a unitary operatory with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means: $$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$Applying all the $n$ controlled operations $C − U^{2^j}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:\begin{aligned}\psi_{2} & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\& = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle\end{aligned}where $k$ denotes the integer representation of n-bit binary numbers. iv. **Inverse Fourier Transform**: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on [Quantum Fourier Transform and its Qiskit Implementation](https://libket.ewi.tudelft.nl/textbook/ch-algorithms/quantum-fourier-transform.html). Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as$$QFT\vert x \rangle = \frac{1}{2^\frac{n}{2}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) $$Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step 2 above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the auxiliary register. Doing so, we find$$\vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle$$ v. **Measurement**: The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring in the computational basis gives the phase in the auxiliary register with high probability: $$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1]. 2. Example: T-gate Let’s take a gate we know well, the $T$-gate, and use Quantum Phase Estimation to estimate its phase. You will remember that the $T$-gate adds a phase of $e^\frac{i\pi}{4}$ to the state $|1\rangle$:$$ T|1\rangle = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{4}\\ \end{bmatrix}\begin{bmatrix}0\\1\\ \end{bmatrix}= e^\frac{i\pi}{4}|1\rangle $$Since QPE will give us $\theta$ where:$$ T|1\rangle = e^{2i\pi\theta}|1\rangle $$We expect to find:$$\theta = \frac{1}{8}$$In this example we will use three qubits and obtain an _exact_ result (not an estimation!) 2.1 Creating the Circuit Let's first prepare our environment:
###Code
#initialization
import matplotlib.pyplot as plt
import numpy as np
import math
# importing Qiskit
from qiskit import IBMQ, Aer, transpile, assemble
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
# import basic plot tools
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
###Markdown
Now, set up the quantum circuit. We will use four qubits -- qubits 0 to 2 as counting qubits, and qubit 3 as the eigenstate of the unitary operator ($T$). We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate:
###Code
qpe = QuantumCircuit(4, 3)
qpe.x(3)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next, we apply Hadamard gates to the counting qubits:
###Code
for qubit in range(3):
qpe.h(qubit)
qpe.draw()
###Output
_____no_output_____
###Markdown
Next we perform the controlled unitary operations. **Remember:** Qiskit orders its qubits the opposite way round to the image above.
###Code
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe.cp(math.pi/4, counting_qubit, 3); # This is C-U
repetitions *= 2
qpe.draw()
###Output
_____no_output_____
###Markdown
We apply the inverse quantum Fourier transformation to convert the state of the counting register. Here we provide the code for $QFT^\dagger$:
###Code
def qft_dagger(qc, n):
"""n-qubit QFTdagger the first n qubits in circ"""
# Don't forget the Swaps!
for qubit in range(n//2):
qc.swap(qubit, n-qubit-1)
for j in range(n):
for m in range(j):
qc.cp(-math.pi/float(2**(j-m)), m, j)
qc.h(j)
###Output
_____no_output_____
###Markdown
We then measure the counting register:
###Code
qpe.barrier()
# Apply inverse QFT
qft_dagger(qpe, 3)
# Measure
qpe.barrier()
for n in range(3):
qpe.measure(n,n)
qpe.draw()
###Output
_____no_output_____
###Markdown
2.2 Results
###Code
qasm_sim = Aer.get_backend('qasm_simulator')
shots = 2048
t_qpe = transpile(qpe, qasm_sim)
qobj = assemble(t_qpe, shots=shots)
results = qasm_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We see we get one result (`001`) with certainty, which translates to the decimal: `1`. We now need to divide our result (`1`) by $2^n$ to get $\theta$:$$ \theta = \frac{1}{2^3} = \frac{1}{8} $$This is exactly the result we expected! 3. Example: Getting More Precision 3.1 The Problem Instead of a $T$-gate, let’s use a gate with $\theta = \frac{1}{3}$. We set up our circuit as with the last example:
###Code
# Create and set up circuit
qpe2 = QuantumCircuit(4, 3)
# Apply H-Gates to counting qubits:
for qubit in range(3):
qpe2.h(qubit)
# Prepare our eigenstate |psi>:
qpe2.x(3)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(3):
for i in range(repetitions):
qpe2.cp(angle, counting_qubit, 3);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe2, 3)
# Measure of course!
for n in range(3):
qpe2.measure(n,n)
qpe2.draw()
# Let's see the results!
qasm_sim = Aer.get_backend('qasm_simulator')
shots = 4096
t_qpe2 = transpile(qpe2, qasm_sim)
qobj = assemble(t_qpe2, shots=shots)
results = qasm_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
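###Markdown
 The shape of this histogram can be predicted without a simulator. From the expression for $\vert\psi_3\rangle$ derived above, the probability of reading the integer $x$ is $\left|\frac{1}{2^n}\sum_{k=0}^{2^n-1} e^{2\pi i k(\theta - x/2^n)}\right|^2$. A short NumPy check (assuming only the `np` import from above):
###Code
# Predicted measurement distribution for theta = 1/3 with n = 3 counting qubits
n = 3
theta = 1/3
k = np.arange(2**n)
probs = {}
for x in range(2**n):
    amp = np.mean(np.exp(2j*np.pi*k*(theta - x/2**n)))  # (1/2^n) * sum_k e^{2*pi*i*k*(theta - x/2^n)}
    probs[format(x, '03b')] = abs(amp)**2
print(probs)                 # the largest values are at '011' (3) and '010' (2)
print(sum(probs.values()))   # ~1.0
###Output
_____no_output_____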
###Markdown
We are expecting the result $\theta = 0.3333\dots$, and we see our most likely results are `010(bin) = 2(dec)` and `011(bin) = 3(dec)`. These two results would tell us that $\theta = 0.25$ (off by 25%) and $\theta = 0.375$ (off by 13%) respectively. The true value of $\theta$ lies between the values we can get from our counting bits, and this gives us uncertainty and imprecision. 3.2 The Solution To get more precision we simply add more counting qubits. We are going to add two more counting qubits:
###Code
# Create and set up circuit
qpe3 = QuantumCircuit(6, 5)
# Apply H-Gates to counting qubits:
for qubit in range(5):
qpe3.h(qubit)
# Prepare our eigenstate |psi>:
qpe3.x(5)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(5):
for i in range(repetitions):
qpe3.cp(angle, counting_qubit, 5);
repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe3, 5)
# Measure of course!
qpe3.barrier()
for n in range(5):
qpe3.measure(n,n)
qpe3.draw()
# Let's see the results!
qasm_sim = Aer.get_backend('qasm_simulator')
shots = 4096
t_qpe3 = transpile(qpe3, qasm_sim)
qobj = assemble(t_qpe3, shots=shots)
results = qasm_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
The two most likely measurements are now `01011` (decimal 11) and `01010` (decimal 10). Measuring these results would tell us $\theta$ is:$$\theta = \frac{11}{2^5} = 0.344,\;\text{ or }\;\; \theta = \frac{10}{2^5} = 0.313$$These two results differ from $\frac{1}{3}$ by 3% and 6% respectively. A much better precision! 4. Experiment with Real Devices 4.1 Circuit from 2.1 We can run the circuit in section 2.1 on a real device, let's remind ourselves of the circuit:
###Code
qpe.draw()
IBMQ.load_account()
from qiskit.tools.monitor import job_monitor
provider = IBMQ.get_provider(hub='ibm-q')
santiago = provider.get_backend('ibmq_santiago')
# Run with 2048 shots
shots = 2048
t_qpe = transpile(qpe, santiago, optimization_level=3)
qobj = assemble(t_qpe, shots=shots)
job = santiago.run(qobj)
job_monitor(job)
# get the results from the computation
results = job.result()
answer = results.get_counts(qpe)
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We can hopefully see that the most likely result is `001` which is the result we would expect from the simulator. Unlike the simulator, there is a probability of measuring something other than `001`, this is due to noise and gate errors in the quantum computer. 5. Exercises 1. Try the experiments above with different gates ($\text{CNOT}$, Controlled-$S$, Controlled-$T^\dagger$), what results do you expect? What results do you get?2. Try the experiment with a Controlled-$Y$-gate, do you get the result you expected? (Hint: Remember to make sure $|\psi\rangle$ is an eigenstate of $Y$!) 6. Looking Forward The quantum phase estimation algorithm may seem pointless, since we have to know $\theta$ to perform the controlled-$U$ operations on our quantum computer. We will see in later chapters that it is possible to create circuits for which we don’t know $\theta$, and for which learning theta can tell us something very useful (most famously how to factor a number!) 7. References [1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA. 8. Contributors 03/20/2020 — Hwajung Kang (@HwajungKang) — Fixed inconsistencies with qubit ordering
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Quantum Phase Estimation Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1. Quantum Circuit for Phase Estimation The general quantum circuit for phase estimation is: As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:0. **Setup**: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form an ancilla register: $$ \psi_0 = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$ 1. **Superposition**: Apply a $n$-bit Hadamard gate operation $H^{\otimes n}$ on the ancilla register: $$ \psi_1 = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$2. **Controlled Unitary Operations**: We need to introduce the controlled unitary $C-U$ that applies the unitary operator $U$ on the target register only if its corresponding control bit is $|1\rangle$. Since $U$ is a unitary operatory with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means: $$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$Applying all the $n$ controlled operations $C − U^{2^j}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:\begin{aligned}\psi_{2} & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\& = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle\end{aligned}where $k$ denotes the integer representation of n-bit binary numbers. 3. **Inverse Fourier Transform**: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on [Quantum Fourier Transform and its Qiskit Implementation](qft.ipynb). Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as$$QFT\vert x \rangle = \frac{1}{2^\frac{n}{2}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) $$Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step 2 above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the ancilla register. 
Doing so, we find$$\vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle$$ 4. **Measurement**: The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring in the computational basis gives the phase in the ancilla register with high probability: $$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1]. Example Consider a 3-qubit quantum phase estimation of a one-qubit state $\vert\psi\rangle$.For this example, let us take the unitary matrix to be $Z$. Then, the input state $\vert1\rangle$ is an eigenvector with eigenvalue $-1 = \exp{\left(2\pi i \times 0.5 \right)}$. Hence, $\theta = 0.5$ and $2^n\theta = 2$ if we use $n = 2$ ancilla qubits. Note that in this case, the controlled unitary gates are$$U^{2^{n-1}} = Z^2 = I$$and $$U^{2^{0}} = Z$$ Qiskit Implementation 1Now we will implement the above example in Qiskit.Let's first prepare our environment.
###Code
#initialization
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import math
# importing Qiskit
from qiskit import IBMQ, BasicAer
from qiskit.providers.ibmq import least_busy
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
# import basic plot tools
from qiskit.tools.visualization import plot_histogram
###Output
_____no_output_____
###Markdown
Now, set up the quantum circuit. We will use three qubits -- qubits 0 and 1 as ancilla, and qubit 2 as the eigenstate of the unitary operator. We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate.
###Code
q = QuantumRegister(3, 'q')
c = ClassicalRegister(2, 'c')
qpe = QuantumCircuit(q, c)
qpe.x(q[2])
###Output
_____no_output_____
###Markdown
Next, we apply Hadamard gates to the ancilla qubits.
###Code
qpe.h(q[0])
qpe.h(q[1])
###Output
_____no_output_____
###Markdown
Next we perform the controlled unitary operations:
###Code
# controlled unitary from q[0] is the identity matrix
# controlled unitary from q[1] is a controlled-Z gate
qpe.cz(q[1], q[2]);
###Output
_____no_output_____
###Markdown
We apply quantum inverse Fourier transformation to write the phase to the ancilla register. We will use exactly the code that we wrote to do a $QFT$ and adapt it to be that of $QFT^\dagger$.
###Code
def qft(circ, q, n):
"""n-qubit QFT on q in circ."""
for j in range(n):
circ.h(q[j])
for k in range(j+1,n):
circ.cu1(math.pi/float(2**(k-j)), q[k], q[j])
def qft_dagger(circ, q, n):
"""n-qubit QFTdagger on q in circ."""
for j in range(n):
k = (n-1) - j
for m in range(k):
circ.cu1(-math.pi/float(2**(k-m)), q[k], q[m])
circ.h(q[k])
###Output
_____no_output_____
###Markdown
We then measure the ancilla register:
###Code
qft_dagger(qpe, q, 2)
qpe.measure(q[0],c[0])
qpe.measure(q[1],c[1])
qpe.draw(output="mpl")
###Output
_____no_output_____
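###Markdown
 Before moving on, a quick plain-NumPy check of the worked example (it only assumes the `np` import from above): $Z|1\rangle = -|1\rangle$, so $\theta = 0.5$, and with $n = 2$ ancilla qubits we expect to read $2^2 \times 0.5 = 2$, i.e. `10` in binary.
###Code
# Plain-NumPy check of the example: Z|1> = -|1>, so theta = 0.5
Z = np.array([[1, 0], [0, -1]])
ket1 = np.array([0, 1])
eigval = (Z @ ket1)[1]                          # = -1, since |1> is a basis vector
theta = (np.angle(eigval) / (2*np.pi)) % 1      # = 0.5
print(theta, format(int(2**2 * theta), '02b'))  # 0.5 '10'
###Output
_____no_output_____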
###Markdown
 Qiskit Implementation 2 This time, we will see what happens to our estimate of the phase by using more ancilla qubits. Let's go from two ancillas to four, and from a one-qubit state to a two-qubit state. The unitary operator will be $U = C-Z$. Then, for the state $\vert11\rangle$, we get a phase of $\theta = 0.5$. We will implement the C-U gate (C-C-Z) from a C-C-X gate (Toffoli) by using the relation $$HXH = Z$$ First, we set up the quantum circuit as before. We initialize $\vert\psi\rangle = \vert11\rangle$ by applying an $X$ gate to both qubits.
###Code
nancilla = 4
q2 = QuantumRegister(nancilla+2, 'q')
c2 = ClassicalRegister(nancilla, 'c')
qpe2 = QuantumCircuit(q2, c2)
qpe2.x(q2[nancilla])
qpe2.x(q2[nancilla+1])
###Output
_____no_output_____
###Markdown
Again, we apply Hadamard gates to the ancilla qubits.
###Code
for i in range(nancilla):
qpe2.h(q2[i])
###Output
_____no_output_____
###Markdown
Next we perform the controlled unitary operations:
###Code
# controlled unitaries from q[0], q[1] and q[2] are the identity matrix (U^2 = U^4 = U^8 = I)
# controlled unitary from q[3] is a controlled-Z gate (U^1 = C-Z)
qpe2.h(q2[nancilla+1])
qpe2.ccx(q2[nancilla-1], q2[nancilla], q2[nancilla+1])
qpe2.h(q2[nancilla+1])
###Output
_____no_output_____
###Markdown
As before, we apply an inverse quantum Fourier transform to write the phase to the ancilla register. We then measure the ancilla register.
###Code
qft_dagger(qpe2, q2, nancilla)
for i in range(nancilla):
qpe2.measure(q2[i],c2[i])
qpe2.draw(output="mpl")
###Output
_____no_output_____
###Markdown
Experiment with SimulatorsWe can run the above circuit on the simulator.
###Code
backend = BasicAer.get_backend('qasm_simulator')
shots = 2048
results = execute(qpe, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
###Output
_____no_output_____
###Markdown
We indeed see a peak at the binary representation of $2^n\theta = 2^2\times0.5 = 2$ (10).
###Code
results2 = execute(qpe2, backend=backend, shots=shots).result()
answer2 = results2.get_counts()
plot_histogram(answer2)
###Output
_____no_output_____
###Markdown
We indeed see a peak at the binary representation of $2^n\theta = 2^4\times0.5 = 8$ (1000). Experiment with Real DevicesWe can run the circuit for our three-qubit implementation of quantum phase estimation on the real device as shown below.
###Code
# Load our saved IBMQ account and pick a backend (here ibmq_vigo)
IBMQ.load_account()
provider = IBMQ.get_provider(hub='ibm-q')
provider.backends()
backend = provider.get_backend('ibmq_vigo')
from qiskit.tools.monitor import job_monitor
shots = 2048
job_exp = execute(qpe, backend=backend, shots=shots)
job_monitor(job_exp, interval = 2)
# get the results from the computation
results = job_exp.result()
answer = results.get_counts(qpe)
plot_histogram(answer)
###Output
_____no_output_____ |
notebooks/mle_map/toss_coins_MLE_MAP_example.ipynb | ###Markdown
 We consider an experiment of tossing a coin n times and counting the number of heads. The coin can be unfair, which means that the probability of a head is not 0.5. Here n is the number of tosses, k is the number of heads, and p is the probability of a head in a single toss. We will use the Maximum Likelihood Estimate (MLE) and the Maximum A Posteriori (MAP) estimate to estimate the frequency of heads and decide whether the coin is fair. The probability mass function of the process is given by the Binomial distribution: pmf(k|n,p) = n!/(k! * (n-k)!) * p^k * (1-p)^(n-k), where k is a random variable and p, n are parameters.
###Code
import math
import numpy as np
import matplotlib.pyplot as plt

#Not efficient but who cares ;-)
def combCoef(n,k):
a = math.factorial(n)
b = math.factorial(k)
c = math.factorial(n-k)
return a/(b*c)
def binomial_pmf(n,k,p):
return combCoef(n,k) * p**k * (1-p)**(n-k)
def total_binomial_pmf(n,p):
ks = [k for k in range(n+1)]
return (ks, [binomial_pmf(n,k,p) for k in ks])
n = 10
p = 0.5
ks, values = total_binomial_pmf(n,p)
plt.title('pmf(k|n,p) for' + format(' n=%d,p=%.2f'%(n,p)))
plt.xlabel('k number of heads')
plt.scatter(ks, values)
print ('pmf normalization = ' +format('%f'%float(sum(values))))
def total_binomial_likelihood(n,k):
ps= list(np.arange(0,1.,0.05))
return (ps, [binomial_pmf(n,k,p) for p in ps])
n = 10
k = 3
ps,values2 = total_binomial_likelihood(n,k)
plt.title('likelihood function L(p) = pmf(k|n,p) for' + format(' n=%d,k=%d'%(n,k)))
plt.xlabel('p parameter')
plt.plot(ps,values2)
print ('likelihood normalization can be different than one 1 = ' +format('%f'%float(sum(values2)*0.05)))
###Output
likelihood normalization can be different than one 1 = 0.090915
###Markdown
 As expected, if we observe k=3 heads in a series of n=10 tosses, then the likelihood function L(p) = pmf(k=3|n=10,p) is maximised at p_MLE = argmax L(p) = 0.3. Alternatively, we can say that among several pmfs for different p values (each pmf is a different model), the model that assigns the highest probability to k=3 is the one with p = 0.3.
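###Markdown
 A quick numerical check of the same point (a small sketch; it assumes `ps` and `values2` from the likelihood cell above and the imports from the first cell):
###Code
# Numerical MLE from the likelihood grid computed above
p_mle_grid = ps[np.argmax(values2)]
print('p_MLE (grid)     =', p_mle_grid)   # ~0.3
print('p_MLE (analytic) =', 3/10)         # k/n
###Output
_____no_output_____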
###Code
ps = [0.1,0.3,0.8]
n = 10
for p in ps:
ks,vals = total_binomial_pmf(n,p)
plt.scatter(ks,vals, label=format('p=%.2f'%p))
plt.ylim(0,1)
plt.xlabel('k number of heads - random variable')
plt.title('pmf(k|n,p) for different p and'+ format(' n=%d'%n))
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
 If we calculate the estimate analytically we get p_MLE = k/n. We can also check the convergence behaviour as the number of samples grows.
###Code
def tossCoins(num,p):
return np.random.choice(['H','T'],p=[p,1-p], size=num)
def getHeads(li):
return float(len([x for x in li if x=='H']))
def getSeries(p, samples):
return [getHeads(tossCoins(s,p))/float(s) for s in samples]
samples = [5,10,20,40,80,100,200,400, 1000]
nSamples =10
p=0.4 # true p value.
results=[getSeries(p, samples) for _ in range(nSamples)]
for r in results:
plt.plot(samples, r)
plt.xscale('log')
plt.ylim(0.0,1)
plt.axhline(y=0.5, color='r', linestyle='--')
plt.axhline(y=0.4, color='blue', linestyle='--')
plt.ylabel('frequency of H')
plt.xlabel('n number of coins tossed')
plt.show()
###Output
_____no_output_____
###Markdown
 We can see that for small n the estimate can be quite different from the true 0.4. This is because e.g. the sequence T,T,T,T is still probable, and it would give the estimate p_MLE = 0. Now we move to MAP - we are going to incorporate our assumptions about the distribution of the p parameter prior to any experiment. The prior distribution will be modeled by the Beta distribution: pdf(p|alpha, beta) = p^(alpha-1) * (1-p)^(beta-1) / B(alpha, beta), where B(alpha, beta) = (alpha-1)! * (beta-1)! / (alpha+beta-1)!. Here p is the random variable, while alpha and beta are hyperparameters.
###Code
def beta_func(p,q):
a = math.factorial(p - 1)
b = math.factorial(q - 1)
c = math.factorial(p + q -1)
return a*b/c
def beta_pdf(p,alpha,beta):
norm = beta_func(alpha,beta)
return p**(alpha-1) * (1-p)**(beta-1)/norm
def total_beta_pdf(alpha,beta):
ps= list(np.arange(0,1.01,0.05))
return (ps, [beta_pdf(p,alpha,beta) for p in ps])
alpha = 4
beta = 4
ps,values2 = total_beta_pdf(alpha,beta)
plt.title('pdf of beta(p|alpha,beta) for' + format(' alpha=%d,beta=%d'%(alpha,beta)))
plt.xlabel('p - here treated as a random variable')
plt.plot(ps,values2)
print ('pdf normalization of p = ' +format('%.2f'%float(sum(values2)* 0.05)))
###Output
pdf normalization of p = 1.00
###Markdown
 The nice feature of the Beta distribution is that it is the conjugate prior of the binomial distribution. Bayes' theorem gives: posterior distribution ~ observation * prior distribution, which is: pdf(p|alpha+k, beta+(n-k)) ~ pmf(k|n,p) * pdf(p|alpha, beta). This gives us the possibility to update our prior distribution with every measurement. The posterior distribution becomes our prior for the next observation. MAP estimation consists of finding the p_MAP that maximizes the posterior pdf, i.e. the p value for which the posterior distribution gives the maximum probability. The maximum is given by the mode of the function: p_MAP = argmax pdf(p) = mode(beta_pdf(alpha + k, beta + n-k)) = (alpha+k-1)/(alpha + k + beta + n-k - 2), so p_MAP = (alpha+k-1)/(alpha + beta + n - 2)
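###Markdown
 We can verify the closed-form posterior mode numerically against a grid search over the posterior density (a small sketch that reuses `beta_pdf` from the cell above; the prior parameters and data below are illustrative):
###Code
# Check the closed-form MAP estimate against a grid search over the posterior density.
# Prior Beta(prior_a, prior_b); data: k_obs heads observed in n_obs tosses.
prior_a, prior_b, n_obs, k_obs = 4, 4, 10, 3
grid = np.arange(0.001, 1.0, 0.001)
posterior = [beta_pdf(p, prior_a + k_obs, prior_b + (n_obs - k_obs)) for p in grid]
print('p_MAP (grid)    =', grid[np.argmax(posterior)])                             # ~0.375
print('p_MAP (formula) =', (prior_a + k_obs - 1)/(prior_a + prior_b + n_obs - 2))  # 0.375
###Output
_____no_output_____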
###Code
def mode_of_beta(p,q):
return (p - 1.)/(p+q -2.)
# compare MLE with MAP, starting with the prior Beta(8,4) and assuming true p = 0.4
true_p = 0.4
n = 2
alpha = 8.
beta = 4.
alpha_tot=[alpha]
beta_tot=[beta]
n_tot = [0]
heads_tot = [0]
p_MLE=[]
p_MAP=[]
for _ in range(50):
#print('****')
coins = tossCoins(n,true_p)
heads = getHeads(coins)
n_tot.append(n_tot[-1] + n)
heads_tot.append(heads_tot[-1]+heads)
curr_alpha = alpha_tot[-1]
curr_beta = beta_tot[-1]
alpha_tot.append(alpha_tot[-1]+heads)
beta_tot.append(beta_tot[-1]+(n-heads))
p_MLE.append(float(heads_tot[-1])/float(n_tot[-1]))
p_MAP.append(float(alpha_tot[-1] - 1)/float(alpha_tot[-1]+ beta_tot[-1] -2))
#print(coins)
#print(float(heads_tot[-1]))
#print(float(n_tot[-1]))
#print('p_MLE='+format('%f'%(float(heads_tot[-1])/float(n_tot[-1]))))
#print('p_MAP='+format('%f'%(float(alpha_tot[-1])/float(alpha_tot[-1]+ beta_tot[-1]))))
#print('p_MAP_mode='+format('%f'%(float(alpha_tot[-1] -1)/float(alpha_tot[-1]+ beta_tot[-1] -2))))
plt.scatter(n_tot[1:],p_MLE, label='p_MLE')
plt.scatter(n_tot[1:],p_MAP, label='p_MAP')
plt.ylim(0,1)
plt.axhline(y= true_p, color='blue', linestyle='--')
plt.ylabel('frequency of H')
plt.xlabel('n number of coins tossed')
plt.legend()
plt.show()
# Show that for two very different priors, given enough data the posteriors tend to agree
alpha_1 = 8.
beta_1 = 3.
alpha_2 = 3.
beta_2 = 8.
ps,values = total_beta_pdf(alpha_1,beta_1)
ps2,values2 = total_beta_pdf(alpha_2,beta_2)
plt.title('pdf of beta(p|alpha,beta)')
plt.xlabel('p - here treated as a random variable')
plt.plot(ps,values, label = 'alpha=8,beta=3')
plt.plot(ps,values2, label = 'alpha=3,beta=8')
plt.legend()
plt.show()
n = 100
true_p = 0.5
alpha_1 = 8.
beta_1 = 3.
alpha_2 = 3.
beta_2 = 8.
coins = tossCoins(n,true_p)
k = getHeads(coins)
ps,values = total_beta_pdf(alpha_1 + k,beta_1 + (n -k))
ps2,values2 = total_beta_pdf(alpha_2 + k,beta_2 + (n-k))
plt.title('pdf of beta(p|alpha,beta)')
plt.xlabel('p - here treated as a random variable')
plt.plot(ps,values, label = 'alpha1,beta1')
plt.plot(ps,values2, label = 'alpha2,beta2')
plt.axvline(x= true_p, color='blue', linestyle='--')
plt.legend()
plt.show()
#show several different alpha and beta parameters
plt.title('pdf of beta(p|alpha,beta)')
plt.xlabel('p - here treated as a random variable')
ps,values = total_beta_pdf(2,8)
plt.plot(ps,values, label = 'alpha=2,beta=8')
ps,values = total_beta_pdf(3,8)
plt.plot(ps,values, label = 'alpha=3,beta=8')
ps,values = total_beta_pdf(3,5)
plt.plot(ps,values, label = 'alpha=3,beta=5')
ps,values = total_beta_pdf(3,4)
plt.plot(ps,values, label = 'alpha=3,beta=4')
ps,values = total_beta_pdf(4,6)
plt.plot(ps,values, label = 'alpha=4,beta=6')
ps,values = total_beta_pdf(4,4)
plt.plot(ps,values, label = 'alpha=4,beta=4')
ps,values = total_beta_pdf(8,2)
plt.plot(ps,values, label = 'alpha=8,beta=2')
ps,values = total_beta_pdf(8,3)
plt.plot(ps,values, label = 'alpha=8,beta=3')
ps,values = total_beta_pdf(5,3)
plt.legend()
plt.show()
# Show step by step for 2 cases
n = 3
true_p = 0.5
alpha = 7.
beta = 2.
ps,prior_values = total_beta_pdf(alpha,beta)
plt.title('prior distribution')
plt.xlabel('p')
plt.plot(ps,prior_values, label = 'alpha,beta')
plt.show()
for _ in range(5):
coins = tossCoins(n,true_p)
k = getHeads(coins)
print (coins)
ps,observation_values = total_binomial_likelihood(n,k)
plt.title('likelihood function L(p) = pmf(k|n,p) for' + format(' n=%d,k=%d'%(n,k)))
plt.xlabel('p')
plt.plot(ps,observation_values)
plt.show()
alpha = alpha + k
beta = beta + (n -k)
ps,posterior_values = total_beta_pdf(alpha,beta)
plt.title('posterior distribution')
plt.xlabel('p')
plt.plot(ps,posterior_values, label = 'alpha1,beta1')
plt.axvline(x= true_p, color='blue', linestyle='--')
plt.legend()
plt.show()
alpha = alpha + k
beta = beta + (n -k)
###Output
_____no_output_____ |
X_to_Y_Mapping.ipynb | ###Markdown
 A minimal Keras model that learns the linear mapping y = 2x - 1 from six example points and then predicts the output for x = 7.
###Code
import keras
import pandas as pd
import numpy as np
from keras import models,layers
model=models.Sequential()
model.add(layers.Dense(1,input_shape=([1])))
model.compile(optimizer="sgd",loss="mse")
xs=np.array([1.0,2.0,3.0,4.0,5.0,6.0],dtype=float)
ys=np.array([1.0,3.0,5.0,7.0,9.0,11.0],dtype=float)
model.fit(xs,ys,epochs=500)
print(model.predict([7]))
###Output
[[12.857292]]
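###Markdown
 The learned parameters should approximate the underlying rule y = 2x - 1. We can inspect them with `get_weights()` (assumes the trained `model` from the cell above; the exact numbers vary from run to run):
###Code
# Inspect the learned weight and bias; they should be close to 2 and -1
weights, bias = model.get_weights()
print(weights, bias)
###Output
_____no_output_____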
|
Python Programs for YouTube/1_Introduction/5_operators.ipynb | ###Markdown
Operators Operators are special symbols in Python that carry out arithmetic or logical computation. The value that the operator operates on is called the operand. Operator Types 1. Arithmetic operators 2. Comparison (Relational) operators3. Logical (Boolean) operators4. Bitwise operators5. Assignment operators6. Special operators Arithmetic Operators Arithmetic operators are used to perform mathematical operations like addition, subtraction, multiplication etc. + , -, *, /, %, //, ** are arithmetic operators Example:
###Code
x, y = 10, 20
# #addition
# print(x + y)
#subtraction(-)
# print(x - y)
#multiplication(*)
# print(x * y)
#division(/)
# print(x / y)
#modulo division (%)
# print(x % y)
#Floor Division (//)
# print(x // y)
#Exponent (**)
print(x ** y)
###Output
100000000000000000000
###Markdown
 Comparison Operators Comparison operators are used to compare values. They return either True or False according to the condition. >, <, ==, !=, >=, <= are comparison operators
###Code
a, b = 10, 20
#check a is less than b
# print(a < b)
#check a is greater than b
# print(a > b)
#check a is equal to b
# print(a == b)
#check a is not equal to b (!=)
# print(a != b)
#check a greater than or equal to b
# print(a >= b)
#check a less than or equal to b
print(a <= b)
###Output
True
###Markdown
Logical Operators Logical operators are **and, or, not** operators.
###Code
a, b = True, True
#print a and b
# print(a and b)
#print a or b
# print(a or b)
#print not b
print(not b)
###Output
False
###Markdown
 Bitwise operators Bitwise operators act on operands as if they were strings of binary digits. They operate bit by bit. &, |, ~, ^, >>, << are bitwise operators
###Code
a, b = -10, 4
#Bitwise AND
# print(a & b)
# #Bitwise OR
# print(a | b)
# #Bitwise NOT
# print(~a)
# #Bitwise XOR
# print(a ^ b)
# #Bitwise rightshift
# print(a >> 1)
# #Bitwise Leftshift
print(a << 2)
###Output
-40
###Markdown
Assignment operators Assignment operators are used in Python to assign values to variables.a = 5 is a simple assignment operator that assigns the value 5 on the right to the variable a on the left. =, +=, -=, *=, /=, %=, //=, **=, &=, |=, ^=, >>=, <<= are Assignment operators
###Code
a = 10
#add AND
# a += 10
#subtract AND (-=)
# a -= 10
#Multiply AND (*=)
# a *= 10
#Divide AND (/=)
# a /= 10
#Modulus AND (%=)
# a %= 10
#Floor Division (//=)
# a //= 10
#Exponent AND (**=)
# a **= 10
print(a)
###Output
10000000000
###Markdown
Special Operators Identity Operators **is and is not** are the identity operators in Python. They are used to check if two values (or variables) are located on the same part of the memory.
###Code
a = 5
b = 5
print(a is b) #True: small integers like 5 are cached, so a and b point to the same object
#check is not
a = 5
b = 6
print(a is not b) #True: a and b hold different values, so they refer to different objects
#check is not
a = 5
b = 6
print(a is b)
#check is not
a = 5
b = 6
print(a is not b)
#check is not
l1 = [1, 2, 3]
l2 = [1, 2, 3]
print(l1 is l2)
l1 = [1, 2, 3]
l2 = [1, 2, 3]
print(l1 is not l2)
id(l1)
id(l2)
s1 = "Satish"
s2 = "Satish"
print(s1 is s2)
s1 = "Satish"
s2 = "Satish"
print(s1 is not s2)
id(s1)
id(s2)
###Output
_____no_output_____
###Markdown
 Membership Operators **in and not in** are the membership operators in Python. They are used to test whether a value or variable is found in a sequence (string, list, tuple, set or dictionary).
###Code
lst = [1, 2, 3, 4]
print(10 not in lst) #check whether 10 is absent from the given list
#check 5 is present in a given list
d = {1: "a", 2: "b"}
print(2 not in d)
###Output
False
|
Numpy/Numpy.ipynb | ###Markdown
Introduction to NUMPY
###Code
import numpy as np
import time
import sys
mylist=[[1,2,3],[1,3,4],[5,4,7]]
mynewlist=np.array(mylist)
mylist
mynewlist
sorted_array=np.arange?
Sorted_Array=np.arange(1,12,2)
Sorted_Array
sorted_array=np.arange(1,11,2)
sorted_array
arr=np.NaN?
arr=np.zeros?
arr
arr=np.zeros((3,3))
arr
###Output
_____no_output_____
###Markdown
Operations on Numpy array
###Code
type(arr)
np.ndim(arr)
arr.dtype  # data type of the array's elements
s=range(1000)
print(sys.getsizeof(0)*len(s))
#this shows the memory taken by the list:
#the size of one element multiplied by the total length
new_array=np.arange(10)
print(new_array.size)
print(new_array.itemsize)
print(new_array.itemsize*len(new_array))
print(new_array.item(1))
print(new_array.item(2))
print(new_array.item(9))
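# Rough, illustrative timing comparison of summing a plain Python list vs a NumPy array.
# `time` and `numpy` are already imported at the top of this notebook; absolute numbers vary by machine.
py_list = list(range(1_000_000))
np_arr = np.arange(1_000_000)
t0 = time.time(); sum(py_list); t1 = time.time()
t2 = time.time(); np_arr.sum(); t3 = time.time()
print("python list sum:", t1 - t0, "seconds")
print("numpy array sum:", t3 - t2, "seconds")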
#new_array.partition?
aa=np.arange(10,1,-1)
print(aa)
#aa.partition(5)
#aa
a = np.array([6,2,3,4,5,9,8,7])
a.partition(3)
a
new_array
s
new=np.array([(12,3,4),(1,2,3),(4,3,2)])
new
print(new.ndim) #gives the dimension of the array
new1=np.array([(1,2,3),(3,4,5)])
print(new1.ndim)
print(new.itemsize)
#gives the size of single element
print(new.dtype)
#gives the datatype..int 32 bits
a=np.array([1,2,3])
print(new.size)
print(a.size)
print(new1.size)
print(new.shape) #this show rows and coloumns
print(a.shape)
print(new1.shape)
aa=np.array([(1,2,3,4,5),(6,7,8,9),(1,6)])
print(aa)
aa
print(aa.ndim)
print(aa.shape)
aa=aa.reshape(2)
aa
a=np.array([(1,2,3),(5,4,3),(7,8,9),()])
a
a.shape
a.reshape(1,9)
a.reshape(3,2)
a.reshape(3,1)
a
a[0,0]
a[0,2]
#the first index in [row, col] selects the row (0 for the first row, 1 for the second)
#the second index selects the column; the element at that position is returned
a[1,0]
a[0:,2] #prints column index 2 for every row starting from row 0
a[0:,1] #prints column index 1 for every row
#to restrict to specific rows, give a row slice such as 0:stop
#(e.g. rows 1 to 3 are selected with 1:4)
array=np.array([(1,2,3),(3,4,5),(5,6,7)])
array[0:2,2]
a
a=np.array([(1,2,3),(3,4,5),(5,6,7)])
#if the no of elements are same then only it will convert into matrix
print(a)
a=np.linspace(1,3,10) #returns 10 evenly spaced values between 1 and 3
a
aa=np.array([1,2,3,4])
print(aa.max())
print(aa.sum())
a
array=np.array([(1,2,3),(3,4,5),(3,2,1)])
print(array.sum(axis=1))
#axis=1 gives the sum of each row
#axis=0 gives the sum of each column
array
print(array.sum(axis=0))
# print(array.sum(axis=2))   # axis=2 does not exist for a 2D array and raises an error
#so with axis=0 we get the column sums (summing matching positions across rows)
#and with axis=1 we get the row sums
array
np.sqrt(array)
np.squeeze(array,axis=0)
x=np.array([[[0],[1],[2]]])
x
x.ndim
x.shape
q=np.squeeze(x)
q
x
q.ndim
np.std(a)
a
a=np.square(a)
a
a=np.array([(1,2,3),(3,4,5)])
b=np.array([(3,2,1),(5,4,3)])
print(a,"\n",b)
print(a+b)
print(np.concatenate((a, b)))   # ndarray has no .concat method; use np.concatenate
aa=[1,2,3]
bbb=[3,4,5]
print(aa+bbb)
print([x + y for x, y in zip(aa, bbb)])   # element-wise sum of the two lists (plain list + only concatenates)
print(np.vstack((a,b)))
print(np.hstack(a))
a
print(np.hstack((a,b)))
import matplotlib.pyplot as plt
x=np.arange(0,2*np.pi,0.1) #(start,stop,step)
y=np.sin(x)
plt.plot(x,y)
plt.show()
x=np.arange(0,4*np.pi,0.1) #(start,stop,step)
y=np.sin(x)
plt.plot(x,y)
plt.show()
x=np.arange(0,2*np.pi,0.1) #(start,stop,step)
y=np.tan(x)
plt.plot(x,y)
plt.show()
ar=np.array([1,2,3])
print(np.exp(ar)) #this will give all the values whith e ki power SOMETHING
print(np.exp(0))
print(np.log(ar)) #this will give all the values of log(diff values of ar)
def offeringNumber(n, templeHeight):
sum = 0 # Initialize result
# Go through all templs one by one
for i in range(n):
# Go to left while height
# keeps increasing
left = 0
right = 0
for j in range(i - 1, -1, -1):
if (templeHeight[j] < templeHeight[j + 1]):
left += 1
else:
break
# Go to right while height
# keeps increasing
for j in range(i + 1, n):
if (templeHeight[j] < templeHeight[j - 1]):
right += 1
else:
break
# This temple should offer maximum
# of two values to follow the rule.
sum += max(right, left) + 1
return sum
arr1 = [1, 2, 2]
print(offeringNumber(3, arr1))
arr2 = [1, 4, 3, 6, 2, 1]
print(offeringNumber(6, arr2))
###Output
4
10
###Markdown
Machine Learning in Python
Session 01: Numpy
Seyed Mohammad Sajadi
Installing NumPyTo install NumPy, we strongly recommend using a scientific Python distribution. If you’re looking for the full instructions for installing NumPy on your operating system, you can find all of the details here.If you already have Python, you can install NumPy with:
###Code
# !conda install numpy
###Output
_____no_output_____
###Markdown
or
###Code
# !pip install numpy
###Output
_____no_output_____
###Markdown
NumpyNumPy is a Python package which stands for ‘Numerical Python’. It is the core library for scientific computing, which contains a powerful n-dimensional array object, provide tools for integrating C, C++ etc. It is also useful in linear algebra, random number capability etc. NumPy array can also be used as an efficient multi-dimensional container for generic data. Now, let me tell you what exactly is a python numpy array. NumPy ArrayNumpy array is a powerful N-dimensional array object which is in the form of rows and columns. We can initialize numpy arrays from nested Python lists and access it elements. In order to perform these numpy operations, the next question which will come in your mind is: How to import NumPyAny time you want to use a package or library in your code, you first need to make it accessible.In order to start using NumPy and all of the functions available in NumPy, you’ll need to import it. This can be easily done with this import statement:
###Code
import numpy as np
np.__version__
print(np.__version__)
###Output
1.19.5
###Markdown
What’s the difference between a Python list and a NumPy array?NumPy gives you an enormous range of fast and efficient ways of creating arrays and manipulating numerical data inside them. While a Python list can contain different data types within a single list, all of the elements in a NumPy array should be homogeneous. The mathematical operations that are meant to be performed on arrays would be extremely inefficient if the arrays weren’t homogeneous. Why use NumPy?NumPy arrays are faster and more compact than Python lists. An array consumes less memory and is convenient to use. NumPy uses much less memory to store data and it provides a mechanism of specifying the data types. This allows the code to be optimized even further. Creating arrays in numpy
###Code
a = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
print(a)
###Output
[[1 2 3]
[4 5 6]
[7 8 9]]
###Markdown
###Code
print(np.where(a > 5, a, 0))
print(type(a))
###Output
<class 'numpy.ndarray'>
###Markdown
###Code
a
a[0]
a[1,2]
###Output
_____no_output_____
###Markdown
Shape and Reshape in numpy
###Code
print(a.shape)
type(a.shape)
print(a.shape[0])
b = np.array([[1, 2, 3], [4, 5, 6]]) # 2D array (or matrix)
b.shape
b
print(np.reshape(b, (3, 2)))
###Output
[[1 2]
[3 4]
[5 6]]
###Markdown
Reshaping and flattening multidimensional arrays
###Code
print(np.reshape(b, (1, -1))) # -1 means the number of columns will be determined automatically
print(np.reshape(b, (-1, 1))) # -1 means the number of rows will be determined automatically
###Output
[[1]
[2]
[3]
[4]
[5]
[6]]
###Markdown
There are two popular ways to flatten an array: .flatten() and .ravel(). The primary difference between the two is that the new array created using ravel() is actually a reference to the parent array (i.e., a “view”). This means that any changes to the new array will affect the parent array as well. Since ravel does not create a copy, it’s memory efficient. Arrays in numpy
###Code
print(a.ndim)
print(a.dtype)
c = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.float64)
print(c.dtype)
c
print(a.size) # number of elements
a.itemsize
print(c.itemsize) # size of each element (in bytes)
###Output
8
###Markdown
arange in numpy
###Code
d1 = np.arange(1, 20, step=3)
print(d1)
np.arange
###Output
_____no_output_____
###Markdown
linspace in numpy
###Code
d2 = np.linspace(1, 2, num=5)
print(d2)
d3 = np.linspace(1, 2, num=11)
print(d3)
###Output
[1. 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2. ]
###Markdown
Creating specific arrays- `np.ones`- `np.zeros`- `np.full`- `np.eye`
###Code
print(np.ones(shape=(3, 2)))
print(np.zeros(shape=(2, 3)))
print(np.zeros((2, 3)))
print(np.zeros(shape=(2, 3), dtype=np.int32))
print(5. * np.ones(shape=(3, 2)))
print(np.full((3, 2), 5))
print(np.eye(4))
print(np.fliplr(np.eye(4)))
print(np.random.rand(3, 2))
###Output
[[0.3202576 0.88752148]
[0.33223222 0.50138199]
[0.85697748 0.36754934]]
###Markdown
Operations on numpy arrays
###Code
x = np.array([[1, 2], [3, 4]], dtype=np.float64)
y = np.array([[5, 6], [7, 8]], dtype=np.float64)
print(x)
print()
print(y)
# Elementwise sum; both produce the array
print(x + y)
print()
print(np.add(x, y))
# Elementwise difference; both produce the array
print(x - y)
print()
print(np.subtract(x, y))
# Elementwise product; both produce the array
print(x * y)
print()
print(np.multiply(x, y))
# Elementwise division; both produce the array
print(x / y)
print()
print(np.divide(x, y))
# Elementwise square root; produces the array
print(np.sqrt(x))
###Output
[[1. 1.41421356]
[1.73205081 2. ]]
###Markdown
Adding, removing, and sorting elements
###Code
arr = np.array([2, 1, 5, 3, 7, 4, 6, 8])
np.sort(arr)
a = np.array([10, 20, 30, 40])
b = np.array([50, 60, 70, 80])
np.concatenate((a, b))
###Output
_____no_output_____
###Markdown
More useful array operations
###Code
a
type(a)
a.sum()
a.min()
a.max()
q = np.array([[1,2],[5,3],[4,6]])
q.max(axis=1)
###Output
_____no_output_____
###Markdown
Flatten() vs. Ravel()
###Code
from IPython.display import Image
Image(filename='ravelvsflatten.JPG')
print(b)
b.flatten()
b.ravel()
q.ravel()
###Output
_____no_output_____
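###Markdown
 A small demonstration of the copy-vs-view difference described above (assuming `np` is imported as in the cells above): writing through the result of `flatten()` leaves the parent untouched, while writing through `ravel()` (which returns a view for contiguous arrays like this one) changes the parent.
###Code
# flatten() returns a copy; ravel() returns a view when possible
parent = np.array([[1, 2], [3, 4]])
flat = parent.flatten()
flat[0] = 99
print(parent)    # unchanged
rav = parent.ravel()
rav[0] = 99
print(parent)    # the first element is now 99
###Output
_____no_output_____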
###Markdown
NumpyNumPy is the fundamental package for scientific computing with Python.In Python, data is almost universally represented as NumPy arrays. Even newer tools like Pandas are built around the NumPy array.We will be seeing: 1. 1D array, 2D array 2. Array slices, joins, subsets 3. Arithmetic Operations on 2D arrays 4. Covariance, Correlation Initializing Numpy Arrays 1. Using np.array2. Using np.ndarray
###Code
import numpy as np
np.random.seed(0) # seed for reproducibility
x1 = np.random.randint(10, size=6) # One-dimensional array
x2 = np.random.randint(10, size=(3, 3)) # Two-dimensional array
print(x1)
print(x2)
###Output
_____no_output_____
###Markdown
Using np.array()
###Code
x1 = np.array([1,2,3,4])
print(x1)
print(type(x1))
x2 = np.array([[1,2,3],[4,5,6]])
print(x2)
print(type(x2))
###Output
[[1 2 3]
[4 5 6]]
<class 'numpy.ndarray'>
###Markdown
Using np.ndarray()
###Code
x = np.ndarray(shape=(1,3),dtype=int, buffer = np.array([1,2,3]))
print(x)
x = np.append(x,5)
print(x)
print(type(x))
x = np.ndarray(shape=(2,2), dtype=float,buffer = np.array([[1.4,2.5],[1.3,2.4]]))
print(x)
x = np.ndarray(shape=(2,2), dtype=int,buffer = np.array([[1,2],[1,2]]))
print(x)
###Output
[[1 2]
[1 2]]
###Markdown
Attributes of ndarray Each array has attributes `ndim` (the number of dimensions), `shape` (the size of each dimension), and `size` (the total size of the array):
###Code
x2 = np.array([[1,2,3],[4,5,6]])
print("x2.ndim = ",x2.ndim)
print("x2.shape = ",x2.shape)
print("x2.size = ",x2.size)
###Output
x2.ndim = 2
x2.shape = (2, 3)
x2.size = 6
###Markdown
Another useful attribute is the `dtype` which tells you about the type of elements in the array:
###Code
print("x2.dtype = ",x2.dtype)
###Output
x2.dtype = int64
###Markdown
Other attributes include `itemsize`, which lists the size (in bytes) of each array element, and `nbytes`, which lists the total size (in bytes) of the array:
###Code
print("itemsize:", x2.itemsize, "bytes")
print("nbytes:", x2.nbytes, "bytes")
###Output
itemsize: 8 bytes
nbytes: 48 bytes
###Markdown
Array Indexing: Accessing Single Elements In a one-dimensional array, the ith value (counting from zero) can be accessed by specifying the desired index in square brackets, just as with Python lists:
###Code
x1 = np.array([1,2,3,4])
print("x1 = ",x1)
print("x1[0] = ",x1[0]) # just like arrays in c/c++
print("x1[-1] = ",x1[-1]) # negative indexing just like lists
###Output
x1 = [1 2 3 4]
x1[0] = 1
x1[-1] = 4
###Markdown
In a multi-dimensional array, items can be accessed using a comma-separated tuple of indices:
###Code
print("x2 = "); print(x2)
print("x2[0] = ",x2[0]) # will print the entire 1st row
# to print 1st element of 1st row
print("x2[0][0]= ", x2[0][0])
print("x2[0,0] = ",x2[0,0])
# to print 3rd element of 2nd row
print("x2[1][2]= ", x2[1][2])
print("x2[1,2]= ", x2[1,2])
###Output
_____no_output_____
###Markdown
Values can also be modified using any of the above index notation:
###Code
x2[0, 0] = 12
print(x2)
x2[0][0] = 14
print(x2)
###Output
_____no_output_____
###Markdown
**NOTE :** Unlike Python lists, NumPy arrays have a fixed type. This means, for example, if you attempt to insert a floating-point value to an integer array, the value will be silently truncated.i.e the float value gets converted to nearest int value.
###Code
x1 = np.ndarray(5, buffer = np.array([1,2,3,4,5]),dtype = int)
print(x1)
x1[2] = 5.7
print("x1 after changing : ",x1)
###Output
_____no_output_____
###Markdown
Array Slicing and Subsetting : Accessing Subarrays Just as we can use square brackets to access individual array elements, we can also use them to access subarrays with the slice notation, marked by the colon (:) character. The NumPy slicing syntax follows that of the standard Python list; to access a slice of an array x, use :```pythonx[start:stop:step]```**NOTE :** Default value of `start` is `0`, `stop` is `size of object` and `step` is `1`.
###Code
x = np.ndarray(10, buffer = np.array([0,1,2,3,4,5,6,7,8,9]),dtype = int)
print(x)
print("x[:] = ",x[:])
print("x[:5] = ",x[:5])
print("x[5:] = ",x[5:])
print("x[1:5] = ",x[1:5])
print("x[1:5:2] = ",x[1:5:2])
print("x[::-1] = ",x[::-1])
np.random.seed(0) # seed for reproducibility
x2 = np.random.randint(10, size=(3, 3))
print(x2)
print("x2[:2, :3] = "); print(x2[:2, :3]) # first two rows, first three columns
print("x2[:3, ::2] = "); print(x2[:3, ::2]) # all rows, every other column
print("x2[::-1, ::-1] = "); print(x2[::-1, ::-1]) #reversed 2D array
print("x2[:, 0] = ",x2[:, 0]) # first column of x2
###Output
_____no_output_____
###Markdown
Joining Two Arrays Joining of two arrays in NumPy, is primarily accomplished using the routine `np.concatenate`.
###Code
x = np.array([1,2,3])
y = np.array([4,5,6])
z = np.concatenate([x,y]) # Combines x and y to give one array.
# a = np.concatenate([x,y,z])
print(z)
x2 = np.array([[1,2,3],[2,3,4]])
y2 = np.array([[3,4,5],[4,5,6]])
z2 = np.concatenate([x2,y2])
print(z2)
###Output
_____no_output_____
###Markdown
Arithmetic Operations on 2D arrays | Operator | Equivalent Function ||---------- |--------------------- || + | np.add || - | np.subtract || * | np.multiply |
###Code
x2 = np.array([[1,2,3],[2,3,4]])
y2 = np.array([[3,4,5],[4,5,6]])
print("x2 + y2 = "); print(np.add(x2,y2))
print("x2 - y2 = "); print(np.subtract(x2,y2))
print("x2 * y2 = "); print(np.multiply(x2,y2))
###Output
_____no_output_____
###Markdown
 **NOTE :** Here, as you can see, this is not true matrix multiplication; we simply get a matrix where `x2[i,j] * y2[i,j]` is the output (an element-wise product).
###Code
x2 = np.array([[1,2,3],[2,3,4]])
y2 = np.array([[3,4,5],[4,5,6]])
# print(np.matmul(x2,y2))   # raises a ValueError here: the inner dimensions (3 and 2) do not match
x2 = np.array([[1,2,3],[2,3,4]])
y2 = np.array([[3,4],[4,5],[5,6]])
print(np.matmul(x2,y2))
###Output
_____no_output_____
###Markdown
CovarianceCovariance indicates the level to which two variables vary together. If we examine N-dimensional samples, `X = [x_1, x_2, ... x_N]^T`, then the covariance matrix element `C_{ij}` is the covariance of `x_i` and `x_j`. The element `C_{ii}` is the variance of `x_i`.
###Code
x2 = np.array([[0,1,2],[2,1,0]])
print(x2)
###Output
_____no_output_____
###Markdown
Note here how `x[0]` increases and `x[1]` decreases. This is also shown by the covariance matrix :
###Code
print(np.cov(x2))
###Output
_____no_output_____
###Markdown
Note that `C[0,1]` and `C[1,0]`, the covariance between `x[0]` and `x[1]`, are negative, reflecting that one increases while the other decreases. The diagonal entries `C[0,0]` and `C[1,1]` are the variances of `x[0]` and `x[1]` (each happens to equal 1 here). Note how x and y are combined:
###Code
x = np.array([-2.1, -1, 4.3])
y = np.array([3, 1.1, 0.12])
X = np.stack((x, y))
print(np.cov(X))
# To check
print(np.cov(x))
print(np.cov(y))
###Output
_____no_output_____
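###Markdown
As a quick sketch (using the `x`, `y` and `X` defined in the previous cell), we can recompute `C[0,1]` directly from the definition above and confirm it matches `np.cov`:
###Code
# Manual covariance of x and y, using the (n - 1) normalization that np.cov uses by default
n = x.size
c01 = np.sum((x - x.mean()) * (y - y.mean())) / (n - 1)
print(c01)
print(np.cov(X)[0, 1])   # should match the value above
###Output
_____no_output_____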
###Markdown
Correlation The term "correlation" refers to a mutual relationship or association between quantities. It is a standardised form of covariance: correlation coefficients take values in `[-1,1]`. In NumPy (and in general), the correlation matrix is the normalised version of the covariance matrix.
###Code
x = np.array([-2.1, -1, 4.3])
y = np.array([3, 1.1, 0.12])
X = np.stack((x, y))
print(np.corrcoef(X))
###Output
_____no_output_____
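###Markdown
The following cell is a small sketch illustrating the "normalised covariance" point: dividing the off-diagonal covariance by the product of the standard deviations reproduces the coefficient reported by `np.corrcoef`:
###Code
# Correlation coefficient as normalized covariance
C = np.cov(X)
r01 = C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])
print(r01)   # should match np.corrcoef(X)[0, 1]
###Output
_____no_output_____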
###Markdown
Linear Regression Linear regression is used for finding a linear relationship between a target and one or more predictors. Simple linear regression is useful for finding the relationship between two continuous variables: one is the predictor or independent variable and the other is the response or dependent variable. scipy.stats.linregress `scipy.stats.linregress(x, y=None)` Parameters: x, y : array_like. Returns: slope : float > slope of the regression line; intercept : float > intercept of the regression line; r-value : float > correlation coefficient; p-value : float > two-sided p-value for a hypothesis test whose null hypothesis is that the slope is zero; stderr : float > standard error of the estimate.
###Code
from scipy import stats
x = np.random.random(10)
y = np.random.random(10)
print(x); print(y)
slope, intercept, r_value, p_value, std_err = stats.linregress(x,y)
slope
###Output
_____no_output_____
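###Markdown
As an added illustration (using the values returned above), the fitted slope and intercept can be used to predict y at the observed x values, and `r_value**2` gives the usual R-squared goodness-of-fit summary:
###Code
# Predictions from the fitted line, plus R-squared
y_pred = intercept + slope * x
print(y_pred)
print("r squared :", r_value**2)
###Output
_____no_output_____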
###Markdown
NumPy Arrays
###Code
my_list = [1,2,3]
import numpy as np
arr = np.array(my_list)
arr
marks = [[1,2,3],[4,5,6],[7,8,9]]
np.array(marks)
np.arange(0,10)
np.arange(0,11,2)
np.zeros(3)
np.zeros((2,3))
np.ones(3)
np.ones((2,3))
np.linspace(0,5)
np.eye(3)
np.random.rand(3)
np.random.rand(3,3)
np.random.randn(3)
np.random.randn(3,3)
np.random.randint(1, 100) # with lowest exclusive and highest exclusive
np.random.randint(1, 100, 10)
arr = np.arange(25)
arr
ranarr = np.random.randint(1, 50, 10)
ranarr
arr.reshape(5,5)
ranarr.max()
ranarr.min()
ranarr.argmax() # Index value of the max element in the array
ranarr
ranarr.argmin()
arr = arr.reshape(5,5)
arr.shape
###Output
_____no_output_____
###Markdown
NumPy Indexing and Selection
###Code
import numpy as np
arr = np.arange(0,11)
arr
arr[8]
arr[1:5]
arr[:6]
arr[5:]
arr[0:5] = 100
arr
slice_of_arr = arr[0:6]
slice_of_arr
slice_of_arr[:]
slice_of_arr[:] = 99
slice_of_arr
arr
arr_copy = arr.copy()
arr_copy
arr
arr_copy[:] = 100
arr_copy
arr
arr_2d = np.array([[5,10,15],[20,25,30],[35,40,45]])
arr_2d
arr_2d[0][0]
arr_2d[2][1]
arr_2d[2,1]
arr_2d[:2,1:]
arr_2d[1:2]
arr_2d[:,0]
arr = np.arange(0,11)
arr
arr > 5
arr[arr > 5]
arr_2d = np.arange(50).reshape(5,10)
arr_2d
arr_2d[1:3, 3:5]
###Output
_____no_output_____
###Markdown
NumPy Operations- Array to Array- Array to Scalars- Universal Array Functions
###Code
# import numpy as np
arr = np.arange(11)
arr
arr + arr
arr - arr
arr * arr
arr + 100
arr - 100
arr * 100
arr
arr ** 2
np.sqrt(arr)
np.exp(arr)
np.max(arr)
np.sin(arr)
np.cos(arr)
np.log(arr)
###Output
C:\Users\user\AppData\Local\Temp/ipykernel_5324/3120950136.py:1: RuntimeWarning: divide by zero encountered in log
np.log(arr)
###Markdown
NumPy: Numerical Arrays for Python **Learning Objectives:** Learn how to create, transform and visualize multidimensional data of a single type using NumPy. NumPy is the foundation for scientific computing and data science in Python. Its core data object is a multidimensional array with the following characteristics:* Any number of dimensions* All elements of an array have the same data type* Array elements are usually native machine data types* The memory for an array is a contiguous block that can easily be passed to other numerical libraries (BLAS, LAPACK, etc.)* Most of NumPy is implemented in C, so it is fast. Plotting While this notebook doesn't focus on plotting, Matplotlib will be used to make a few basic plots.
###Code
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
The `vizarray` package will be used to visualize NumPy arrays:
###Code
import antipackage
from github.ellisonbg.misc import vizarray as va
###Output
_____no_output_____
###Markdown
Multidimensional array type This is the canonical way you should import Numpy:
###Code
import numpy as np
data = [0,2,4,6]
a = np.array(data)
type(a)
a
###Output
_____no_output_____
###Markdown
The `vz.vizarray` function can be used to visualize a 1d or 2d NumPy array using a colormap:
###Code
va.vizarray(a)
###Output
_____no_output_____
###Markdown
The shape of the array:
###Code
a.shape
###Output
_____no_output_____
###Markdown
The number of array dimensions:
###Code
a.ndim
###Output
_____no_output_____
###Markdown
The number of array elements:
###Code
a.size
###Output
_____no_output_____
###Markdown
The number of bytes the array takes up:
###Code
a.nbytes
###Output
_____no_output_____
###Markdown
The `dtype` attribute describes the "data type" of the elements:
###Code
a.dtype
###Output
_____no_output_____
###Markdown
Creating arrays Arrays can be created with nested lists or tuples:
###Code
data = [[0.0,2.0,4.0,6.0],[1.0,3.0,5.0,7.0]]
b = np.array(data)
b
b[0,0]
b[1,1]
b[1,2]
b[1,3]
b[0,1]
va.vizarray(b)
b.shape, b.ndim, b.size, b.nbytes
###Output
_____no_output_____
###Markdown
The `arange` function is similar to Python's builtin `range` function, but creates an array:
###Code
c = np.arange(0.0, 10.0, 1.0) # Step size of 1.0
c
###Output
_____no_output_____
###Markdown
The `linspace` function is similar, but allows you to specify the number of points:
###Code
e = np.linspace(0.0, 5.0, 11) # 11 points
e
###Output
_____no_output_____
###Markdown
There are also `empty`, `zeros` and `ones` functions:
###Code
np.empty((4,4))
np.zeros((3,3))
np.ones((3,3))
###Output
_____no_output_____
###Markdown
See also:* `empty_like`, `ones_like`, `zeros_like`* `eye`, `identity`, `diag` dtype Arrays have a `dtype` attribute that encapsulates the "data type" of each element. It can be set:* Implicitly by the element type* By passing the `dtype` argument to an array creation function. Here is an integer valued array:
###Code
a = np.array([0,1,2,3])
a, a.dtype
###Output
_____no_output_____
###Markdown
All array creation functions accept an optional `dtype` argument:
###Code
b = np.zeros((2,2), dtype=np.complex64)
b
c = np.arange(0, 10, 2, dtype=np.float)
c
###Output
_____no_output_____
###Markdown
You can use the `astype` method to create a copy of the array with a given `dtype`:
###Code
d = c.astype(dtype=np.int)
d
###Output
_____no_output_____
###Markdown
IPython's tab completion is useful for exploring the various available `dtypes`:
###Code
np.float*?
###Output
_____no_output_____
###Markdown
The NumPy documentation on [dtypes](http://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html) describes the many other ways of specifying dtypes. Array operations Basic mathematical operations are **elementwise** for:* Scalars and arrays* Arrays and arrays. Fill an array with a value:
###Code
a = np.empty((3,3))
a.fill(0.1)
a
b = np.ones((3,3))
b
###Output
_____no_output_____
###Markdown
Addition is elementwise:
###Code
a+b
###Output
_____no_output_____
###Markdown
Division is elementwise:
###Code
b/a
###Output
_____no_output_____
###Markdown
As are powers:
###Code
a**2
###Output
_____no_output_____
###Markdown
Scalar multiplication is also elementwise:
###Code
np.pi*b
###Output
_____no_output_____
###Markdown
Indexing and slicing Indexing and slicing provide an efficient way of getting the values in an array and modifying them.
###Code
a = np.random.rand(10,10)
###Output
_____no_output_____
###Markdown
The `enable` function is part of `vizarray` and enables a nice display of arrays:
###Code
va.enable()
a
###Output
_____no_output_____
###Markdown
Like Python lists and tuples, NumPy arrays have zero-based indexing and use the `[]` syntax for getting and setting values:
###Code
a[0,0]
###Output
_____no_output_____
###Markdown
An index of `-1` refers to the last element along that axis:
###Code
a[-1,-1] == a[9,9]
###Output
_____no_output_____
###Markdown
Extract the 0th column using the `:` syntax, which denotes all elements along that axis.
###Code
a[:,0]
###Output
_____no_output_____
###Markdown
The last row:
###Code
a[-1,:]
###Output
_____no_output_____
###Markdown
You can also slice ranges:
###Code
a[0:2,0:2]
###Output
_____no_output_____
###Markdown
Assignment also works with slices:
###Code
a[0:5,0:5] = 1.0
a
###Output
_____no_output_____
###Markdown
Note how even though we assigned the value to the slice, the original array was changed. This clarifies that slices are **views** of the same data, not a copy.
###Code
va.disable()
###Output
_____no_output_____
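###Markdown
A small sketch: if you need an independent subarray rather than a view, call `.copy()` on the slice; modifying the copy then leaves the original array untouched.
###Code
# Copy the slice, modify the copy, and confirm the original is unchanged
sub = a[0:2, 0:2].copy()
sub[:, :] = -1.0
print(a[0:2, 0:2])   # still 1.0 from the earlier slice assignment
###Output
_____no_output_____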
###Markdown
Boolean indexing Arrays can be indexed using other arrays that have boolean values.
###Code
ages = np.array([23,56,67,89,23,56,27,12,8,72])
genders = np.array(['m','m','f','f','m','f','m','m','m','f'])
ages.size
genders.size
###Output
_____no_output_____
###Markdown
Boolean expressions involving arrays create new arrays with a `bool` dtype and the elementwise result of the expression:
###Code
ages > 30
genders == 'm'
###Output
_____no_output_____
###Markdown
Boolean expressions provide an extremely fast and flexible way of querying arrays:
###Code
(ages > 10) & (ages < 50)
###Output
_____no_output_____
###Markdown
You can use a boolean array to index into the original or another array. This selects the ages of all females in the `genders` array:
###Code
mask = (genders == 'f')
ages[mask]
ages[ages>30]
###Output
_____no_output_____
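###Markdown
A sketch of a related use: a boolean mask can also appear on the left-hand side of an assignment to modify only the selected elements (here, raising every age below 18 up to 18 in a copy of the array).
###Code
# Boolean-mask assignment on a copy, so the original ages array is preserved
clipped = ages.copy()
clipped[clipped < 18] = 18
print(clipped)
###Output
_____no_output_____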
###Markdown
Reshaping, transposing
###Code
va.enable()
a = np.random.rand(3,4)
a
###Output
_____no_output_____
###Markdown
The `T` attribute contains the transpose of the original array:
###Code
a.T
###Output
_____no_output_____
###Markdown
The `reshape` method can be used to change the shape and even the number of dimensions:
###Code
a.reshape(2,6)
a.reshape(6,2)
###Output
_____no_output_____
###Markdown
The `ravel` method strings the array out in one dimension:
###Code
a.ravel()
va.disable()
###Output
_____no_output_____
###Markdown
Universal functions Universal functions, or "ufuncs," are functions that take and return arrays or scalars. They have the following characteristics:* Vectorized C implementations, much faster than hand written loops in Python* Allow for concise Pythonic code. Here is a complete list of the [available NumPy ufuncs](http://docs.scipy.org/doc/numpy/reference/ufuncs.html#available-ufuncs).
###Code
va.set_block_size(5)
va.enable()
###Output
_____no_output_____
###Markdown
Here is a linear sequence of values:
###Code
t = np.linspace(0.0, 4*np.pi, 100)
t
###Output
_____no_output_____
###Markdown
Take the $sin$ of each element of the array:
###Code
np.sin(t)
###Output
_____no_output_____
###Markdown
As the next two examples show, multiple ufuncs can be used to create complex mathematical expressions that can be computed efficiently:
###Code
np.exp(np.sqrt(t))
va.disable()
va.set_block_size(30)
plt.plot(t, np.exp(-0.1*t)*np.sin(t))
###Output
_____no_output_____
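###Markdown
As a rough, illustrative check of how much faster a ufunc is than an explicit Python loop (a sketch added here for concreteness, using a larger array):
###Code
# Compare the vectorized ufunc expression with an equivalent pure-Python loop
import math
big_t = np.linspace(0.0, 4*np.pi, 100000)
%timeit np.exp(np.sqrt(big_t))
%timeit [math.exp(math.sqrt(v)) for v in big_t]
###Output
_____no_output_____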
###Markdown
In general, you should always try to use ufuncs rather than do computations using for loops. These types of array based computations are referred to as *vectorized*. Basic data processing
###Code
ages = np.array([23,56,67,89,23,56,27,12,8,72])
genders = np.array(['m','m','f','f','m','f','m','m','m','f'])
###Output
_____no_output_____
###Markdown
Numpy has a basic set of methods and functions for computing basic quantities about data.
###Code
ages.min(), ages.max()
###Output
_____no_output_____
###Markdown
Compute the mean:
###Code
ages.mean()
###Output
_____no_output_____
###Markdown
Compute the variance and standard deviation:
###Code
ages.var(), ages.std()
###Output
_____no_output_____
###Markdown
The `bincount` function counts how many times each value occurs in the array:
###Code
np.bincount(ages)
###Output
_____no_output_____
###Markdown
The `cumsum` and `cumprod` methods compute cumulative sums and products:
###Code
ages.cumsum()
ages.cumprod()
###Output
_____no_output_____
###Markdown
Most of the functions and methods above take an `axis` argument that will apply the action along a particular axis:
###Code
a = np.random.randint(0,10,(3,4))
a
###Output
_____no_output_____
###Markdown
With `axis=0` the action takes place along rows:
###Code
a.sum(axis=0)
###Output
_____no_output_____
###Markdown
With `axis=1` the action takes place along columns:
###Code
a.sum(axis=1)
###Output
_____no_output_____
###Markdown
The `unique` function is extremely useful in working with categorical data:
###Code
np.unique(genders)
np.unique(genders, return_counts=True)
###Output
_____no_output_____
###Markdown
The where function allows you to apply conditional logic to arrays. Here is a rough sketch of how it works:```pythondef where(condition, if_true, if_false):```
###Code
np.where(ages>30, 0, 1)
###Output
_____no_output_____
###Markdown
The `if_true` and `if_false` values can be arrays themselves:
###Code
np.where(ages<30, 0, ages)
###Output
_____no_output_____
###Markdown
File IO NumPy has a number of different functions for reading and writing arrays to and from disk. Single array, binary format
###Code
a = np.random.rand(10)
a
###Output
_____no_output_____
###Markdown
Save the array to a binary file named `array1.npy`:
###Code
np.save('array1', a)
ls
###Output
array1.npy Day05.ipynb Numpy.ipynb
###Markdown
Using `%pycat` to look at the file shows that it is binary:
###Code
%pycat array1.npy
###Output
_____no_output_____
###Markdown
Load the array back into memory:
###Code
a_copy = np.load('array1.npy')
a_copy
###Output
_____no_output_____
###Markdown
Single array, text format
###Code
b = np.random.randint(0,10,(5,3))
b
###Output
_____no_output_____
###Markdown
The `savetxt` function saves arrays in a simple, textual format that is less efficient, but easier for other languages to read:
###Code
np.savetxt('array2.txt', b)
ls
###Output
array1.npy array2.txt Day05.ipynb Numpy.ipynb
###Markdown
Using `%pycat` to look at the contents shows that the file is indeed a plain text file:
###Code
%pycat array2.txt
np.loadtxt('array2.txt')
###Output
_____no_output_____
###Markdown
Multiple arrays, binary format The `savez` function provides an efficient way of saving multiple arrays to a single file:
###Code
np.savez('arrays.npz', a=a, b=b)
###Output
_____no_output_____
###Markdown
The `load` function returns a dictionary like object that provides access to the individual arrays:
###Code
a_and_b = np.load('arrays.npz')
a_and_b['a']
a_and_b['b']
###Output
_____no_output_____
###Markdown
Linear algebra NumPy has excellent linear algebra capabilities.
###Code
a = np.random.rand(5,5)
b = np.random.rand(5,5)
###Output
_____no_output_____
###Markdown
Remember that array operations are elementwise. Thus, this is **not** matrix multiplication:
###Code
a*b
###Output
_____no_output_____
###Markdown
To get matrix multiplication use `np.dot`:
###Code
np.dot(a, b)
###Output
_____no_output_____
###Markdown
Or, NumPy has a `matrix` subclass for which matrix operations are the default (note that `np.matrix` is deprecated in recent NumPy; prefer plain arrays with `np.dot` or the `@` operator):
###Code
m1 = np.matrix(a)
m2 = np.matrix(b)
m1*m2
###Output
_____no_output_____
###Markdown
The `np.linalg` package has a wide range of fast linear algebra operations. Here is the determinant:
###Code
np.linalg.det(a)
###Output
_____no_output_____
###Markdown
Matrix inverse:
###Code
np.linalg.inv(a)
###Output
_____no_output_____
###Markdown
Eigenvalues:
###Code
np.linalg.eigvals(a)
###Output
_____no_output_____
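###Markdown
A related sketch: `np.linalg.eig` returns both eigenvalues and eigenvectors, and each pair should satisfy a @ v = lambda * v up to floating point error (eigenvalues of a general real matrix may be complex).
###Code
# Check the defining property for the first eigenpair of a
vals, vecs = np.linalg.eig(a)
print(np.allclose(a @ vecs[:, 0], vals[0] * vecs[:, 0]))
###Output
_____no_output_____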
###Markdown
NumPy can be built against fast BLAS/LAPACK implementations for these linear algebra operations.
###Code
c = np.random.rand(2000,2000)
%timeit -n1 -r1 evs = np.linalg.eigvals(c)
###Output
1 loops, best of 1: 35.7 s per loop
###Markdown
Random numbers NumPy has functions for creating arrays of random numbers from different distributions in `np.random`, as well as handling things like permutation, shuffling, and choosing.Here is the [numpy.random documentation](http://docs.scipy.org/doc/numpy/reference/routines.random.html).
###Code
plt.hist(np.random.random(250))
plt.title('Uniform Random Distribution $[0,1]$')
plt.xlabel('value')
plt.ylabel('count')
plt.hist(np.random.randn(250))
plt.title('Standard Normal Distribution')
plt.xlabel('value')
plt.ylabel('count')
###Output
_____no_output_____
###Markdown
The `shuffle` function shuffles an array in place:
###Code
a = np.arange(0,10)
np.random.shuffle(a)
a
###Output
_____no_output_____
###Markdown
The `permutation` function does the same thing but first makes a copy:
###Code
a = np.arange(0,10)
print(np.random.permutation(a))
print(a)
###Output
[7 8 9 5 2 3 6 1 0 4]
[0 1 2 3 4 5 6 7 8 9]
###Markdown
The `choice` function provides a powerful way of creating synthetic data sets of discrete data:
###Code
np.random.choice(['m','f'], 20, p=[0.25,0.75])
###Output
_____no_output_____
###Markdown
Numpy Array Creation
###Code
list1 = [10,20,30,40,50,60]
list1
# Display the type of an object
type(list1)
#Convert list to Numpy Array
arr1 = np.array(list1)
arr1
#Memory address of an array object
arr1.data
# Display type of an object
type(arr1)
#Datatype of array
arr1.dtype
# Convert Integer Array to FLOAT
arr1.astype(float)
# Generate evenly spaced numbers (space =1) between 0 to 10
np.arange(0,10)
# Generate numbers between 0 to 100 with a space of 10
np.arange(0,100,10)
# Generate numbers between 10 to 100 with a space of 10 in descending order
np.arange(100, 10, -10)
#Shape of Array
arr3 = np.arange(0,10)
arr3.shape
arr3
# Size of array
arr3.size
# Dimension
arr3.ndim
# Datatype of object
arr3.dtype
# Bytes consumed by one element of an array object
arr3.itemsize
# Bytes consumed by an array object
arr3.nbytes
# Length of array
len(arr3)
# Generate an array of zeros
np.zeros(10)
# Generate an array of ones with given shape
np.ones(10)
# Repeat 10 five times in an array
np.repeat(10,5)
# Repeat each element in array 'a' thrice
a= np.array([10,20,30])
np.repeat(a,3)
# Array of 10's
np.full(5,10)
# Generate array of Odd numbers
ar1 = np.arange(1,20)
ar1[ar1%2 ==1]
# Generate array of even numbers
ar1 = np.arange(1,20)
ar1[ar1%2 == 0]
# Generate evenly spaced 4 numbers between 10 to 20.
np.linspace(10,20,4)
# Generate evenly spaced 11 numbers between 10 to 20.
np.linspace(10,20,11)
# Create an array of random values
np.random.random(4)
# Generate an array of Random Integer numbers
np.random.randint(0,500,5)
# Generate an array of Random Integer numbers
np.random.randint(0,500,10)
# Using random.seed we can reproduce the same sequence of random numbers
np.random.seed(123)
np.random.randint(0,100,10)
# The same seed (123) produces the same sequence again
np.random.seed(123)
np.random.randint(0,100,10)
# A different seed (101) produces a different, but again reproducible, sequence
np.random.seed(101)
np.random.randint(0,100,10)
# The same seed (101) reproduces that sequence
np.random.seed(101)
np.random.randint(0,100,10)
# Generate array of Random float numbers
f1 = np.random.uniform(5,10, size=(10))
f1
# Extract Integer part
np.floor(f1)
# Truncate decimal part
np.trunc(f1)
# Convert Float Array to Integer array
f1.astype(int)
# Normal distribution (mean=0 and variance=1)
b2 =np.random.randn(10)
b2
arr1
# Enumerate for Numpy Arrays
for index, value in np.ndenumerate(arr1):
print(index, value)
###Output
(0,) 10
(1,) 20
(2,) 30
(3,) 40
(4,) 50
(5,) 60
###Markdown
Operations on an Array
###Code
arr2 = np.arange(1,20)
arr2
# Sum of all elements in an array
arr2.sum()
# Cumulative Sum
np.cumsum(arr2)
# Find Minimum number in an array
arr2.min()
# Find MAX number in an array
arr2.max()
# Find INDEX of Minimum number in an array
arr2.argmin()
# Find INDEX of MAX number in an array
arr2.argmax()
# Find mean of all numbers in an array
arr2.mean()
# Find median of all numbers present in arr2
np.median(arr2)
# Variance
np.var(arr2)
# Standard deviation
np.std(arr2)
# Calculating percentiles
np.percentile(arr2,70)
# 10th & 70th percentile
np.percentile(arr2,[10,70])
###Output
_____no_output_____
###Markdown
Operations on a 2D Array
###Code
A = np.array([[1,2,3,0] , [5,6,7,22] , [10 , 11 , 1 ,13] , [14,15,16,3]])
A
# SUM of all numbers in a 2D array
A.sum()
# MAX number in a 2D array
A.max()
# Minimum
A.min()
# Column-wise minimum value
np.amin(A, axis=0)
# Row-wise minimum value
np.amin(A, axis=1)
# Mean of all numbers in a 2D array
A.mean()
# Mean
np.mean(A)
# Median
np.median(A)
# 50 percentile = Median
np.percentile(A,50)
np.var(A)
np.std(A)
np.percentile(arr2,70)
# Enumerate for Numpy 2D Arrays
for index, value in np.ndenumerate(A):
print(index, value)
###Output
(0, 0) 1
(0, 1) 2
(0, 2) 3
(0, 3) 0
(1, 0) 5
(1, 1) 6
(1, 2) 7
(1, 3) 22
(2, 0) 10
(2, 1) 11
(2, 2) 1
(2, 3) 13
(3, 0) 14
(3, 1) 15
(3, 2) 16
(3, 3) 3
###Markdown
Reading elements of an array
###Code
a = np.array([7,5,3,9,0,2])
# Access first element of the array
a[0]
# Access all elements of Array except first one.
a[1:]
# Fetch 2nd , 3rd & 4th value from the Array
a[1:4]
# Get last element of the array
a[-1]
a[-3]
a[-6]
a[-3:-1]
###Output
_____no_output_____
###Markdown
Replace elements in array
###Code
ar = np.arange(1,20)
ar
# Replace EVEN numbers with ZERO
rep1 = np.where(ar % 2 == 0, 0 , ar)
print(rep1)
ar2 = np.array([10, 20 , 30 , 10 ,10 ,20, 20])
ar2
# Replace 10 with value 99
rep2 = np.where(ar2 == 10, 99 , ar2)
print(rep2)
p2 = np.arange(0,100,10)
p2
# Replace values at INDEX loc 0,3,5 with 33,55,99
np.put(p2, [0, 3 , 5], [33, 55, 99])
p2
###Output
_____no_output_____
###Markdown
Missing Values in an array
###Code
a = np.array([10 ,np.nan,20,30,60,np.nan,90,np.inf])
a
# Search for missing values and return as a boolean array
np.isnan(a)
# Index of missing values in an array
np.where(np.isnan(a))
# Replace all missing values with 99
a[np.isnan(a)] = 99
a
# Check if array has any NULL value
np.isnan(a).any()
A = np.array([[1,2,np.nan,4] , [np.nan,6,7,8] , [10 , np.nan , 12 ,13] , [14,15,16,17]])
A
# Search for missing values and return as a boolean array
np.isnan(A)
# Index of missing values in an array
np.where(np.isnan(A))
###Output
_____no_output_____
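###Markdown
Note (a short added sketch): `np.isnan` does not flag `np.inf`, which was also present in the first array above; infinities can be detected separately with `np.isinf`, or together with NaNs via `np.isfinite`.
###Code
# Detect infinities and non-finite values
b = np.array([10, np.nan, 20, 30, 60, np.nan, 90, np.inf])
print(np.isinf(b))
print(np.isfinite(b))
###Output
_____no_output_____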
###Markdown
Stack Arrays Vertically
###Code
a = np.zeros(20).reshape(2,-1)
b = np.repeat(1, 20).reshape(2,-1)
a
b
np.vstack([a,b])
a1 = np.array([[1], [2], [3]])
b1 = np.array([[4], [5], [6]])
a1
b1
np.vstack([a1,b1])
###Output
_____no_output_____
###Markdown
Stack Arrays Horizontally
###Code
np.hstack([a,b])
np.hstack([a1,b1])
### hstack & vstack
arr1 = np.array([[7,13,14],[18,10,17],[11,12,19]])
arr2= np.array([16,6,1])
arr3= np.array([[5,8,4,3]])
np.hstack((np.vstack((arr1,arr2)),np.transpose(arr3)))
###Output
_____no_output_____
###Markdown
Common items between two Arrays
###Code
c1 = np.array([10,20,30,40,50,60])
c2 = np.array([12,20,33,40,55,60])
np.intersect1d(c1,c2)
###Output
_____no_output_____
###Markdown
Remove Common Elements
###Code
# Remove common elements of C1 & C2 array from C1
np.setdiff1d(c1,c2)
###Output
_____no_output_____
###Markdown
Process Elements on Conditions
###Code
a = np.array([1,2,3,6,8])
b = np.array([10,2,30,60,8])
np.where(a == b) # returns the indices of elements in an input array where the given condition is satisfied.
# Return an array where condition is satisfied
a[np.where(a == b)]
# Return all numbers betweeen 20 & 35
a1 = np.arange(0,60)
a1[np.where ((a1>20) & (a1<35))]
# Return all numbers betweeen 20 & 35 OR numbers divisible by 10
a1 = np.arange(0,60)
a1[np.where (((a1>20) & (a1<35)) | (a1 % 10 ==0)) ]
# Return all numbers betweeen 20 & 35 using np.logical_and
a1[np.where(np.logical_and(a1>20, a1<35))]
###Output
_____no_output_____
###Markdown
Check for elements in an Array using isin()
###Code
a = np.array([10,20,30,40,50,60,70])
a
# Check whether number 11 & 20 are present in an array
np.isin(a, [11,20])
#Display the matching numbers
a[np.isin(a,20)]
# Check whether number 33 is present in an array
np.isin(a, 33)
a[np.isin(a, 33)]
b = np.array([10,20,30,40,10,10,70,80,70,90])
b
# Check whether number 10 & 70 are present in an array
np.isin(b, [10,70])
# Display the indices where match occurred
np.where(np.isin(b, [10,70]))
# Display the matching values
b[np.where(np.isin(b, [10,70]))]
# Display the matching values
b[np.isin(b, [10,70])]
###Output
_____no_output_____
###Markdown
Reverse Array
###Code
a4 = np.arange(10,30)
a4
# Reverse the array
a4[::-1]
# Reverse the array
np.flip(a4)
a3 = np.array([[3,2,8,1] , [70,50,10,67] , [45,25,75,15] , [12,9,77,4]])
a3
# Reverse ROW positions
a3[::-1,]
# Reverse COLUMN positions
a3[:,::-1]
# Reverse both ROW & COLUMN positions
a3[::-1,::-1]
###Output
_____no_output_____
###Markdown
Sorting Array
###Code
a = np.array([10,5,2,22,12,92,17,33])
# Sort array in ascending order
np.sort(a)
a3 = np.array([[3,2,8,1] , [70,50,10,67] , [45,25,75,15]])
a3
# Sort along rows
np.sort(a3)
# Sort along rows
np.sort(a3,axis =1)
# Sort along columns
np.sort(a3,axis =0)
# Sort in descending order
b = np.sort(a)
b = b[::-1]
b
# Sort in descending order
c = np.sort(a)
np.flip(c)
# Sort in descending order
a[::-1].sort()
a
###Output
_____no_output_____
###Markdown
"N" Largest & Smallest Numbers in an Array
###Code
p = np.arange(0,50)
p
np.random.shuffle(p)
p
# Return "n" largest numbers in an Array
n = 4
p[np.argsort(p)[-n:]]
# Return "n" largest numbers in an Array
p[np.argpartition(-p,n)[:n]]
# Return "n" smallest numbers in an Array
p[np.argsort(-p)[-n:]]
# Return "n" smallest numbers in an Array
p[np.argpartition(p,n)[:n]]
###Output
_____no_output_____
###Markdown
Repeating Sequences
###Code
a5 = [10,20,30]
a5
# Repeat whole array twice
np.tile(a5, 2)
# Repeat each element in an array thrice
np.repeat(a5, 3)
###Output
_____no_output_____
###Markdown
Compare Arrays
###Code
d1 = np.arange(0,10)
d1
d2 = np.arange(0,10)
d2
d3 = np.arange(10,20)
d3
d4 = d1[::-1]
d4
# Compare arrays using "allclose" function. If this function returns True then Arrays are equal
res1 = np.allclose(d1,d2)
res1
# Compare arrays using "allclose" function. If this function returns False then Arrays are not equal
res2 = np.allclose(d1,d3)
res2
# Compare arrays using "allclose" function.
res3 = np.allclose(d1,d4)
res3
###Output
_____no_output_____
###Markdown
Frequent Values in an Array
###Code
# unique numbers in an array
b = np.array([10,10,10,20,30,20,30,30,20,10,10,30,10])
np.unique(b)
# unique numbers in an array along with the count E.g value 10 occurred maximum times (6 times) in array "b"
val , count = np.unique(b,return_counts=True)
val,count
# 10 is the most frequent value
np.bincount(b).argmax()
###Output
_____no_output_____
###Markdown
Read-Only Array
###Code
d5 = np.arange(10,100,10)
d5
# Make arrays immutable
d5.flags.writeable = False
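# note: the assignments below now raise "ValueError: assignment destination is read-only"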
d5[0] = 99
d5[2] = 11
###Output
_____no_output_____
###Markdown
Load & Save
###Code
# Load data from a text file using loadtext
p4 = np.loadtxt('sample.txt',
dtype = np.integer # Decides the datatype of resulting array
)
p4
# Load data from a text file using genfromtxt
p5 = np.genfromtxt('sample0.txt',dtype='str')
p5
# Accessing specific rows
p5[0]
# Accessing specific columns
p5[:,0]
p6 = np.genfromtxt('sample2.txt',
delimiter=' ',
dtype=None,
names=('Name', 'ID', 'Age')
)
p6
# Skip header using "skiprows" parameter
p6 = np.loadtxt('sample2.txt',
delimiter=' ',
dtype=[('Name', str, 50), ('ID', np.integer), ('Age', np.integer)],
skiprows=1
)
p6
# Return only first & third column using "usecols" parameter
np.loadtxt('sample.txt', delimiter =' ', usecols =(0, 2))
# Return only three rows using "max_rows" parameter
p6 = np.loadtxt('sample2.txt',
delimiter=' ',
dtype=[('Name', str, 50), ('ID', np.integer), ('Age', np.integer)],
skiprows=1,
max_rows = 3
)
p6
# Skip header using "skip_header" parameter
p6 = np.genfromtxt('sample2.txt',
delimiter=' ',
dtype=[('Name', str, 50), ('ID', np.integer), ('Age', np.float)],
names=('Name', 'ID', 'Age'),
skip_header=1
)
p6
p7 = np.arange(10,200,11)
p7
np.savetxt('test3.csv', p7, delimiter=',')
p8 = np.arange(0,121).reshape(11,11)
p8
np.save('test4.npy', p8)
p9 = np.load('test4.npy')
p9
np.save('numpyfile', p8)
p10 = np.load('numpyfile.npy')
p10
p11 = np.arange(0,1000000).reshape(1000,1000)
p11
# Save Numpy array to a compressed file
np.savez_compressed('test6.npz', p11)
# Save Numpy array to a npy file
np.save('test7.npy', p11)
# Compressed file size is much lesser than normal npy file
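# Image is assumed to have been imported earlier: from IPython.display import Image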
Image(filename='load_save.PNG')
###Output
_____no_output_____
###Markdown
Printing Options
###Code
# Display values upto 4 decimal place
np.set_printoptions(precision=4)
a = np.array([12.654398765 , 90.7864098354674])
a
# Display values upto 2 decimal place
np.set_printoptions(precision=2)
a = np.array([12.654398765 , 90.7864098354674])
a
# Array Summarization
np.set_printoptions(threshold=3)
np.arange(200)
# Reset Formatter
np.set_printoptions(precision=8,suppress=False, threshold=1000, formatter=None)
a = np.array([12.654398765 , 90.7864098354674])
a
np.arange(1,1100)
# Display all values
np.set_printoptions(threshold=np.inf)
np.arange(1,1100)
###Output
_____no_output_____
###Markdown
Vector Addition
###Code
v1 = np.array([1,2])
v2 = np.array([3,4])
v3 = v1+v2
v3 = np.add(v1,v2)
print('V3 =' ,v3)
###Output
V3 = [4 6]
###Markdown
Multiplication of vectors
###Code
a1 = [5 , 6 ,8]
a2 = [4, 7 , 9]
print(np.multiply(a1,a2))
###Output
[20 42 72]
###Markdown
Dot Product https://www.youtube.com/watch?v=WNuIhXo39_khttps://www.youtube.com/watch?v=LyGKycYT2v0
###Code
a1 = np.array([1,2,3])
a2 = np.array([4,5,6])
dotp = a1@a2
print(" Dot product - ",dotp)
dotp = np.dot(a1,a2)
print(" Dot product using np.dot",dotp)
dotp = np.inner(a1,a2)
print(" Dot product using np.inner", dotp)
dotp = sum(np.multiply(a1,a2))
print(" Dot product using np.multiply & sum",dotp)
dotp = np.matmul(a1,a2)
print(" Dot product using np.matmul",dotp)
dotp = 0
for i in range(len(a1)):
    dotp = dotp + a1[i]*a2[i]
print(" Dot product using for loop" , dotp)
###Output
Dot product - 32
Dot product using np.dot 32
Dot product using np.inner 32
Dot product using np.multiply & sum 32
Dot product using np.matmul 32
Dot product using for loop 32
###Markdown
Length of Vector
###Code
v3 = np.array([1,2,3,4,5,6])
length = np.sqrt(np.dot(v3,v3))
length
v3 = np.array([1,2,3,4,5,6])
length = np.sqrt(sum(np.multiply(v3,v3)))
length
v3 = np.array([1,2,3,4,5,6])
length = np.sqrt(np.matmul(v3,v3))
length
###Output
_____no_output_____
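###Markdown
A short added sketch: `np.linalg.norm` computes the same Euclidean length directly, without writing out the dot product by hand.
###Code
# Vector length via np.linalg.norm
v3 = np.array([1,2,3,4,5,6])
print(np.linalg.norm(v3))
###Output
_____no_output_____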
###Markdown
Normalized Vector How to normalize a vector : https://www.youtube.com/watch?v=7fn03DIW3Ak
###Code
#First Method
v1 = [2,3]
length_v1 = np.sqrt(np.dot(v1,v1))
norm_v1 = v1/length_v1
length_v1 , norm_v1
#Second Method
v1 = [2,3]
norm_v1 = v1/np.linalg.norm(v1)
norm_v1
###Output
_____no_output_____
###Markdown
Angle between vectors
###Code
#First Method
v1 = np.array([8,4])
v2 = np.array([-4,8])
ang = np.rad2deg(np.arccos( np.dot(v1,v2) / (np.linalg.norm(v1)*np.linalg.norm(v2))))
ang
#Second Method
v1 = np.array([4,3])
v2 = np.array([-3,4])
lengthV1 = np.sqrt(np.dot(v1,v1))
lengthV2 = np.sqrt(np.dot(v2,v2))
ang = np.rad2deg(np.arccos( np.dot(v1,v2) / (lengthV1 * lengthV2)))
print('Angle between Vectors - %s' %ang)
###Output
Angle between Vectors - 90.0
###Markdown
Inner & outer products Inner and Outer Product :https://www.youtube.com/watch?v=FCmH4MqbFGs&t=2shttps://www.youtube.com/watch?v=FCmH4MqbFGs
###Code
v1 = np.array([1,2,3])
v2 = np.array([4,5,6])
np.inner(v1,v2)
print("\n Inner Product ==> \n", np.inner(v1,v2))
print("\n Outer Product ==> \n", np.outer(v1,v2))
###Output
Inner Product ==>
32
Outer Product ==>
[[ 4 5 6]
[ 8 10 12]
[12 15 18]]
###Markdown
Vector Cross Product
###Code
v1 = np.array([1,2,3])
v2 = np.array([4,5,6])
print("\nVector Cross Product ==> \n", np.cross(v1,v2))
###Output
Vector Cross Product ==>
[-3 6 -3]
###Markdown
Matrix Creation
###Code
# Create a 4x4 matrix
A = np.array([[1,2,3,4] , [5,6,7,8] , [10 , 11 , 12 ,13] , [14,15,16,17]])
A
# Datatype of Matrix
A.dtype
B = np.array([[1.5,2.07,3,4] , [5,6,7,8] , [10 , 11 , 12 ,13] , [14,15,16,17]])
B
# Datatype of Matrix
B.dtype
# Shape of Matrix
A.shape
# Generate a 4x4 zero matrix
np.zeros((4,4))
#Shape of Matrix
z1 = np.zeros((4,4))
z1.shape
# Generate a 5x5 matrix filled with ones
np.ones((5,5))
# Return 10x10 matrix of random integer numbers between 0 to 500
np.random.randint(0,500, (10,10))
arr2
arr2.reshape(5,4)
mat1 = np.random.randint(0,1000,100).reshape(10,10)
mat1
mat1[0,0]
mat1[mat1 > 500]
# Identity Matrix : https://en.wikipedia.org/wiki/Identity_matrix
I = np.eye(9)
I
# Diagonal Matrix : https://en.wikipedia.org/wiki/Diagonal_matrix
D = np.diag([1,2,3,4,5,6,7,8])
D
# Triangular Matrices (lower & upper triangular matrix) : https://en.wikipedia.org/wiki/Triangular_matrix
M = np.random.randn(5,5)
U = np.triu(M)
L = np.tril(M)
print("Original matrix - \n" , M)
print("\n")
print("Lower triangular matrix - \n" , L)
print("\n")
print("Upper triangular matrix - \n" , U)
# Generate a 5X5 matrix with a given fill value of 8
np.full((5,5) , 8)
# Generate 5X5 matrix of Random float numbers between 10 to 20
np.random.uniform(10,20, size=(5,5))
A
# Collapse Matrix into one dimension array
A.flatten()
# Collapse Matrix into one dimension array
A.ravel()
###Output
_____no_output_____
###Markdown
Reading elements of a Matrix
###Code
A
# Fetch first row of matrix
A[0,]
# Fetch first column of matrix
A[:,0]
# Fetch first element of the matrix
A[0,0]
A[1:3 , 1:3]
###Output
_____no_output_____
###Markdown
Reverse Rows / Columns of a Matrix
###Code
arr = np.arange(16).reshape(4,4)
arr
# Reverse rows
arr[::-1]
#Reverse Columns
arr[:, ::-1]
###Output
_____no_output_____
###Markdown
SWAP Rows & Columns
###Code
m1 = np.arange(0,16).reshape(4,4)
m1
# SWAP rows 0 & 1
m1[[0,1]] = m1[[1,0]]
m1
# SWAP rows 2 & 3
m1[[3,2]] = m1[[2,3]]
m1
m2 = np.arange(0,36).reshape(6,6)
m2
# Swap columns 0 & 1
m2[:,[0, 1]] = m2[:,[1, 0]]
m2
# Swap columns 2 & 3
m2[:,[2, 3]] = m2[:,[3, 2]]
m2
###Output
_____no_output_____
###Markdown
Concatenate Matrices Matrix Concatenation : https://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html
###Code
A = np.array([[1,2] , [3,4] ,[5,6]])
B = np.array([[1,1] , [1,1]])
C = np.concatenate((A,B))
C
###Output
_____no_output_____
###Markdown
Matrix Addition Matrix Addition : https://www.youtube.com/watch?v=ZCmVpGv6_1g
###Code
#********************************************************#
M = np.array([[1,2,3],[4,-3,6],[7,8,0]])
N = np.array([[1,1,1],[2,2,2],[3,3,3]])
print("\n First Matrix (M) ==> \n", M)
print("\n Second Matrix (N) ==> \n", N)
C = M+N
print("\n Matrix Addition (M+N) ==> \n", C)
# OR
C = np.add(M,N,dtype = np.float64)
print("\n Matrix Addition using np.add ==> \n", C)
#********************************************************#
###Output
First Matrix (M) ==>
[[ 1 2 3]
[ 4 -3 6]
[ 7 8 0]]
Second Matrix (N) ==>
[[1 1 1]
[2 2 2]
[3 3 3]]
Matrix Addition (M+N) ==>
[[ 2 3 4]
[ 6 -1 8]
[10 11 3]]
Matrix Addition using np.add ==>
[[ 2. 3. 4.]
[ 6. -1. 8.]
[10. 11. 3.]]
###Markdown
Matrix subtraction Matrix subtraction : https://www.youtube.com/watch?v=7jb_AO_hRc8&list=PLmdFyQYShrjcoVkhCCIwxNj9N4rW1-T5I&index=8
###Code
#********************************************************#
M = np.array([[1,2,3],[4,-3,6],[7,8,0]])
N = np.array([[1,1,1],[2,2,2],[3,3,3]])
print("\n First Matrix (M) ==> \n", M)
print("\n Second Matrix (N) ==> \n", N)
C = M-N
print("\n Matrix Subtraction (M-N) ==> \n", C)
# OR
C = np.subtract(M,N,dtype = np.float64)
print("\n Matrix Subtraction using np.subtract ==> \n", C)
#********************************************************#
###Output
First Matrix (M) ==>
[[ 1 2 3]
[ 4 -3 6]
[ 7 8 0]]
Second Matrix (N) ==>
[[1 1 1]
[2 2 2]
[3 3 3]]
Matrix Subtraction (M-N) ==>
[[ 0 1 2]
[ 2 -5 4]
[ 4 5 -3]]
Matrix Subtraction using np.subtract ==>
[[ 0. 1. 2.]
[ 2. -5. 4.]
[ 4. 5. -3.]]
###Markdown
Matrices Scalar Multiplication Matrices Scalar Multiplication : https://www.youtube.com/watch?v=4lHyTQH1iS8&list=PLmdFyQYShrjcoVkhCCIwxNj9N4rW1-T5I&index=9
###Code
M = np.array([[1,2,3],[4,-3,6],[7,8,0]])
C = 10
print("\n Matrix (M) ==> \n", M)
print("\nMatrices Scalar Multiplication ==> \n", C*M)
# OR
print("\nMatrices Scalar Multiplication ==> \n", np.multiply(C,M))
###Output
Matrix (M) ==>
[[ 1 2 3]
[ 4 -3 6]
[ 7 8 0]]
Matrices Scalar Multiplication ==>
[[ 10 20 30]
[ 40 -30 60]
[ 70 80 0]]
Matrices Scalar Multiplication ==>
[[ 10 20 30]
[ 40 -30 60]
[ 70 80 0]]
###Markdown
Transpose of a matrix Transpose of a matrix : https://www.youtube.com/watch?v=g_Rz94DXvNo&list=PLmdFyQYShrjcoVkhCCIwxNj9N4rW1-T5I&index=13
###Code
M = np.array([[1,2,3],[4,-3,6],[7,8,0]])
print("\n Matrix (M) ==> \n", M)
print("\nTranspose of M ==> \n", np.transpose(M))
# OR
print("\nTranspose of M ==> \n", M.T)
###Output
Matrix (M) ==>
[[ 1 2 3]
[ 4 -3 6]
[ 7 8 0]]
Transpose of M ==>
[[ 1 4 7]
[ 2 -3 8]
[ 3 6 0]]
Transpose of M ==>
[[ 1 4 7]
[ 2 -3 8]
[ 3 6 0]]
###Markdown
Determinant of a matrix Determinant of a matrix :https://www.youtube.com/watch?v=21LWuY8i6Hw&t=88s https://www.youtube.com/watch?v=Ip3X9LOh2dk&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab&index=6
###Code
M = np.array([[1,2,3],[4,-3,6],[7,8,0]])
print("\n Matrix (M) ==> \n", M)
print("\nDeterminant of M ==> ", np.linalg.det(M))
###Output
Matrix (M) ==>
[[ 1 2 3]
[ 4 -3 6]
[ 7 8 0]]
Determinant of M ==> 195.0
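###Markdown
A small added sketch relating the determinant to invertibility: a square matrix is invertible exactly when its determinant is non-zero, so a singular example returns (numerically) zero.
###Code
# Singular matrix: the second row is 2x the first, so the determinant is 0
S = np.array([[1, 2], [2, 4]])
print(np.linalg.det(S))
###Output
_____no_output_____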
###Markdown
Rank of a matrix
###Code
M = np.array([[1,2,3],[4,-3,6],[7,8,0]])
print("\n Matrix (M) ==> \n", M)
print("\nRank of M ==> ", np.linalg.matrix_rank(M))
###Output
Matrix (M) ==>
[[ 1 2 3]
[ 4 -3 6]
[ 7 8 0]]
Rank of M ==> 3
###Markdown
Trace of matrix
###Code
M = np.array([[1,2,3],[4,-3,6],[7,8,0]])
print("\n Matrix (M) ==> \n", M)
print("\nTrace of M ==> ", np.trace(M))
###Output
Matrix (M) ==>
[[ 1 2 3]
[ 4 -3 6]
[ 7 8 0]]
Trace of M ==> -2
###Markdown
Inverse of matrix A Inverse of matrix : https://www.youtube.com/watch?v=pKZyszzmyeQ
###Code
M = np.array([[1,2,3],[4,-3,6],[7,8,0]])
print("\n Matrix (M) ==> \n", M)
print("\nInverse of M ==> \n", np.linalg.inv(M))
###Output
Matrix (M) ==>
[[ 1 2 3]
[ 4 -3 6]
[ 7 8 0]]
Inverse of M ==>
[[-0.24615385 0.12307692 0.10769231]
[ 0.21538462 -0.10769231 0.03076923]
[ 0.27179487 0.03076923 -0.05641026]]
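###Markdown
As a quick sketch, multiplying M by its inverse should give (approximately) the identity matrix, up to floating point rounding:
###Code
M = np.array([[1,2,3],[4,-3,6],[7,8,0]])
# M @ inv(M) should be (numerically) the 3x3 identity
print(np.round(M @ np.linalg.inv(M), 10))
###Output
_____no_output_____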
###Markdown
Matrix Multiplication (pointwise multiplication)
###Code
M = np.array([[1,2,3],[4,-3,6],[7,8,0]])
N = np.array([[1,1,1],[2,2,2],[3,3,3]])
print("\n First Matrix (M) ==> \n", M)
print("\n Second Matrix (N) ==> \n", N)
print("\n Point-Wise Multiplication of M & N ==> \n", M*N)
# OR
print("\n Point-Wise Multiplication of M & N ==> \n", np.multiply(M,N))
###Output
First Matrix (M) ==>
[[ 1 2 3]
[ 4 -3 6]
[ 7 8 0]]
Second Matrix (N) ==>
[[1 1 1]
[2 2 2]
[3 3 3]]
Point-Wise Multiplication of M & N ==>
[[ 1 2 3]
[ 8 -6 12]
[21 24 0]]
Point-Wise Multiplication of M & N ==>
[[ 1 2 3]
[ 8 -6 12]
[21 24 0]]
###Markdown
Matrix dot product Matrix Multiplication :https://www.youtube.com/watch?v=vzt9c7iWPxs&t=207shttps://www.youtube.com/watch?v=XkY2DOUCWMU&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab&index=4
###Code
M = np.array([[1,2,3],[4,-3,6],[7,8,0]])
N = np.array([[1,1,1],[2,2,2],[3,3,3]])
print("\n First Matrix (M) ==> \n", M)
print("\n Second Matrix (N) ==> \n", N)
print("\n Matrix Dot Product ==> \n", M@N)
# OR
print("\n Matrix Dot Product using np.matmul ==> \n", np.matmul(M,N))
# OR
print("\n Matrix Dot Product using np.dot ==> \n", np.dot(M,N))
###Output
First Matrix (M) ==>
[[ 1 2 3]
[ 4 -3 6]
[ 7 8 0]]
Second Matrix (N) ==>
[[1 1 1]
[2 2 2]
[3 3 3]]
Matrix Dot Product ==>
[[14 14 14]
[16 16 16]
[23 23 23]]
Matrix Dot Product using np.matmul ==>
[[14 14 14]
[16 16 16]
[23 23 23]]
Matrix Dot Product using np.dot ==>
[[14 14 14]
[16 16 16]
[23 23 23]]
###Markdown
Matrix Division
###Code
M = np.array([[1,2,3],[4,-3,6],[7,8,0]])
N = np.array([[1,1,1],[2,2,2],[3,3,3]])
print("\n First Matrix (M) ==> \n", M)
print("\n Second Matrix (N) ==> \n", N)
print("\n Matrix Division (M/N) ==> \n", M/N)
# OR
print("\n Matrix Division (M/N) ==> \n", np.divide(M,N))
###Output
First Matrix (M) ==>
[[ 1 2 3]
[ 4 -3 6]
[ 7 8 0]]
Second Matrix (N) ==>
[[1 1 1]
[2 2 2]
[3 3 3]]
Matrix Division (M/N) ==>
[[ 1. 2. 3. ]
[ 2. -1.5 3. ]
[ 2.33333333 2.66666667 0. ]]
Matrix Division (M/N) ==>
[[ 1. 2. 3. ]
[ 2. -1.5 3. ]
[ 2.33333333 2.66666667 0. ]]
###Markdown
Sum of all elements in a matrix
###Code
N = np.array([[1,1,1],[2,2,2],[3,3,3]])
print("\n Matrix (N) ==> \n", N)
print ("Sum of all elements in a Matrix ==>")
print (np.sum(N))
###Output
Matrix (N) ==>
[[1 1 1]
[2 2 2]
[3 3 3]]
Sum of all elements in a Matrix ==>
18
###Markdown
Column-Wise Addition
###Code
N = np.array([[1,1,1],[2,2,2],[3,3,3]])
print("\n Matrix (N) ==> \n", N)
print ("Column-Wise summation ==> ")
print (np.sum(N,axis=0))
###Output
Matrix (N) ==>
[[1 1 1]
[2 2 2]
[3 3 3]]
Column-Wise summation ==>
[6 6 6]
###Markdown
Row-Wise Addition
###Code
N = np.array([[1,1,1],[2,2,2],[3,3,3]])
print("\n Matrix (N) ==> \n", N)
print ("Row-Wise summation ==>")
print (np.sum(N,axis=1))
###Output
Matrix (N) ==>
[[1 1 1]
[2 2 2]
[3 3 3]]
Row-Wise summation ==>
[3 6 9]
###Markdown
Kronecker Product of matrices Kronecker Product of matrices : https://www.youtube.com/watch?v=e1UJXvu8VZk
###Code
M1 = np.array([[1,2,3] , [4,5,6]])
M1
M2 = np.array([[10,10,10],[10,10,10]])
M2
np.kron(M1,M2)
###Output
_____no_output_____
###Markdown
Matrix Powers
###Code
M1 = np.array([[1,2],[4,5]])
M1
#Matrix to the power 3
M1@M1@M1
#Matrix to the power 3
np.linalg.matrix_power(M1,3)
###Output
_____no_output_____
###Markdown
Tensor What is Tensor : - https://www.youtube.com/watch?v=f5liqUk0ZTw - https://www.youtube.com/watch?v=bpG3gqDM80w&t=634s - https://www.youtube.com/watch?v=uaQeXi4E7gA
###Code
# Create Tensor
T1 = np.array([
[[1,2,3], [4,5,6], [7,8,9]],
[[10,20,30], [40,50,60], [70,80,90]],
[[100,200,300], [400,500,600], [700,800,900]],
])
T1
T2 = np.array([
[[0,0,0] , [0,0,0] , [0,0,0]],
[[1,1,1] , [1,1,1] , [1,1,1]],
[[2,2,2] , [2,2,2] , [2,2,2]]
])
T2
###Output
_____no_output_____
###Markdown
Tensor Addition
###Code
A = T1+T2
A
np.add(T1,T2)
###Output
_____no_output_____
###Markdown
Tensor Subtraction
###Code
S = T1-T2
S
np.subtract(T1,T2)
###Output
_____no_output_____
###Markdown
Tensor Element-Wise Product
###Code
P = T1*T2
P
np.multiply(T1,T2)
###Output
_____no_output_____
###Markdown
Tensor Element-Wise Division
###Code
D = T1/T2
D
np.divide(T1,T2)
###Output
C:\Anaconda\lib\site-packages\ipykernel_launcher.py:1: RuntimeWarning: divide by zero encountered in true_divide
"""Entry point for launching an IPython kernel.
###Markdown
Tensor Dot Product
###Code
T1
T2
np.tensordot(T1,T2)
###Output
_____no_output_____
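###Markdown
A short added note on `np.tensordot`: with the default `axes=2` it sums over the last two axes of the first tensor and the first two axes of the second, giving a 3x3 result here; `axes=0` would instead give an outer product.
###Code
# Explicit axes argument (same as the default call above), and the outer-product variant
print(np.tensordot(T1, T2, axes=2))
print(np.tensordot(T1, T2, axes=0).shape)   # (3, 3, 3, 3, 3, 3)
###Output
_____no_output_____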
###Markdown
Solving Equations $$AX = B$$ Solving Equations : - https://www.youtube.com/watch?v=NNmiOoWt86M - https://www.youtube.com/watch?v=a2z7sZ4MSqo
###Code
A = np.array([[1,2,3] , [4,5,6] , [7,8,10]])   # A must be invertible for these methods; [[1,2,3],[4,5,6],[7,8,9]] would be singular
A
B = np.random.random((3,1))
B
# Ist Method
X = np.dot(np.linalg.inv(A) , B)
X
# 2nd Method
X = np.matmul(np.linalg.inv(A) , B)
X
# 3rd Method
X = np.linalg.inv(A)@B
X
# 4th Method
X = np.linalg.solve(A,B)
X
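# sanity check (sketch): the solution should satisfy A @ X = B
print(np.allclose(A @ X, B))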
###Output
_____no_output_____ |
Taller_semana_3.ipynb | ###Markdown
Workshop week 3 Exercise 1: Load the data from the ```pokemon_data.csv``` file into a dataframe, create a column called interes assigning it the value False, then change the value of this column only for the pokemon that are legendary (Legendary = True) and whose Type 1 is fire.
###Code
import pandas as pd
# Escriba su código aquí
url = 'https://raw.githubusercontent.com/AnaVargasJ/pandas/main/pokemon_data%20(1).csv'
pk = pd.read_csv(url)
pk[ 'interes'] = False
pk.loc[pk["Type 1"] == "Fire", "interes"] == True
pk.loc[pk["Legendary"] == "True", "interes"] == True
pk
###Output
_____no_output_____
###Markdown
Exercise 2 With the pokemon data above, write code to compute the average of the Attack column grouped by Type 1.
###Code
# Escriba su código aquí
import pandas as pd
url = 'https://raw.githubusercontent.com/AnaVargasJ/pandas/main/pokemon_data%20(1).csv'
pk = pd.read_csv(url)
print(pk['Type 1'])
print("el promedio de Attack es: ", pk['Attack'].mean())
###Output
0 Grass
1 Grass
2 Grass
3 Grass
4 Fire
...
795 Rock
796 Rock
797 Psychic
798 Psychic
799 Fire
Name: Type 1, Length: 800, dtype: object
el promedio de Attack es: 79.00125
###Markdown
Exercise 3 You are given stock movement data in the ```DUK.csv``` file, and you are asked to report the average of Open for 2015, that is, between the dates 2015-01-01 and 2015-12-31
###Code
import pandas as pd
url = 'https://raw.githubusercontent.com/AnaVargasJ/pandas/main/DUK.csv'
df = pd.read_csv(url)
dk = df  # use the dataframe loaded above
dk.head()
# filter rows within the 2015 date range
mask = (dk['Date'] >= '2015-01-01') & (dk['Date'] <= '2015-12-31')
# mean of Open during 2015
dk.loc[mask, 'Open'].mean()
###Output
_____no_output_____
###Markdown
Workshop week 3 Exercise 1: Load the data from the ```pokemon_data.csv``` file into a dataframe, create a column called interes assigning it the value False, then change the value of this column only for the pokemon that are legendary (Legendary = True) and whose Type 1 is fire.
###Code
import pandas as pd
# Escriba su código aquí
from google.colab import files
uploaded = files.upload()
import io
df =pd.read_csv(io.BytesIO(uploaded['pokemon_data.csv']), sep=",")
df.head(800)
pk_type=df['Type 1'].unique
df['Interes']=False
df.head(800)
df = df[(df['Type 1']=='Fire')& (df['Legendary']==True)]
df.head(800)
df= df[(df['Type 1']=='Fire')& (df['Legendary']==True)]
mask= df['Type 1']== 'Fire'
df.loc[mask,'Interes']=True
df.head(800)
###Output
_____no_output_____
###Markdown
Exercise 2 With the pokemon data above, write code to compute the average of the Attack column grouped by Type 1.
###Code
# Escriba su código aquí
import io
df =pd.read_csv(io.BytesIO(uploaded['pokemon_data.csv']), sep=",")
df.head(800)
print(df['Type 1'])
df.groupby(['Type 1'])
print("el promedio de Attack es: ", df['Attack'].mean())
df
###Output
0 Grass
1 Grass
2 Grass
3 Grass
4 Fire
...
795 Rock
796 Rock
797 Psychic
798 Psychic
799 Fire
Name: Type 1, Length: 800, dtype: object
el promedio de Attack es: 79.00125
###Markdown
Exercise 3 You are given stock movement data in the ```DUK.csv``` file, and you are asked to report the average of Open for 2015, that is, between the dates 2015-01-01 and 2015-12-31
###Code
# Escriba su código aquí
import pandas as pd
from google.colab import files
uploaded = files.upload()
import io
df =pd.read_csv(io.BytesIO(uploaded['DUK.csv']), sep=",")
df.head(1259)
df = df[(df['Date']>='2015-01-01')& (df['Date']<='2015-12-31')]
df.iloc[0:1259]
df.iloc[:, 0:2]
p=df.Open.mean()
print('el promedio de open entre las fechas 2015-01-01 y 2015-12-31 es: ',round(p,5))
###Output
el promedio de open entre las fechas 2015-01-01 y 2015-12-31 es: 74.81377
###Markdown
Workshop week 3 Exercise 1: Load the data from the ```pokemon_data.csv``` file into a dataframe, create a column called interes assigning it the value False, then change the value of this column only for the pokemon that are legendary (Legendary = True) and whose Type 1 is fire.
###Code
import pandas as pd
import pandas as np
from google.colab import files
uploaded = files.upload()
pk = pd.read_csv('pokemon_data (1).csv',sep=',')
pk['Interes'] = False
pk.head(10)
fire = pk.loc[(pk["Type 1"] == "Fire") & (pk["Legendary"]== True)]
fire
fire1 = fire.assign(Interes= True)
fire1
###Output
_____no_output_____
###Markdown
Exercise 2 With the pokemon data above, write code to compute the average of the Attack column grouped by Type 1.
###Code
pk_Type_Attack = pk[["Type 1","Attack"]]
pk_promedio = pk_Type_Attack.groupby("Type 1",as_index=False).mean()
pk_promedio
###Output
_____no_output_____
###Markdown
Exercise 3 You are given stock movement data in the ```DUK.csv``` file, and you are asked to report the average of Open for 2015, that is, between the dates 2015-01-01 and 2015-12-31
###Code
import datetime
from google.colab import files
uploaded = files.upload()
duk = pd.read_csv("DUK.csv",sep=",")
duk.head()
dates = duk.loc[(duk["Date"].between ("2015-01-01" , "2015-12-31"))]
dates.head(10)
dates["Open"].mean()
###Output
_____no_output_____
###Markdown
Workshop week 3 Exercise 1: Load the data from the ```pokemon_data.csv``` file into a dataframe, create a column called interes assigning it the value False, then change the value of this column only for the pokemon that are legendary (Legendary = True) and whose Type 1 is fire.
###Code
import pandas as pd
# Escriba su código
from google.colab import files
uploaded = files.upload()
import os
print(os.getcwd())
pk=pd.read_csv('pokemon_data.csv',sep=",")
pk_type=pk['Type 1'].unique()
pk['interes']=False
pk.head(800)
pk.loc[pk['Type 1'] == "Fire",:]
pk['Type 1'].unique ()
pk['Legendary'].unique()
pk.head(20)
pk.loc[(pk['Type 1'] == 'Fire') & (pk['Legendary'] == True),'interes']=True
pk.head()
###Output
_____no_output_____
###Markdown
Exercise 2 With the pokemon data above, write code to compute the average of the Attack column grouped by Type 1.
###Code
pk3=pk.loc[pk['Type 1'] == "Fire",:]
pk3
import pandas as pd
promedio=pk3
print(promedio['Attack'].mean())
pk.groupby(['Type 1']).mean()
###Output
_____no_output_____
###Markdown
Exercise 3 You are given stock movement data in the ```DUK.csv``` file, and you are asked to report the average of Open for 2015, that is, between the dates 2015-01-01 and 2015-12-31
###Code
# Escriba su código aquí
import pandas as pd
pk4=pd.read_csv('DUK.csv')
pk4.head()
date=pk4[(pk4['Date'] > '2014-12-31') & (pk4['Date'] < '2016-01-01')]
date
import pandas as pd
print(date['Open'].mean())
###Output
74.8137697698413
###Markdown
Workshop week 3 Exercise 1: Load the data from the ```pokemon_data.csv``` file into a dataframe, create a column called interes assigning it the value False, then change the value of this column only for the pokemon that are legendary (Legendary = True) and whose Type 1 is fire. New section
###Code
import pandas as pd
# Escriba su código aquí
url = 'https://raw.githubusercontent.com/AngieCat26/MujeresDigitales/main/pokemon_data.csv'
pk = pd.read_csv(url)
pk[ 'interes'] = False
pk.loc[pk["Legendary"] == "True", "interes"] == True
pk.loc[pk["Type 1"] == "Fire", "interes"] == True
pk
###Output
_____no_output_____
###Markdown
New section Exercise 2 With the pokemon data above, write code to compute the average of the Attack column grouped by Type 1.
###Code
# Escriba su código aquí
import pandas as pd
url = 'https://raw.githubusercontent.com/AngieCat26/MujeresDigitales/main/pokemon_data.csv'
pk = pd.read_csv(url)
print(pk['Type 1'])
pk.groupby(['Type 1'])
print("el promedio de Attack es: ", pk['Attack'].mean())
pk
###Output
0 Grass
1 Grass
2 Grass
3 Grass
4 Fire
...
795 Rock
796 Rock
797 Psychic
798 Psychic
799 Fire
Name: Type 1, Length: 800, dtype: object
el promedio de Attack es: 79.00125
###Markdown
Exercise 3 You are given stock movement data in the ```DUK.csv``` file, and you are asked to report the average of Open for 2015, that is, between the dates 2015-01-01 and 2015-12-31
###Code
# Escriba su código aquí
import pandas as pd
url = 'https://raw.githubusercontent.com/AngieCat26/MujeresDigitales/main/DUK%20(1).csv'
duk = pd.read_csv(url)
duk.loc[(duk['Date'] >= '2015-01-01') & (duk['Date'] <= '2015-12-31'), 'Open'].mean()
###Output
_____no_output_____
###Markdown
Exercise 3 You are given stock movement data in the ```DUK.csv``` file, and you are asked to report the average of Open for 2015, that is, between the dates 2015-01-01 and 2015-12-31
###Code
dk = pd.read_csv('DUK.csv')
#dk.head(15)
# 'Date' is stored as a string, so filter the 2015 range lexicographically
open2015 = dk[(dk.Date >= '2015-01-01') & (dk.Date <= '2015-12-31')]
print(open2015['Open'].mean())
###Output
74.8137697698413
###Markdown
Workshop week 3 Exercise 1: Load the data from the ```pokemon_data.csv``` file into a dataframe, create a column called interes assigning it the value False, then change the value of this column only for the pokemon that are legendary (Legendary = True) and whose Type 1 is fire.
###Code
import pandas as pd
pk = pd.read_csv('pokemon_data.csv')
pk['interes'] = False
mask1 = pk['Type 1'] == 'Fire'
pk.loc[mask1 & (pk['Legendary'] == True), 'interes'] = True
pk.loc[pk['interes'] == True]
###Output
_____no_output_____
###Markdown
Exercise 2 With the pokemon data above, write code to compute the average of the Attack column grouped by Type 1.
###Code
# Se agrupan los tipo 1 = 'fire' y la columna attack
T1 = pk.groupby(mask1)['Attack']
#Se saca la media (que es igual al promedio) de la variable
#El promedio que necesitamos es el de True, debido que es el que cumple la condición de mask1
T1.mean()
###Output
_____no_output_____
###Markdown
Workshop week 3 Exercise 1: Load the data from the ```pokemon_data.csv``` file into a dataframe, create a column called interes assigning it the value False, then change the value of this column only for the pokemon that are legendary (Legendary = True) and whose Type 1 is fire.
###Code
import pandas as pd
#CARGAR ARCHIVO
pokemon = pd.read_csv('pokemon_data.csv',sep=",")
pokemon
#CREAR COLUMNA--->interes Y LE ASIGNA EL VALOR DE False
pokemon['interes']='False'
pokemon
#CONSULTAR VALORES A POKEMONES legendary=True Y Type1=Fire QUE TIENEN EN interes=False
a=pokemon.loc[(pokemon['Legendary']== True) & (pokemon['Type 1'] =='Fire')]
a
#CAMBIAR VALOR A POKEMONES legendary=True Y Type1=Fire
pokemon.loc[((pokemon['Legendary']== True) & (pokemon['Type 1'] =='Fire')), ['interes'] ]= True
pokemon
#CONSULTAR LOS DATOS DE interes =True
pokemon.loc[(pokemon['Legendary']== True) & (pokemon['Type 1'] =='Fire')]
###Output
_____no_output_____
###Markdown
Exercise 2 With the pokemon data above, write code to compute the average of the Attack column grouped by Type 1.
###Code
#AGRUPACIÓN DE LOS DATOS POR Type 1
a=pokemon.groupby(['Type 1'])
#PROMEDIO DE LOS VALORES DE LA COLUMNA Attack SEGUN LA CLASIFICACION DE type 1
b=a['Attack'].mean()
print("El promedio de los datos Attack agrupado por Type 1 es: ","\n",b)
###Output
El promedio de los datos Attack agrupado por Type 1 es:
Type 1
Bug 70.971014
Dark 88.387097
Dragon 112.125000
Electric 69.090909
Fairy 61.529412
Fighting 96.777778
Fire 84.769231
Flying 78.750000
Ghost 73.781250
Grass 73.214286
Ground 95.750000
Ice 72.750000
Normal 73.469388
Poison 74.678571
Psychic 71.456140
Rock 92.863636
Steel 92.703704
Water 74.151786
Name: Attack, dtype: float64
###Markdown
Exercise 3 You are given stock movement data in the ```DUK.csv``` file, and you are asked to report the average of Open for 2015, that is, between the dates 2015-01-01 and 2015-12-31
###Code
#CARGAR ARCHIVO
duk = pd.read_csv('DUK.csv',sep=",")
duk
#CONVERTIR LOS DATOS DE DATE DE STRING A FECHA
duk['Date'] = pd.to_datetime(duk['Date'])
#print(open.dtypes)
#AGRUPA LOS DATOS DEL AÑO 2015
anual=duk[ (duk['Date'] >= '2015-01-01') & (duk['Date'] <= '2015-12-31')]
#print(anual)
#PROMEDIO DE LA COLUMNA OPEN PARA EL AÑO 2015
promedio=anual['Open'].mean()
print("El promedio de OPEN 2015 es: ",promedio)
###Output
El promedio de OPEN 2015 es: 74.8137697698413
###Markdown
Workshop week 3 Gisela criollo suarez Exercise 1: Load the data from the ```pokemon_data.csv``` file into a dataframe, create a column called interes assigning it the value False, then change the value of this column only for the pokemon that are legendary (Legendary = True) and whose Type 1 is fire.
###Code
import pandas as pd
# Write your code here
import os
print(os.getcwd())
# read in the file we want to work with
pk = pd.read_csv('pokemon_data.csv')
# create the new column
df = pd.DataFrame (pk)
df['Interes']=False
print(df)
# select the legendary Fire-type pokemon
df.loc[(df['Type 1'] == 'Fire') & (df['Legendary']==True)]
# here we can see that Interes is still False
# change the Interes value for the legendary Fire-type pokemon
df.loc[(df['Type 1'] == 'Fire') & (df['Legendary']==True), 'Interes'] =True
# select the pokemon again; this time Interes shows True
df.loc[(df['Type 1'] == 'Fire') & (df['Legendary']==True)]
df
###Output
_____no_output_____
###Markdown
Exercise 2 Using the pokemon data from above, write code to compute the average of the Attack column grouped by Type 1.
###Code
# Write your code here
df_type= pk['Type 1'].unique()
df_type
# average Attack for Type 1 == 'Grass'
prom=df.loc[df['Type 1'] == 'Grass','Attack'].mean()
prom
# average Attack for Type 1 == 'Fire'
prom=df.loc[df['Type 1'] == 'Fire','Attack'].mean()
prom
# average Attack for Type 1 == 'Water'
prom=df.loc[df['Type 1'] == 'Water','Attack'].mean()
prom
# average Attack for Type 1 == 'Bug'
prom=df.loc[df['Type 1'] == 'Bug','Attack'].mean()
prom
# average Attack for Type 1 == 'Normal'
prom=df.loc[df['Type 1'] == 'Normal','Attack'].mean()
prom
# average Attack for Type 1 == 'Poison'
prom=df.loc[df['Type 1'] == 'Poison','Attack'].mean()
prom
# average Attack for Type 1 == 'Electric'
prom=df.loc[df['Type 1'] == 'Electric','Attack'].mean()
prom
# average Attack for Type 1 == 'Ground'
prom=df.loc[df['Type 1'] == 'Ground','Attack'].mean()
prom
# average Attack for Type 1 == 'Fairy'
prom=df.loc[df['Type 1'] == 'Fairy','Attack'].mean()
prom
# average Attack for Type 1 == 'Fighting'
prom=df.loc[df['Type 1'] == 'Fighting','Attack'].mean()
prom
# average Attack for Type 1 == 'Psychic'
prom=df.loc[df['Type 1'] == 'Psychic','Attack'].mean()
prom
# average Attack for Type 1 == 'Rock'
prom=df.loc[df['Type 1'] == 'Rock','Attack'].mean()
prom
# average Attack for Type 1 == 'Ghost'
prom=df.loc[df['Type 1'] == 'Ghost','Attack'].mean()
prom
# average Attack for Type 1 == 'Ice'
prom=df.loc[df['Type 1'] == 'Ice','Attack'].mean()
prom
# average Attack for Type 1 == 'Dragon'
prom=df.loc[df['Type 1'] == 'Dragon','Attack'].mean()
prom
# average Attack for Type 1 == 'Dark'
prom=df.loc[df['Type 1'] == 'Dark','Attack'].mean()
prom
# average Attack for Type 1 == 'Steel'
prom=df.loc[df['Type 1'] == 'Steel','Attack'].mean()
prom
# average Attack for Type 1 == 'Flying'
prom=df.loc[df['Type 1'] == 'Flying','Attack'].mean()
prom
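# the same per-type averages can be computed in a single step (extra sketch, same df as above):
df.groupby('Type 1')['Attack'].mean()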
###Output
_____no_output_____
###Markdown
Exercise 3 You are given the stock price data in the ```DUK.csv``` file, and you are asked to report the average of Open for 2015, that is, between the dates 2015-01-01 and 2015-12-31.
###Code
# Write your code here
# read in the file we want to work with
duk = pd.read_csv('DUK.csv')
duk
# filter by date
duk.loc[(duk['Date']>= '2015-01-01') &(duk['Date'] <= '2015-12-31')]
# to get the average of Open between the requested dates, select the column and append .mean()
duk.loc[(duk['Date']>= '2015-01-01') &(duk['Date'] <= '2015-12-31'), 'Open'].mean()
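# note (added): 'Date' is still a string here, so the comparison above works only because
# ISO-formatted dates (YYYY-MM-DD) sort lexicographically in the same order as the dates themselves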
###Output
_____no_output_____ |
courses/dl2/wgan.ipynb | ###Markdown
WGAN
###Code
from fastai.conv_learner import *
from fastai.dataset import *
import gzip
torch.cuda.set_device(3)
###Output
_____no_output_____
###Markdown
Download the LSUN scene classification dataset bedroom category, unzip it, and convert it to jpg files (the scripts folder is here in the `dl2` folder):```curl 'http://lsun.cs.princeton.edu/htbin/download.cgi?tag=latest&category=bedroom&set=train' -o bedroom.zipunzip bedroom.zippip install lmdbpython lsun-data.py {PATH}/bedroom_train_lmdb --out_dir {PATH}/bedroom```This isn't tested on Windows - if it doesn't work, you could use a Linux box to convert the files, then copy them over. Alternatively, you can download [this 20% sample](https://www.kaggle.com/jhoward/lsun_bedroom) from Kaggle datasets.
###Code
PATH = Path('data/lsun/')
IMG_PATH = PATH/'bedroom'
CSV_PATH = PATH/'files.csv'
TMP_PATH = PATH/'tmp'
TMP_PATH.mkdir(exist_ok=True)
files = PATH.glob('bedroom/**/*.jpg')
with CSV_PATH.open('w') as fo:
for f in files: fo.write(f'{f.relative_to(IMG_PATH)},0\n')
# Optional - sampling a subset of files
CSV_PATH = PATH/'files_sample.csv'
files = PATH.glob('bedroom/**/*.jpg')
with CSV_PATH.open('w') as fo:
for f in files:
if random.random()<0.1: fo.write(f'{f.relative_to(IMG_PATH)},0\n')
class ConvBlock(nn.Module):
def __init__(self, ni, no, ks, stride, bn=True, pad=None):
super().__init__()
if pad is None: pad = ks//2//stride
self.conv = nn.Conv2d(ni, no, ks, stride, padding=pad, bias=False)
self.bn = nn.BatchNorm2d(no) if bn else None
self.relu = nn.LeakyReLU(0.2, inplace=True)
def forward(self, x):
x = self.relu(self.conv(x))
return self.bn(x) if self.bn else x
class DCGAN_D(nn.Module):
def __init__(self, isize, nc, ndf, n_extra_layers=0):
super().__init__()
assert isize % 16 == 0, "isize has to be a multiple of 16"
self.initial = ConvBlock(nc, ndf, 4, 2, bn=False)
csize,cndf = isize/2,ndf
self.extra = nn.Sequential(*[ConvBlock(cndf, cndf, 3, 1)
for t in range(n_extra_layers)])
pyr_layers = []
while csize > 4:
pyr_layers.append(ConvBlock(cndf, cndf*2, 4, 2))
cndf *= 2; csize /= 2
self.pyramid = nn.Sequential(*pyr_layers)
self.final = nn.Conv2d(cndf, 1, 4, padding=0, bias=False)
def forward(self, input):
x = self.initial(input)
x = self.extra(x)
x = self.pyramid(x)
return self.final(x).mean(0).view(1)
class DeconvBlock(nn.Module):
def __init__(self, ni, no, ks, stride, pad, bn=True):
super().__init__()
self.conv = nn.ConvTranspose2d(ni, no, ks, stride, padding=pad, bias=False)
self.bn = nn.BatchNorm2d(no)
self.relu = nn.ReLU(inplace=True)
def forward(self, x):
x = self.relu(self.conv(x))
return self.bn(x) if self.bn else x
class DCGAN_G(nn.Module):
def __init__(self, isize, nz, nc, ngf, n_extra_layers=0):
super().__init__()
assert isize % 16 == 0, "isize has to be a multiple of 16"
cngf, tisize = ngf//2, 4
while tisize!=isize: cngf*=2; tisize*=2
layers = [DeconvBlock(nz, cngf, 4, 1, 0)]
csize, cndf = 4, cngf
while csize < isize//2:
layers.append(DeconvBlock(cngf, cngf//2, 4, 2, 1))
cngf //= 2; csize *= 2
layers += [DeconvBlock(cngf, cngf, 3, 1, 1) for t in range(n_extra_layers)]
layers.append(nn.ConvTranspose2d(cngf, nc, 4, 2, 1, bias=False))
self.features = nn.Sequential(*layers)
def forward(self, input): return F.tanh(self.features(input))
bs,sz,nz = 64,64,100
tfms = tfms_from_stats(inception_stats, sz)
md = ImageClassifierData.from_csv(PATH, 'bedroom', CSV_PATH, tfms=tfms, bs=128,
skip_header=False, continuous=True)
md = md.resize(128)
x,_ = next(iter(md.val_dl))
plt.imshow(md.trn_ds.denorm(x)[0]);
netG = DCGAN_G(sz, nz, 3, 64, 1).cuda()
netD = DCGAN_D(sz, 3, 64, 1).cuda()
def create_noise(b): return V(torch.zeros(b, nz, 1, 1).normal_(0, 1))
preds = netG(create_noise(4))
pred_ims = md.trn_ds.denorm(preds)
fig, axes = plt.subplots(2, 2, figsize=(6, 6))
for i,ax in enumerate(axes.flat): ax.imshow(pred_ims[i])
def gallery(x, nc=3):
n,h,w,c = x.shape
nr = n//nc
assert n == nr*nc
return (x.reshape(nr, nc, h, w, c)
.swapaxes(1,2)
.reshape(h*nr, w*nc, c))
optimizerD = optim.RMSprop(netD.parameters(), lr = 1e-4)
optimizerG = optim.RMSprop(netG.parameters(), lr = 1e-4)
def train(niter, first=True):
gen_iterations = 0
for epoch in trange(niter):
netD.train(); netG.train()
data_iter = iter(md.trn_dl)
i,n = 0,len(md.trn_dl)
with tqdm(total=n) as pbar:
while i < n:
set_trainable(netD, True)
set_trainable(netG, False)
d_iters = 100 if (first and (gen_iterations < 25) or (gen_iterations % 500 == 0)) else 5
j = 0
while (j < d_iters) and (i < n):
j += 1; i += 1
for p in netD.parameters(): p.data.clamp_(-0.01, 0.01)
real = V(next(data_iter)[0])
real_loss = netD(real)
fake = netG(create_noise(real.size(0)))
fake_loss = netD(V(fake.data))
netD.zero_grad()
lossD = real_loss-fake_loss
lossD.backward()
optimizerD.step()
pbar.update()
set_trainable(netD, False)
set_trainable(netG, True)
netG.zero_grad()
lossG = netD(netG(create_noise(bs))).mean(0).view(1)
lossG.backward()
optimizerG.step()
gen_iterations += 1
print(f'Loss_D {to_np(lossD)}; Loss_G {to_np(lossG)}; '
f'D_real {to_np(real_loss)}; Loss_D_fake {to_np(fake_loss)}')
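# note (added): clamping the critic's weights to [-0.01, 0.01] is the weight-clipping trick from the
# WGAN paper, used to approximately enforce the Lipschitz constraint on the critic;
# d_iters controls how many critic updates are performed per generator update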
torch.backends.cudnn.benchmark=True
train(1, False)
fixed_noise = create_noise(bs)
set_trainable(netD, True)
set_trainable(netG, True)
optimizerD = optim.RMSprop(netD.parameters(), lr = 1e-5)
optimizerG = optim.RMSprop(netG.parameters(), lr = 1e-5)
train(1, False)
netD.eval(); netG.eval();
fake = netG(fixed_noise).data.cpu()
faked = np.clip(md.trn_ds.denorm(fake),0,1)
plt.figure(figsize=(9,9))
plt.imshow(gallery(faked, 8));
torch.save(netG.state_dict(), TMP_PATH/'netG_2.h5')
torch.save(netD.state_dict(), TMP_PATH/'netD_2.h5')
###Output
_____no_output_____
###Markdown
**Important: This notebook will only work with fastai-0.7.x. Do not try to run any fastai-1.x code from this path in the repository because it will load fastai-0.7.x**
###Code
%matplotlib inline
%reload_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
WGAN
###Code
from fastai.conv_learner import *
from fastai.dataset import *
import gzip
torch.cuda.set_device(3)
###Output
_____no_output_____
###Markdown
Download the LSUN scene classification dataset bedroom category, unzip it, and convert it to jpg files (the scripts folder is here in the `dl2` folder):```curl 'http://lsun.cs.princeton.edu/htbin/download.cgi?tag=latest&category=bedroom&set=train' -o bedroom.zipunzip bedroom.zippip install lmdbpython lsun-data.py {PATH}/bedroom_train_lmdb --out_dir {PATH}/bedroom```This isn't tested on Windows - if it doesn't work, you could use a Linux box to convert the files, then copy them over. Alternatively, you can download [this 20% sample](https://www.kaggle.com/jhoward/lsun_bedroom) from Kaggle datasets.
###Code
PATH = Path('data/lsun/')
IMG_PATH = PATH/'bedroom'
CSV_PATH = PATH/'files.csv'
TMP_PATH = PATH/'tmp'
TMP_PATH.mkdir(exist_ok=True)
files = PATH.glob('bedroom/**/*.jpg')
with CSV_PATH.open('w') as fo:
for f in files: fo.write(f'{f.relative_to(IMG_PATH)},0\n')
# Optional - sampling a subset of files
CSV_PATH = PATH/'files_sample.csv'
files = PATH.glob('bedroom/**/*.jpg')
with CSV_PATH.open('w') as fo:
for f in files:
if random.random()<0.1: fo.write(f'{f.relative_to(IMG_PATH)},0\n')
class ConvBlock(nn.Module):
def __init__(self, ni, no, ks, stride, bn=True, pad=None):
super().__init__()
if pad is None: pad = ks//2//stride
self.conv = nn.Conv2d(ni, no, ks, stride, padding=pad, bias=False)
self.bn = nn.BatchNorm2d(no) if bn else None
self.relu = nn.LeakyReLU(0.2, inplace=True)
def forward(self, x):
x = self.relu(self.conv(x))
return self.bn(x) if self.bn else x
class DCGAN_D(nn.Module):
def __init__(self, isize, nc, ndf, n_extra_layers=0):
super().__init__()
assert isize % 16 == 0, "isize has to be a multiple of 16"
self.initial = ConvBlock(nc, ndf, 4, 2, bn=False)
csize,cndf = isize/2,ndf
self.extra = nn.Sequential(*[ConvBlock(cndf, cndf, 3, 1)
for t in range(n_extra_layers)])
pyr_layers = []
while csize > 4:
pyr_layers.append(ConvBlock(cndf, cndf*2, 4, 2))
cndf *= 2; csize /= 2
self.pyramid = nn.Sequential(*pyr_layers)
self.final = nn.Conv2d(cndf, 1, 4, padding=0, bias=False)
def forward(self, input):
x = self.initial(input)
x = self.extra(x)
x = self.pyramid(x)
return self.final(x).mean(0).view(1)
class DeconvBlock(nn.Module):
def __init__(self, ni, no, ks, stride, pad, bn=True):
super().__init__()
self.conv = nn.ConvTranspose2d(ni, no, ks, stride, padding=pad, bias=False)
self.bn = nn.BatchNorm2d(no)
self.relu = nn.ReLU(inplace=True)
def forward(self, x):
x = self.relu(self.conv(x))
return self.bn(x) if self.bn else x
class DCGAN_G(nn.Module):
def __init__(self, isize, nz, nc, ngf, n_extra_layers=0):
super().__init__()
assert isize % 16 == 0, "isize has to be a multiple of 16"
cngf, tisize = ngf//2, 4
while tisize!=isize: cngf*=2; tisize*=2
layers = [DeconvBlock(nz, cngf, 4, 1, 0)]
csize, cndf = 4, cngf
while csize < isize//2:
layers.append(DeconvBlock(cngf, cngf//2, 4, 2, 1))
cngf //= 2; csize *= 2
layers += [DeconvBlock(cngf, cngf, 3, 1, 1) for t in range(n_extra_layers)]
layers.append(nn.ConvTranspose2d(cngf, nc, 4, 2, 1, bias=False))
self.features = nn.Sequential(*layers)
def forward(self, input): return F.tanh(self.features(input))
bs,sz,nz = 64,64,100
tfms = tfms_from_stats(inception_stats, sz)
md = ImageClassifierData.from_csv(PATH, 'bedroom', CSV_PATH, tfms=tfms, bs=128,
skip_header=False, continuous=True)
md = md.resize(128)
x,_ = next(iter(md.val_dl))
plt.imshow(md.trn_ds.denorm(x)[0]);
netG = DCGAN_G(sz, nz, 3, 64, 1).cuda()
netD = DCGAN_D(sz, 3, 64, 1).cuda()
def create_noise(b): return V(torch.zeros(b, nz, 1, 1).normal_(0, 1))
preds = netG(create_noise(4))
pred_ims = md.trn_ds.denorm(preds)
fig, axes = plt.subplots(2, 2, figsize=(6, 6))
for i,ax in enumerate(axes.flat): ax.imshow(pred_ims[i])
def gallery(x, nc=3):
n,h,w,c = x.shape
nr = n//nc
assert n == nr*nc
return (x.reshape(nr, nc, h, w, c)
.swapaxes(1,2)
.reshape(h*nr, w*nc, c))
optimizerD = optim.RMSprop(netD.parameters(), lr = 1e-4)
optimizerG = optim.RMSprop(netG.parameters(), lr = 1e-4)
def train(niter, first=True):
gen_iterations = 0
for epoch in trange(niter):
netD.train(); netG.train()
data_iter = iter(md.trn_dl)
i,n = 0,len(md.trn_dl)
with tqdm(total=n) as pbar:
while i < n:
set_trainable(netD, True)
set_trainable(netG, False)
d_iters = 100 if (first and (gen_iterations < 25) or (gen_iterations % 500 == 0)) else 5
j = 0
while (j < d_iters) and (i < n):
j += 1; i += 1
for p in netD.parameters(): p.data.clamp_(-0.01, 0.01)
real = V(next(data_iter)[0])
real_loss = netD(real)
fake = netG(create_noise(real.size(0)))
fake_loss = netD(V(fake.data))
netD.zero_grad()
lossD = real_loss-fake_loss
lossD.backward()
optimizerD.step()
pbar.update()
set_trainable(netD, False)
set_trainable(netG, True)
netG.zero_grad()
lossG = netD(netG(create_noise(bs))).mean(0).view(1)
lossG.backward()
optimizerG.step()
gen_iterations += 1
print(f'Loss_D {to_np(lossD)}; Loss_G {to_np(lossG)}; '
f'D_real {to_np(real_loss)}; Loss_D_fake {to_np(fake_loss)}')
torch.backends.cudnn.benchmark=True
train(1, False)
fixed_noise = create_noise(bs)
set_trainable(netD, True)
set_trainable(netG, True)
optimizerD = optim.RMSprop(netD.parameters(), lr = 1e-5)
optimizerG = optim.RMSprop(netG.parameters(), lr = 1e-5)
train(1, False)
netD.eval(); netG.eval();
fake = netG(fixed_noise).data.cpu()
faked = np.clip(md.trn_ds.denorm(fake),0,1)
plt.figure(figsize=(9,9))
plt.imshow(gallery(faked, 8));
torch.save(netG.state_dict(), TMP_PATH/'netG_2.h5')
torch.save(netD.state_dict(), TMP_PATH/'netD_2.h5')
###Output
_____no_output_____
###Markdown
WGAN
###Code
from fastai.conv_learner import *
from fastai.dataset import *
import gzip
torch.cuda.set_device(3)
###Output
_____no_output_____
###Markdown
Download the LSUN scene classification dataset bedroom category, unzip it, and convert it to jpg files (the scripts folder is here in the `dl2` folder):```curl 'http://lsun.cs.princeton.edu/htbin/download.cgi?tag=latest&category=bedroom&set=train' -o bedroom.zipunzip bedroom.zippip install lmdbpython lsun-data.py {PATH}/bedroom_train_lmdb --out_dir {PATH}/bedroom```This isn't tested on Windows - if it doesn't work, you could use a Linux box to convert the files, then copy them over. Alternatively, you can download [this 20% sample](https://www.kaggle.com/jhoward/lsun_bedroom) from Kaggle datasets.
###Code
PATH = Path('data/lsun/')
IMG_PATH = PATH/'bedroom'
CSV_PATH = PATH/'files.csv'
TMP_PATH = PATH/'tmp'
TMP_PATH.mkdir(exist_ok=True)
files = PATH.glob('bedroom/**/*.jpg')
with CSV_PATH.open('w') as fo:
for f in files: fo.write(f'{f.relative_to(IMG_PATH)},0\n')
# Optional - sampling a subset of files
CSV_PATH = PATH/'files_sample.csv'
files = PATH.glob('bedroom/**/*.jpg')
with CSV_PATH.open('w') as fo:
for f in files:
if random.random()<0.1: fo.write(f'{f.relative_to(IMG_PATH)},0\n')
class ConvBlock(nn.Module):
def __init__(self, ni, no, ks, stride, bn=True, pad=None):
super().__init__()
if pad is None: pad = ks//2//stride
self.conv = nn.Conv2d(ni, no, ks, stride, padding=pad, bias=False)
        self.bn = nn.BatchNorm2d(no) if bn else None
self.relu = nn.LeakyReLU(0.2, inplace=True)
def forward(self, x):
        x = self.relu(self.conv(x))
        return self.bn(x) if self.bn else x
class DCGAN_D(nn.Module):
def __init__(self, isize, nc, ndf, n_extra_layers=0):
super().__init__()
assert isize % 16 == 0, "isize has to be a multiple of 16"
self.initial = ConvBlock(nc, ndf, 4, 2, bn=False)
csize,cndf = isize/2,ndf
self.extra = nn.Sequential(*[ConvBlock(cndf, cndf, 3, 1)
for t in range(n_extra_layers)])
pyr_layers = []
while csize > 4:
pyr_layers.append(ConvBlock(cndf, cndf*2, 4, 2))
cndf *= 2; csize /= 2
self.pyramid = nn.Sequential(*pyr_layers)
self.final = nn.Conv2d(cndf, 1, 4, padding=0, bias=False)
def forward(self, input):
x = self.initial(input)
x = self.extra(x)
x = self.pyramid(x)
return self.final(x).mean(0).view(1)
class DeconvBlock(nn.Module):
def __init__(self, ni, no, ks, stride, pad, bn=True):
super().__init__()
self.conv = nn.ConvTranspose2d(ni, no, ks, stride, padding=pad, bias=False)
self.bn = nn.BatchNorm2d(no)
self.relu = nn.ReLU(inplace=True)
def forward(self, x):
return self.bn(self.relu(self.conv(x)))
class DCGAN_G(nn.Module):
def __init__(self, isize, nz, nc, ngf, n_extra_layers=0):
super().__init__()
assert isize % 16 == 0, "isize has to be a multiple of 16"
cngf, tisize = ngf//2, 4
while tisize!=isize: cngf*=2; tisize*=2
layers = [DeconvBlock(nz, cngf, 4, 1, 0)]
csize, cndf = 4, cngf
while csize < isize//2:
layers.append(DeconvBlock(cngf, cngf//2, 4, 2, 1))
cngf //= 2; csize *= 2
layers += [DeconvBlock(cngf, cngf, 3, 1, 1) for t in range(n_extra_layers)]
layers.append(nn.ConvTranspose2d(cngf, nc, 4, 2, 1, bias=False))
self.features = nn.Sequential(*layers)
def forward(self, input): return F.tanh(self.features(input))
bs,sz,nz = 64,64,100
tfms = tfms_from_stats(inception_stats, sz)
md = ImageClassifierData.from_csv(PATH, 'bedroom', CSV_PATH, tfms=tfms, bs=128,
skip_header=False, continuous=True)
md = md.resize(128)
x,_ = next(iter(md.val_dl))
plt.imshow(md.trn_ds.denorm(x)[0]);
netG = DCGAN_G(sz, nz, 3, 64, 1).cuda()
netD = DCGAN_D(sz, 3, 64, 1).cuda()
def create_noise(b): return V(torch.zeros(b, nz, 1, 1).normal_(0, 1))
preds = netG(create_noise(4))
pred_ims = md.trn_ds.denorm(preds)
fig, axes = plt.subplots(2, 2, figsize=(6, 6))
for i,ax in enumerate(axes.flat): ax.imshow(pred_ims[i])
def gallery(x, nc=3):
n,h,w,c = x.shape
nr = n//nc
assert n == nr*nc
return (x.reshape(nr, nc, h, w, c)
.swapaxes(1,2)
.reshape(h*nr, w*nc, c))
optimizerD = optim.RMSprop(netD.parameters(), lr = 1e-4)
optimizerG = optim.RMSprop(netG.parameters(), lr = 1e-4)
def train(niter, first=True):
gen_iterations = 0
for epoch in trange(niter):
netD.train(); netG.train()
data_iter = iter(md.trn_dl)
i,n = 0,len(md.trn_dl)
with tqdm(total=n) as pbar:
while i < n:
set_trainable(netD, True)
set_trainable(netG, False)
d_iters = 100 if (first and (gen_iterations < 25) or (gen_iterations % 500 == 0)) else 5
j = 0
while (j < d_iters) and (i < n):
j += 1; i += 1
for p in netD.parameters(): p.data.clamp_(-0.01, 0.01)
real = V(next(data_iter)[0])
real_loss = netD(real)
fake = netG(create_noise(real.size(0)))
fake_loss = netD(V(fake.data))
netD.zero_grad()
lossD = real_loss-fake_loss
lossD.backward()
optimizerD.step()
pbar.update()
set_trainable(netD, False)
set_trainable(netG, True)
netG.zero_grad()
lossG = netD(netG(create_noise(bs))).mean(0).view(1)
lossG.backward()
optimizerG.step()
gen_iterations += 1
print(f'Loss_D {to_np(lossD)}; Loss_G {to_np(lossG)}; '
f'D_real {to_np(real_loss)}; Loss_D_fake {to_np(fake_loss)}')
torch.backends.cudnn.benchmark=True
train(1, False)
fixed_noise = create_noise(bs)
set_trainable(netD, True)
set_trainable(netG, True)
optimizerD = optim.RMSprop(netD.parameters(), lr = 1e-5)
optimizerG = optim.RMSprop(netG.parameters(), lr = 1e-5)
train(1, False)
netD.eval(); netG.eval();
fake = netG(fixed_noise).data.cpu()
faked = np.clip(md.trn_ds.denorm(fake),0,1)
plt.figure(figsize=(9,9))
plt.imshow(gallery(faked, 8));
torch.save(netG.state_dict(), TMP_PATH/'netG_2.h5')
torch.save(netD.state_dict(), TMP_PATH/'netD_2.h5')
###Output
_____no_output_____ |
Metodo_de_Euler.ipynb | ###Markdown
###Code
# import the libraries
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def euler_sig(yn,h,f):
y_sig=yn+h*f(yn)
return y_sig
def euler(y0,h,f,ti,tf):
N=int((tf-ti)/h)
t=np.linspace(ti,tf,N+1)
y=np.zeros(N+1)
y[0]=y0
for i in range(0,N):
y[i+1]=euler_sig(y[i],h,f)
return t,y
def g(y):
crecimiento=np.log(2)*y
return crecimiento
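# note (added): g implements dy/dt = ln(2)*y, whose exact solution is y0 * 2**t;
# that exact solution is used below as the analytical reference curve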
# Initialize (values of the variables)
h=0.1
ti=0
tf=10
yi=1
# Numerical solution
t,y=euler(yi,h,g,ti,tf)
plt.plot(t,y, label='S.N.')
# Analytical solution
plt.plot(t,np.power(2,t),label='S.A.')
plt.legend()
last_SN=y[-1]
last_SA=np.power(2,t[-1])
ER=last_SA-last_SN
print('The error of the method for h = {} is {}'.format(h,(ER)))
def r(y):
k=2
m=1000
crecimiento=k*y*(1-(y/m))
return crecimiento
# Initialize (values of the variables)
h=0.1
ti=0
tf=10
yi=1
# Numerical solution
t,y=euler(yi,h,r,ti,tf)
plt.plot(t,y, label='S.N.')
plt.legend()
###Output
_____no_output_____ |
examples/stock_exchange_v1/stock_exchange_v1.ipynb | ###Markdown
Stock Exchange Example Install Requirements
###Code
!pip install --quiet agent-exchange==0.0.5 pandas numpy matplotlib seaborn
from random import shuffle, random, randint
from typing import Sequence, Tuple, Dict, Union
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
from agent_exchange.agent import Agent
from agent_exchange.exchange import Exchange
from stock_exchange_v1 import * # requires stock_exchange_v1.py file to be local
from stock_agent_v1 import * # requires stock_agent_v1.py file to be local
%matplotlib inline
###Output
_____no_output_____
###Markdown
Make Stock Exchange Agents Market MakerA market maker provides liquidity to the market by placing limit orders. Here we implement a naive market maker that randomly places bids and asks, but always places them just inside the spread.
###Code
class StockAgentV1NaiveMaker(StockAgentV1):
"""Naive agent that acts as a market maker.
If there is a spread, this agent will place
an order on the buy side 1/2 of the time, on
the sell side 1/2 of the time.
Note: in V1, there is no way to place multiple
orders within the same time step.
"""
def __init__(self, initial_num_shares=1000, initial_capital=100000, max_margin=1000):
self.max_margin = max_margin
super().__init__(initial_num_shares, initial_capital)
def get_action(self, order_book: StockExchangeV1OrderBook):
if random() < .5: # buy just over current bid
buy_price = order_book.get_bid() + 0.01 # penny-up on the market-clearing buy price
            # Randomly buy as little as 0 or as much as 1000 shares
buy_amount = randint(0, 1000) # limit buy
order = StockExchangeV1Action(StockExchangeV1OrderTypes.LIMIT, buy_amount, buy_price)
return order
else: # sell just under current ask
sell_price = order_book.get_ask() - 0.01
            # Randomly sell as little as 0 or as much as 1000 shares
sell_amount = randint(0, 1000)
order = StockExchangeV1Action(StockExchangeV1OrderTypes.LIMIT, -sell_amount, sell_price)
return order
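# minimal usage sketch (added; assumes an order_book object of the type used above):
#   maker = StockAgentV1NaiveMaker()
#   action = maker.get_action(order_book)  # a limit order placed just inside the current spread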
###Output
_____no_output_____
###Markdown
Market _taker_A taker is a market participant that takes liquidity by placing market orders. Here we implement a naive market taker who buys and sells at random.
###Code
class StockAgentV1NaiveTaker(StockAgentV1):
"""Naive agent that acts as a liquidity
taker. This agent speculates by placing
    market buy orders 1/2 of the time and
    market sell orders 1/2 of the time. Here we ignore
    constraints on short selling, allowing
    agents to short sell without limit.
Also, if the taker's order exhausts the
order book, then only the portion of their
order in the order book gets filled.
"""
def __init__(self, initial_num_shares=1000, initial_capital=100000):
super().__init__(initial_num_shares, initial_capital)
def get_action(self, order_book: StockExchangeV1OrderBook):
if random() < .5: # buy
expected_buy_price = order_book.get_ask()
# Randomly decide on how much to buy
max_buy_amount = self.internal_state.get_capital() // expected_buy_price
max_buy_amount = max(max_buy_amount, 0)
num_shares_to_buy = randint(0, max_buy_amount)
return StockExchangeV1Action(StockExchangeV1OrderTypes.MARKET, num_shares_to_buy)
else:
expected_sell_price = order_book.get_bid()
# Randomly decide on how much to sell
max_sell_amount = max(0, self.internal_state.get_num_shares())
num_shares_to_sell = randint(0, max_sell_amount)
return StockExchangeV1Action(StockExchangeV1OrderTypes.MARKET, -num_shares_to_sell)
###Output
_____no_output_____
###Markdown
Run simulationsNow that we have made a market maker and a market taker, we can run a simulation with them!
###Code
def portfolio_values(ex: StockExchangeV1, ag: StockAgentV1):
"""Value of the agent over time; helps to get visualize agents' values
"""
shares_over_time, capital_over_time = ag.internal_state.num_shares, ag.internal_state.capital
bids, asks = ex.get_bids(), ex.get_asks()
estimated_prices = [(bid + ask)/2 for bid, ask in zip(bids, asks)] # estimate price as midpoint between bid and ask
values = []
for shares, capital, estd_price in zip(shares_over_time, capital_over_time, estimated_prices):
values.append(shares*estd_price + capital)
return values
def run_naive_experiment(num_makers, num_takers, num_steps, verbose=True):
INITIAL_N_SHARES, INITIAL_CAPITAL, INITIAL_PRICE = 1000, 100000, 100
initial_value = INITIAL_PRICE * INITIAL_N_SHARES + INITIAL_CAPITAL
agents = [StockAgentV1NaiveMaker(INITIAL_N_SHARES, INITIAL_CAPITAL) for _ in range(num_makers)]
agents += [StockAgentV1NaiveTaker(INITIAL_N_SHARES, INITIAL_CAPITAL) for _ in range(num_takers)]
exchange = StockExchangeV1(agents)
exchange.simulate_steps(num_steps)
# Put portfolio data together
all_portfolio_values = pd.DataFrame()
for i, agent in enumerate(agents):
all_portfolio_values[i] = portfolio_values(exchange, agent)
maker_indices = set(all_portfolio_values.columns[:num_makers])
taker_indices = set(all_portfolio_values.columns) - maker_indices
maker_portfolio_values = all_portfolio_values[maker_indices]
taker_portfolio_values = all_portfolio_values[taker_indices]
# Get overall returns
maker_final_value = maker_portfolio_values.iloc[-1]
taker_final_value = taker_portfolio_values.iloc[-1]
avg_maker_profit = sum(maker_final_value)/len(maker_final_value) - initial_value
avg_taker_profit = sum(taker_final_value)/len(taker_final_value) - initial_value
avg_maker_return = avg_maker_profit / initial_value
avg_taker_return = avg_taker_profit / initial_value
# Visualize market price fluctuations
if verbose:
plt.style.use('seaborn-colorblind')
w, h = 20, 10
matplotlib.rcParams['figure.figsize'] = [w, h]
_, ((ax1, ax2), (ax3, _)) = plt.subplots(nrows=2, ncols=2, sharex=True)
exchange_sim_data = pd.DataFrame()
exchange_sim_data['bids'] = exchange.get_bids()
exchange_sim_data['asks'] = exchange.get_asks()
sns.lineplot(data=exchange_sim_data, ax=ax1)
ax1.set_title("Market prices")
ax1.set_xlabel("Iteration")
ax1.set_ylabel("Price ($)")
## Maker portfolio values
sns.lineplot(data=maker_portfolio_values.head(num_steps), ax=ax2, legend=False)
ax2.set_title("Maker Value vs. Iteration")
ax2.set_xlabel("Iteration")
ax2.set_ylabel("Value ($)")
print(f"Maker stats:\n{maker_final_value.describe()}")
## Taker portfolio values
sns.lineplot(data=taker_portfolio_values.head(num_steps), ax=ax3, legend=False)
ax3.set_title("Taker Value vs. Iteration")
ax3.set_xlabel("Iteration")
ax3.set_ylabel("Value ($)")
print(f"Taker stats:\n{taker_final_value.describe()}")
return avg_maker_return, avg_taker_return
m_ret, t_ret = run_naive_experiment(num_makers=10, num_takers=10, num_steps=30, verbose=False)
m_ret, t_ret
n_iterations = 10
maker_range = range(1, 8)
taker_range = range(1, 8)
maker_returns = pd.DataFrame(index=maker_range, columns=taker_range)
taker_returns = pd.DataFrame(index=maker_range, columns=taker_range)
for n_makers in maker_range:
for n_takers in taker_range:
        maker_return, taker_return = run_naive_experiment(num_makers=n_makers, num_takers=n_takers, num_steps=30, verbose=False)
maker_returns.loc[n_makers][n_takers] = maker_return
taker_returns.loc[n_makers][n_takers] = taker_return
print(f"Done with ({n_makers}, {n_takers})")
# Maker returns
maker_returns.fillna(value=np.nan, inplace=True)
matplotlib.rcParams['figure.figsize'] = [13, 10]
sns.heatmap(maker_returns, center=0);
# Taker returns
taker_returns.fillna(value=np.nan, inplace=True)
sns.heatmap(taker_returns, center=0);
_ = run_naive_experiment(num_makers=3, num_takers=10, num_steps=100)
# _ = run_naive_experiment(num_makers=5, num_takers=10, num_steps=100, num_plot_steps=100)
# _ = run_naive_experiment(num_makers=8, num_takers=10, num_steps=100, num_plot_steps=100)
# _ = run_naive_experiment(num_makers=12, num_takers=8, num_steps=100, num_plot_steps=100)
###Output
_____no_output_____ |
Lab3-01-GluonNLP-demo/gluon-nlp-demo-na-summit.ipynb | ###Markdown
Using Pre-trained Word Embeddings Word Embedding - Numerical representation for language How? *"You shall know a word by the company it keeps."* - John Rupert Firth **Tezgüino** <- What does this word mean? * A bottle of *Tezgüino* is on the table* *Tezgüino* makes you drunk* Everybody likes *Tezgüino* How about now? Examples Word2VecFastTextGloVe Let's see these in practice
###Code
!pip install gluonnlp
from mxnet import gluon
from mxnet import nd
import gluonnlp as nlp
import re
text = " hello world \n hello nice world \n hi world \n"
###Output
_____no_output_____
###Markdown
We need a tokenizer to process this string
###Code
def simple_tokenize(source_str, token_delim=' ', seq_delim='\n'):
return filter(None, re.split(token_delim + '|' + seq_delim, source_str))
counter = nlp.data.count_tokens(simple_tokenize(text))
counter
vocab = nlp.Vocab(counter)
vocab.idx_to_token
fasttext_simple = nlp.embedding.create('fasttext', source='wiki.simple')
vocab.set_embedding(fasttext_simple)
vocab.embedding['beautiful']
vocab.embedding['hello', 'world'][:, :5]
###Output
_____no_output_____
###Markdown
Application of Pre-trained Word Embeddings
###Code
embedding = nlp.embedding.create('glove', source='glove.6B.50d')
vocab = nlp.Vocab(nlp.data.Counter(embedding.idx_to_token))
vocab.set_embedding(embedding)
len(vocab.idx_to_token)
print(vocab['beautiful'])
print(vocab.idx_to_token[71424])
###Output
71424
beautiful
###Markdown
Word Similarity
###Code
def cos_sim(x, y):
return nd.dot(x, y) / (nd.norm(x) * nd.norm(y))
def norm_vecs_by_row(x):
return x / nd.sqrt(nd.sum(x * x, axis=1)).reshape((-1,1))
def get_knn(vocab, k, word):
word_vec = vocab.embedding[word].reshape((-1, 1))
vocab_vecs = norm_vecs_by_row(vocab.embedding.idx_to_vec)
dot_prod = nd.dot(vocab_vecs[4:], word_vec)
indices = nd.topk(dot_prod.squeeze(), k=k+1, ret_typ='indices')
indices = [int(i.asscalar())+4 for i in indices]
# Remove unknown and input tokens.
return vocab.to_tokens(indices[1:])
get_knn(vocab, 5, 'baby')
###Output
_____no_output_____
###Markdown
We can verify the cosine similarity of vectors of 'baby' and 'babies'.
###Code
cos_sim(vocab.embedding['baby'], vocab.embedding['babies'])
###Output
_____no_output_____
###Markdown
Let us find the 5 most similar words of 'beautiful' from the vocabulary.
###Code
get_knn(vocab, 5, 'beautiful')
###Output
_____no_output_____
###Markdown
Word Analogy
###Code
def get_top_k_by_analogy(vocab, k, word1, word2, word3):
word_vecs = vocab.embedding[word1, word2, word3]
word_diff = (word_vecs[1] - word_vecs[0] + word_vecs[2])
vocab_vecs = norm_vecs_by_row(vocab.embedding.idx_to_vec)
dot_prod = nd.dot(vocab_vecs[4:], word_diff.squeeze()).squeeze()
indices = dot_prod.topk(k=k, ret_typ='indices')
indices = [int(i.asscalar())+4 for i in indices]
return vocab.to_tokens(indices)
get_top_k_by_analogy(vocab, 1, 'man', 'woman', 'son')
get_top_k_by_analogy(vocab, 3, 'argentina', 'messi', 'france')
get_top_k_by_analogy(vocab, 1, 'argentina', 'football', 'india')
get_top_k_by_analogy(vocab, 1, 'france', 'crepes', 'argentina')
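# note (added): the analogy is plain vector arithmetic, vec(word2) - vec(word1) + vec(word3),
# followed by a nearest-neighbour search over the row-normalised embedding matrix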
###Output
_____no_output_____
###Markdown
Text Classification with
###Code
import mxnet as mx
from mxnet import nd, autograd, gluon
###Output
_____no_output_____
###Markdown
Data Download
###Code
base_url = 'http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/'
prefix = 'reviews_'
suffix = '_5.json.gz'
folder = 'data'
categories = [
'Home_and_Kitchen',
'Books',
'CDs_and_Vinyl',
'Movies_and_TV',
'Cell_Phones_and_Accessories',
'Sports_and_Outdoors',
'Clothing_Shoes_and_Jewelry'
]
!mkdir -p $folder
for category in categories:
print(category)
url = base_url+prefix+category+suffix
!wget -P $folder $url -nc -nv
###Output
Home_and_Kitchen
Books
CDs_and_Vinyl
Movies_and_TV
Cell_Phones_and_Accessories
Sports_and_Outdoors
Clothing_Shoes_and_Jewelry
###Markdown
Load and Preprocess Data
###Code
import pandas as pd
import gzip
def parse(path):
g = gzip.open(path, 'rb')
for line in g:
yield eval(line)
def get_dataframe(path, num_lines):
i = 0
df = {}
for d in parse(path):
if i > num_lines:
break
df[i] = d
i += 1
return pd.DataFrame.from_dict(df, orient='index')
MAX_ITEMS_PER_CATEGORY = 250000
# Loading data from file if exist
try:
data = pd.read_pickle('pickleddata.pkl')
except:
data = None
if data is None:
data = pd.DataFrame(data={'X':[],'Y':[]})
for index, category in enumerate(categories):
df = get_dataframe("{}/{}{}{}".format(folder, prefix, category, suffix), MAX_ITEMS_PER_CATEGORY)
# Each review's summary is prepended to the main review text
df = pd.DataFrame(data={'X':(df['summary']+' | '+df['reviewText'])[:MAX_ITEMS_PER_CATEGORY],'Y':index})
data = data.append(df)
print('{}:{} reviews'.format(category, len(df)))
# Shuffle the samples
data = data.sample(frac=1)
data.reset_index(drop=True, inplace=True)
# Saving the data in a pickled file
pd.to_pickle(data, 'pickleddata.pkl')
###Output
_____no_output_____
###Markdown
Visualize Data
###Code
data.head()
###Output
_____no_output_____
###Markdown
Gluon `Dataset` and `DataLoader`
###Code
from mxnet.gluon.data import ArrayDataset
from mxnet.gluon.data import DataLoader
import numpy as np
import multiprocessing
ALPHABET = list("abcdefghijklmnopqrstuvwxyz0123456789-,;.!?:'\"/\\|_@#$%^&*~`+ =<>()[]{}") # The 69 characters as specified in the paper
ALPHABET_INDEX = {letter: index for index, letter in enumerate(ALPHABET)} # { a: 0, b: 1, etc}
FEATURE_LEN = 1014 # max-length in characters for one document
NUM_WORKERS = multiprocessing.cpu_count() # number of workers used in the data loading
BATCH_SIZE = 128 # number of documents per batch
def encode(text):
encoded = np.zeros([len(ALPHABET), FEATURE_LEN], dtype='float32')
review = text.lower()[:FEATURE_LEN-1:-1]
i = 0
for letter in text:
if i >= FEATURE_LEN:
break;
if letter in ALPHABET_INDEX:
encoded[ALPHABET_INDEX[letter]][i] = 1
i += 1
return encoded
class AmazonDataSet(ArrayDataset):
# We pre-process the documents on the fly
def __getitem__(self, idx):
return encode(self._data[0][idx]), self._data[1][idx]
split = 0.8
split_index = int(split*len(data))
train_data_X = data['X'][:split_index].values
train_data_Y = data['Y'][:split_index].values
test_data_X = data['X'][split_index:].values
test_data_Y = data['Y'][split_index:].values
train_dataset = AmazonDataSet(train_data_X, train_data_Y)
test_dataset = AmazonDataSet(test_data_X, test_data_Y)
train_dataloader = DataLoader(train_dataset, shuffle=True, batch_size=BATCH_SIZE, num_workers=NUM_WORKERS, last_batch='discard')
test_dataloader = DataLoader(test_dataset, shuffle=True, batch_size=BATCH_SIZE, num_workers=NUM_WORKERS, last_batch='discard')
###Output
_____no_output_____
###Markdown
Create the Network
###Code
ctx = mx.gpu() # to run on GPU
NUM_FILTERS = 256 # number of convolutional filters per convolutional layer
NUM_OUTPUTS = len(categories) # number of classes
FULLY_CONNECTED = 1024 # number of unit in the fully connected dense layer
DROPOUT_RATE = 0.5 # probability of node drop out
LEARNING_RATE = 0.01 # learning rate of the gradient
MOMENTUM = 0.9 # momentum of the gradient
WDECAY = 0.00001 # regularization term to limit size of weights
net = gluon.nn.HybridSequential()
with net.name_scope():
net.add(gluon.nn.Conv1D(channels=NUM_FILTERS, kernel_size=7, activation='relu'))
net.add(gluon.nn.MaxPool1D(pool_size=3, strides=3))
net.add(gluon.nn.Conv1D(channels=NUM_FILTERS, kernel_size=7, activation='relu'))
net.add(gluon.nn.MaxPool1D(pool_size=3, strides=3))
net.add(gluon.nn.Conv1D(channels=NUM_FILTERS, kernel_size=3, activation='relu'))
net.add(gluon.nn.Conv1D(channels=NUM_FILTERS, kernel_size=3, activation='relu'))
net.add(gluon.nn.Conv1D(channels=NUM_FILTERS, kernel_size=3, activation='relu'))
net.add(gluon.nn.Conv1D(channels=NUM_FILTERS, kernel_size=3, activation='relu'))
net.add(gluon.nn.MaxPool1D(pool_size=3, strides=3))
net.add(gluon.nn.Flatten())
net.add(gluon.nn.Dense(FULLY_CONNECTED, activation='relu'))
net.add(gluon.nn.Dropout(DROPOUT_RATE))
net.add(gluon.nn.Dense(FULLY_CONNECTED, activation='relu'))
net.add(gluon.nn.Dropout(DROPOUT_RATE))
net.add(gluon.nn.Dense(NUM_OUTPUTS))
###Output
_____no_output_____
###Markdown
Initialize Network Parameters
###Code
hybridize = True # for speed improvement, compile the network but no in-depth debugging possible
load_params = False # Load pre-trained model
if load_params:
net.load_params('crepe_gluon_epoch6.params', ctx=ctx)
else:
net.collect_params().initialize(mx.init.Xavier(magnitude=2.24), ctx=ctx)
if hybridize:
net.hybridize()
###Output
_____no_output_____
###Markdown
Loss and Optimizer
###Code
softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd',
{'learning_rate': LEARNING_RATE,
'wd':WDECAY,
'momentum':MOMENTUM})
###Output
_____no_output_____
###Markdown
Evaluation Metric
###Code
def evaluate_accuracy(data_iterator, net):
acc = mx.metric.Accuracy()
for i, (data, label) in enumerate(data_iterator):
data = data.as_in_context(ctx)
label = label.as_in_context(ctx)
output = net(data)
prediction = nd.argmax(output, axis=1)
if (i%50 == 0):
print("Samples {}".format(i*len(data)))
acc.update(preds=prediction, labels=label)
return acc.get()[1]
###Output
_____no_output_____
###Markdown
Training Loop
###Code
start_epoch = 0
number_epochs = 6
smoothing_constant = .01
for e in range(start_epoch, number_epochs):
for i, (review, label) in enumerate(train_dataloader):
review = review.as_in_context(ctx)
label = label.as_in_context(ctx)
with autograd.record():
output = net(review)
loss = softmax_cross_entropy(output, label)
loss.backward()
trainer.step(review.shape[0])
# moving average of the loss
curr_loss = nd.mean(loss).asscalar()
moving_loss = (curr_loss if (i == 0)
else (1 - smoothing_constant) * moving_loss + (smoothing_constant) * curr_loss)
if (i%50 == 0):
nd.waitall()
print('Batch {}:{},{}'.format(i,curr_loss,moving_loss))
test_accuracy = evaluate_accuracy(test_dataloader, net)
#Save the model using the gluon params format
net.save_params('crepe_epoch_{}_test_acc_{}.params'.format(e,int(test_accuracy*10000)/100))
print("Epoch %s. Loss: %s, Test_acc %s" % (e, moving_loss, test_accuracy))
###Output
Batch 0:1.9453378915786743,1.9453378915786743
Batch 50:1.9373124837875366,1.943315461356685
Batch 100:1.9081898927688599,1.932043050003778
Batch 150:1.8508967161178589,1.9103771596963577
Batch 200:1.9001646041870117,1.897644773367536
Batch 250:1.898385763168335,1.893431542959099
Batch 300:1.856143832206726,1.8850829157477298
Batch 350:1.8237965106964111,1.880243124750212
Batch 400:1.8509269952774048,1.8771633303411561
Batch 450:1.8601528406143188,1.8749653442173433
Batch 500:1.8604050874710083,1.8724235391005881
Batch 550:1.8590565919876099,1.8718379428710714
Batch 600:1.8669337034225464,1.8693746529364783
Batch 650:1.8383421897888184,1.8681572097294508
Batch 700:1.8637014627456665,1.8687037009062701
Batch 750:1.824965000152588,1.8625427316355179
Batch 800:1.878620982170105,1.8560348505611044
Batch 850:1.8122719526290894,1.8397081812034204
Batch 900:1.6698048114776611,1.8129354063315397
Batch 950:1.6810656785964966,1.7841103058811965
Batch 1000:1.6665819883346558,1.7381734343849182
Batch 1050:1.9087369441986084,1.7972570259284935
Batch 1100:1.9249498844146729,1.8330576966287526
Batch 1150:1.848611831665039,1.8529058227425383
Batch 1200:1.8461025953292847,1.8638277963046053
Batch 1250:1.8916798830032349,1.8644350844218518
Batch 1300:1.8725816011428833,1.866532087691531
Batch 1350:1.9139583110809326,1.8665839722617905
Batch 1400:1.8886419534683228,1.8700319859645074
Batch 1450:1.871649980545044,1.8723657607944806
Batch 1500:1.8444362878799438,1.8726145516597001
Batch 1550:1.9569578170776367,1.8740631333724174
Batch 1600:1.847110629081726,1.8682563343259702
Batch 1650:1.8796321153640747,1.8707884232773702
Batch 1700:1.8296846151351929,1.867869264999772
Batch 1750:1.9194695949554443,1.8694412673397642
Batch 1800:1.8853989839553833,1.8638438544336606
Batch 1850:1.878685712814331,1.8634098309520748
Batch 1900:1.86069917678833,1.858698771873116
Batch 1950:1.7538530826568604,1.8365323374799165
Batch 2000:1.7624948024749756,1.841136481176103
Batch 2050:1.6832857131958008,1.8002076730009258
Batch 2100:1.9362415075302124,1.809976246448211
Batch 2150:1.5954055786132812,1.775792594988112
Batch 2200:1.4584770202636719,1.6868196314080954
Batch 2250:1.576601505279541,1.6216554992042063
Batch 2300:1.4812443256378174,1.557055327590551
Batch 2350:1.3834877014160156,1.502853024399113
Batch 2400:1.336965799331665,1.456040571370033
Batch 2450:1.2782310247421265,1.4145691765522772
Batch 2500:1.3771641254425049,1.3936892664590668
Batch 2550:1.2706668376922607,1.361659306358235
Batch 2600:1.2734166383743286,1.3351345937998267
Batch 2650:1.2477741241455078,1.304200714701585
Batch 2700:1.2611356973648071,1.2788309453630742
Batch 2750:1.2107969522476196,1.2864631086156533
Batch 2800:1.1519348621368408,1.2732149880476045
Batch 2850:1.1131393909454346,1.2486884570601255
Batch 2900:1.2893774509429932,1.2235636463360515
Batch 2950:1.3606523275375366,1.2095012342303335
Batch 3000:1.2341409921646118,1.1875330016280221
Batch 3050:1.1036291122436523,1.195664747555009
Batch 3100:1.0621364116668701,1.1890826649339228
Batch 3150:1.0590134859085083,1.1641739952794277
Batch 3200:1.1681071519851685,1.1451550589781414
Batch 3250:1.0229195356369019,1.1325940563554246
Batch 3300:1.079014778137207,1.1141213653569366
Batch 3350:1.2361879348754883,1.1159747056681775
Batch 3400:1.205361008644104,1.105009222446501
Batch 3450:1.1291594505310059,1.0983195917779331
Batch 3500:1.064098596572876,1.087142925409798
Batch 3550:1.099172830581665,1.0852045338908127
Batch 3600:0.9594655632972717,1.0706505950584975
Batch 3650:1.2273164987564087,1.0678795522641864
Batch 3700:1.038029432296753,1.0568314232345457
Batch 3750:1.0345441102981567,1.0514119130218187
Batch 3800:1.0126543045043945,1.0367909375944986
Batch 3850:1.1393961906433105,1.0301392440377153
Batch 3900:1.0488090515136719,1.0106482383602724
Batch 3950:0.9170652031898499,1.024482670403881
Batch 4000:0.9427878856658936,1.0243950783226972
Batch 4050:0.8843975067138672,0.9905536361093508
Batch 4100:0.8888476490974426,1.0504400809502097
Batch 4150:0.9303534626960754,1.0494979573971543
Batch 4200:0.898644208908081,1.0341939556376147
Batch 4250:0.7436461448669434,1.00889533177222
Batch 4300:0.9273042678833008,0.9755362272484599
Batch 4350:0.8877531886100769,0.9674427190319904
Batch 4400:1.1328967809677124,0.9738816294062626
Batch 4450:0.9573812484741211,0.9518427241266093
Batch 4500:0.8913918137550354,0.9629289740318445
Batch 4550:0.8544043898582458,0.9472341048191089
Batch 4600:0.666782796382904,0.9243558497265563
Batch 4650:0.8160680532455444,0.9065996666523378
Batch 4700:0.9485316276550293,0.8833252974388099
Batch 4750:0.8443765044212341,0.8680094950777169
Batch 4800:0.8347811698913574,0.8624728163630457
Batch 4850:0.8099501132965088,0.8490963395452955
Batch 4900:0.7650530934333801,0.8265584423364825
Batch 4950:0.8083596229553223,0.8211039998075594
Batch 5000:0.8614997267723083,0.8259077664067933
Batch 5050:0.7735437750816345,0.808474110675564
Batch 5100:0.645978569984436,0.7982894140102011
Batch 5150:0.7369328141212463,0.7927842614616121
Batch 5200:0.7070375084877014,0.7947179728462571
Batch 5250:0.7669934034347534,0.7793223107741862
Batch 5300:0.648193359375,0.7605232354560412
Batch 5350:0.7416223287582397,0.7585503382679126
Batch 5400:0.7981933951377869,0.7482846768039738
Batch 5450:0.6912879943847656,0.7435689229115255
Batch 5500:0.7619497776031494,0.7363077731690414
Batch 5550:0.5932537317276001,0.7089893360136923
Batch 5600:0.7122735977172852,0.7135108894146994
Batch 5650:0.7742758989334106,0.7011550385053558
Batch 5700:0.5922141671180725,0.6874558257027454
Batch 5750:0.553615927696228,0.6815414922093309
Batch 5800:0.8475807309150696,0.708609440146428
Batch 5850:0.5915058851242065,0.6824490589197143
Batch 5900:0.6742552518844604,0.6640234860546748
Batch 5950:0.6415994167327881,0.6708702698962937
Batch 6000:0.7043026089668274,0.6593984448936463
Batch 6050:0.7187190055847168,0.6541955304561856
Batch 6100:0.596881091594696,0.6548190358955903
Batch 6150:0.708433210849762,0.6339507446425041
Batch 6200:0.65164715051651,0.6277746454389783
Batch 6250:0.5474228262901306,0.6170153056710777
Batch 6300:0.720673680305481,0.61511887096338
Batch 6350:0.6575923562049866,0.6241926481620532
Batch 6400:0.6677784323692322,0.6114716716673397
Batch 6450:0.8161919116973877,0.6039949347469279
Batch 6500:0.6343576908111572,0.5890953920229128
Batch 6550:0.6406301856040955,0.5807989215002083
Batch 6600:0.5672842860221863,0.5723362478117832
Batch 6650:0.6819568872451782,0.5714187825095324
Batch 6700:0.49000605940818787,0.571413701412841
Batch 6750:0.5276893377304077,0.569738663706945
Batch 6800:0.44463661313056946,0.5631019631472166
Batch 6850:0.5261471271514893,0.5534414635015849
Batch 6900:0.378284215927124,0.5506974315674976
Batch 6950:0.5317144989967346,0.5534542084975974
Batch 7000:0.4939468801021576,0.5386011991181794
Batch 7050:0.4034293293952942,0.5394378124185936
Batch 7100:0.6703130006790161,0.5385360586478665
Batch 7150:0.42178627848625183,0.5430131880563699
Batch 7200:0.5539587140083313,0.530087350948357
Batch 7250:0.5476998686790466,0.5334676622804818
Batch 7300:0.4491931200027466,0.5194388770565643
Batch 7350:0.5936780571937561,0.5113525672951424
Batch 7400:0.3533867597579956,0.5120763128701298
Batch 7450:0.500450849533081,0.514983253922148
Batch 7500:0.6001191139221191,0.5197977747576322
Batch 7550:0.4414985477924347,0.5108791618496595
Batch 7600:0.47784167528152466,0.511500274612454
Batch 7650:0.6313395500183105,0.5135009335753875
Batch 7700:0.4201299846172333,0.5044276278567247
Batch 7750:0.43057698011398315,0.49498519543880104
Batch 7800:0.421235591173172,0.49373594852094355
Batch 7850:0.5850293040275574,0.4874337910059817
Batch 7900:0.4728736877441406,0.49203616230162417
Batch 7950:0.5296165943145752,0.4921089781533269
Batch 8000:0.5693755745887756,0.4903399768488542
Batch 8050:0.4520331025123596,0.4980184213887254
Batch 8100:0.6577197909355164,0.4839749393634911
Batch 8150:0.42644789814949036,0.4894406397049729
Batch 8200:0.3823671042919159,0.4833629030072527
Batch 8250:0.43571120500564575,0.4697571965284362
Batch 8300:0.3432028591632843,0.4687206808578902
Batch 8350:0.4749666452407837,0.4682180119457685
Batch 8400:0.2713819444179535,0.456227908798264
###Markdown
Test with example review
###Code
review_title = "Good stuff"
review = "This album is definitely better than the previous one"
print(review_title)
print(review + '\n')
encoded = nd.array([encode(review + " | " + review_title)], ctx=ctx)
output = net(encoded)
softmax = nd.exp(output) / nd.sum(nd.exp(output))[0]
predicted = categories[np.argmax(output[0].asnumpy())]
print('Predicted: {}\n'.format(predicted))
for i, val in enumerate(categories):
print(val, float(int(softmax[0][i].asnumpy()*1000)/10), '%')
###Output
_____no_output_____ |
Winternship_covid19.ipynb | ###Markdown
P_t = P_o * (e^(r*t))
###Code
a = d2['total_count'].pct_change(fill_method ='ffill')
r = a.mean(axis = 0)
P_o = 31
t = 26
P_t= P_o*(pow(2.718281828459045,t*r))
P_t
###Output
_____no_output_____ |
notes/23-cloud-computing.ipynb | ###Markdown
Cloud Computing**Definition**: Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and [economies of scale](https://en.wikipedia.org/wiki/Economies_of_scale).[-Wikipedia](https://en.wikipedia.org/wiki/Cloud_computing) Architecture**Cloud computing metaphor**: the group of networked elements providing services need not be individually addressed or managed by users; instead, the entire provider-managed suite of hardware and software can be thought of as an **amorphous cloud**.Googles Colab Notebooks is an example of application based cloud computing (for that matter their whole suite of application falls into that category). Service Models * [Software as a service (SaaS)](https://en.wikipedia.org/wiki/Cloud_computingSoftware_as_a_service_(SaaS)) **Definition**: The capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. **Examples**: Google Docs and Colab Notebooks * [Platform as a service (PaaS)](https://en.wikipedia.org/wiki/Cloud_computingPlatform_as_a_service_(PaaS)) **Definition**: The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider. **Examples**: AWS Sagemaker and S3 * [Infrastructure as a service (IaaS)](https://en.wikipedia.org/wiki/Cloud_computingInfrastructure_as_a_service_(IaaS)) **Definition**: The consumer is able to deploy and run arbitrary software, which can include operating systems and applications and has control over operating systems, storage, and deployed applications. **Examples**: AWS and Azure Data Science in the Cloud A cloud-based architecture of a data science processing pipeline taking advantage of AWS' IaaS. All the components can be provisioned and configured in AWS console or through their DevOps API. ([Source](https://aws.amazon.com/blogs/big-data/big-data-analytics-options-on-aws-updated-white-paper/))We will take a look at two components in the above diagram: * Cloud-based Storage: [S3](https://aws.amazon.com/s3/) (component 3) Amazon Simple Storage Service (Amazon S3) is an object storage service that offers scalability, data availability, security, and performance. * Cloud-based Machine Learning: [Sagemaker](https://aws.amazon.com/sagemaker/) (component 5) Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. ExperimentsIf you have an AWS account here are two nice tutorials* [s3 tutorial](https://aws.amazon.com/getting-started/hands-on/backup-files-to-amazon-s3/)* [Jupyter/Sagemaker](https://aws.amazon.com/getting-started/hands-on/build-train-deploy-machine-learning-model-sagemaker/)We will be doing work in AWS Classrooms. Exercise 11. Goto the S3 console.1. Create a bucket.1. Upload the 'tennis-numeric.csv' file into your bucket.1. Using the SQL interface figure out what the average temperature is for the week. * Make sure you ticked off the 'File has header row' box. * Query: select avg(cast(temperature as integer)) from s3object * Note that you have to explicitly cast the temperature column as integer! 1. Using the SQL interface figure out how often we play or not play tennis in that week. 
* Query: select count(play) from s3object where cast(play as string) like '%yes%' * Note that you have to use 'like' with wildcard characters '%' in order to match 'yes' in case there are hidden characters. Exercise 21. Goto the Sagemaker notebooks console.1. Create a notebook instance and open it in the Jupyter console.1. Create a notebook to do the following: 1. Access your play tennis data set stored in S3. 1. Build a decision tree 1. Print the tree and its accuracy The following code snippets will be useful for the above exercisesCut and paste them into your Sagemaker notebook ```Python Accessing buckets for Machine Learning in Sagemakerimport s3fsimport pandas as pddf = pd.read_csv('s3:///.csv')df.head()``` ```Python print decision treeimport operatordef tree_print(clf, X): tlevel = _tree_rprint('', clf, X.columns, clf.classes_) print('<',end='') for i in range(3*tlevel - 2): print('-',end='') print('>') print('Tree Depth: ',tlevel)def _tree_rprint(kword, clf, features, labels, node_index=0, tlevel_index=0): for i in range(tlevel_index): print(' |',end='') if clf.tree_.children_left[node_index] == -1: indicates leaf print(kword, end=' ' if kword else '') get the majority label count_list = clf.tree_.value[node_index, 0] if len(count_list) == 1: regression problem print(count_list[0]) else: get the majority label max_index, max_value = max(enumerate(count_list), key=operator.itemgetter(1)) max_label = labels[max_index] print(max_label) return tlevel_index else: compute and print node label feature = features[clf.tree_.feature[node_index]] threshold = clf.tree_.threshold[node_index] print(kword, end=' ' if kword else '') print('if {} =< {}: '.format(feature, threshold)) recurse down the children left_index = clf.tree_.children_left[node_index] right_index = clf.tree_.children_right[node_index] ltlevel_index = _tree_rprint('then', clf, features, labels, left_index, tlevel_index+1) rtlevel_index = _tree_rprint('else', clf, features, labels, right_index, tlevel_index+1) return the maximum depth of either one of the children return max(ltlevel_index,rtlevel_index)``` ```Pythonfrom sklearn import treefrom sklearn.metrics import accuracy_score set up dataX = df.drop(['play'],axis=1)y = df['play'] set up the tree model object - limit the complexity to put us somewhere in the middle of the graph.model = tree.DecisionTreeClassifier(criterion='entropy', max_depth=None) fit the model on the training set of datamodel.fit(X, y) evaluate modeltree_print(model,X)y_model = model.predict(X)acc = accuracy_score(y, y_model)print("Accuracy: {:3.2f}".format(acc))``` Exercise 3Query your bucket from your Sagemaker notebook and answer the same questions from exercise 2.The following snippets will be helpful. ```Pythonimport pandas as pdimport boto3from io import StringIOdef query_bucket(sql, bucket, key): ''' Query an S3 bucket using 'Select From SQL' syntax. If data was found then return a dataframe otherwise return 'None'. ''' s3 = boto3.client('s3') resp = s3.select_object_content( Bucket=bucket, Key=key, ExpressionType='SQL', Expression=sql, InputSerialization = {'CSV': {"FileHeaderInfo": "Use"}, 'CompressionType': 'NONE'}, OutputSerialization = {'CSV': {}}, ) event_stream = resp['Payload'] for event in event_stream: if 'Records' in event: data_in = StringIO(str(event['Records']['Payload'].decode("utf-8"))) df = pd.read_csv(data_in, header=None) return df return None``` ```Python launch a querysql = "select temperature,play from s3object"df = query_bucket(sql, "", "")print(df)```
###Code
###Output
_____no_output_____ |
writing_efficient_code/foundations.ipynb | ###Markdown
Python Standard LibraryThe built in library for Python is one of its strengths, and ships with a number of built in types that are used constantly throughout any type of project. Alongside these are the number of functions as well that are built specifically to work within a Python environment.
###Code
object_to_apply_function_on = [[1,2,2,1], [3,4], [5,6]]
# Map is one of Python's built-in functions
print(list(map(len, object_to_apply_function_on)))
print(list(map(lambda x: sum(x)/len(x), object_to_apply_function_on)))
###Output
[1.5, 3.5, 5.5]
###Markdown
Map takes a function that you are to apply to every element in a given object. The second example here showing how we can use a defined `lambda` function within `map` as well.
###Code
print(range(0,10))
[*range(0,10)]
###Output
range(0, 10)
###Markdown
You can see above that we used the unpacking syntax for `range` to unpack the `range` object into a list; we specify a list by the square brackets. NumPy ArraysThey are a memory efficient alternative to the python `list`. They are homogenous, meaning that they must contain all elements of the same data type. It is this homogenous nature that allows for the removal of the type checking that is present in Python's base `list`.
###Code
import numpy as np
list_a = [1,2,3,4,5,6]
array_a = np.array(list_a)
print(list_a)
print(array_a)
# Broadcasting with numpy arrays
array_a * 4
list_a * 4
###Output
_____no_output_____
###Markdown
Here we can see that `np.array` objects vectorise the process, whereas the `lists` multiplies the list and appends them together.
###Code
array_a ** 3
list_a ** 3
[x ** 3 for x in list_a]
###Output
_____no_output_____ |
source/.ipynb_checkpoints/Test_Short_Lorenz-checkpoint.ipynb | ###Markdown
Dataset generation
###Code
## second u0 is off-phase from the first by half a Lyapunov time
u0 = np.array([7.432487609628195, 10.02071718705213, 29.62297428638419])
dt = 0.01
t_lyap = 0.9**(-1)
N_lyap = int(t_lyap/dt) # number of time steps in one Lyapunov time
# number of time steps for washout, train, validation and PH window
N_washout = 1*N_lyap
N_train = 8*N_lyap
N_val = 3*N_lyap
N_tstart = 24*N_lyap #start for the test set
N_test = 500*N_lyap
norm = np.array([35.23020746, 45.09776766, 36.07598481])
# generate data
U = solve_ode(N_washout+N_train+N_val+N_test, dt, u0)
# washout
U_washout = U[:N_washout]
# training
U_t = U[N_washout:N_washout+N_train-1]
Y_t = U[N_washout+1:N_washout+N_train]
# training + validation
U_tv = U[N_washout:N_washout+N_train+N_val-1]
Y_tv = U[N_washout+1:N_washout+N_train+N_val]
# validation
Y_val = U[N_washout+N_train:N_washout+N_train+N_val]
###Output
_____no_output_____
###Markdown
Import Results From Optimization Runs
###Code
model_informed = True
if model_informed:
string = '_MI.h5'
%run ./Functions_MI.ipynb
else:
string = '.h5'
#BO and Grid Search in SSV
hf = h5py.File('./data/Lor_short_SSV_50_5' + str(string),'r')
Min_25G = np.array(hf.get('minimum'))
hf.close()
hf = h5py.File('./data/Lor_short_SSV_50_7' + str(string),'r')
Min_50G = np.array(hf.get('minimum'))
hf.close()
# #BO and Grid Search in RVC
hf = h5py.File('./data/Lor_short_RVC_50_5' + str(string),'r')
Min_25G_mmv = np.array(hf.get('minimum'))
hf.close()
hf = h5py.File('./data/Lor_short_RVC_50_7' + str(string),'r')
Min_50G_mmv = np.array(hf.get('minimum'))
hf.close()
# BO and Grid Search in WFV
hf = h5py.File('./data/Lor_short_WFV_50_5' + str(string),'r')
Min_25G_wfv = np.array(hf.get('minimum'))
hf.close()
hf = h5py.File('./data/Lor_short_WFV_50_7' + str(string),'r')
Min_50G_wfv = np.array(hf.get('minimum'))
hf.close()
#BO and Grid Search in KFV
hf = h5py.File('./data/Lor_short_KFV_50_5' + str(string),'r')
Min_25G_kfv = np.array(hf.get('minimum'))
hf.close()
hf = h5py.File('./data/Lor_short_KFV_50_7' + str(string),'r')
Min_50G_kfv = np.array(hf.get('minimum'))
hf.close()
#BO and Grid Search in RV
hf = h5py.File('./data/Lor_short_RV_50_5' + str(string),'r')
Min_25G_mv = np.array(hf.get('minimum'))
hf.close()
hf = h5py.File('./data/Lor_short_RV_50_7' + str(string),'r')
Min_50G_mv = np.array(hf.get('minimum'))
hf.close()
#BO and Grid Search in KFC
hf = h5py.File('./data/Lor_short_KFC_50_5' + str(string),'r')
Min_25G_kfo = np.array(hf.get('minimum'))
hf.close()
hf = h5py.File('./data/Lor_short_KFC_50_7' + str(string),'r')
Min_50G_kfo = np.array(hf.get('minimum'))
hf.close()
#BO and Grid Search WFC
hf = h5py.File('./data/Lor_short_WFC_50_5' + str(string),'r')
Min_25G_wfc = np.array(hf.get('minimum'))
hf.close()
hf = h5py.File('./data/Lor_short_WFC_50_7' + str(string),'r')
Min_50G_wfc = np.array(hf.get('minimum'))
hf.close()
###Output
_____no_output_____
###Markdown
ESN Initiliazation Parameters
###Code
bias_in = 1.0 #input bias
bias_out = 1.0 #output bias
N_units = 100 #units in the reservoir
dim = 3 # dimension of inputs (and outputs)
connectivity = 3
sigma_in = 1.0 #input scaling
rho = 1.0 # spectral radius
sparseness = 1 - connectivity/(N_units-1)
tikh = 1e-11 # Tikhonov factor
# Functions in the test set
def f(x):
"""Computes MSE and PH in the test set of N_test points"""
global rho, sigma_in
rho = x[0]
sigma_in = x[1]
#Train on the entire dataset
Wout = train(U_washout, U_tv, Y_tv, tikh)[1]
N_test = 100 # N_test intervals
Mean_MSE = 0
Mean_PH = 0
kk = 5 # the PH is computed up to kk*N_val for each interval
#Different Folds in the cross-validation
for i in range(N_test):
# data for washout and target in each interval
U_wash = U[N_tstart - N_washout +i*N_val : N_tstart+i*N_val]
Y_t_MSE = U[N_tstart+i*N_val:N_tstart+i*N_val+N_val]
Y_t_PH = U[N_tstart+i*N_val:N_tstart+i*N_val+kk*N_val]
#washout for each interval
xa1 = open_loop(U_wash, np.zeros(N_units))[-1]
# Mean Square Error
Yh_t_MSE = closed_loop(N_val-1, xa1, Wout)[0]
Mean1 = np.log10(np.mean(((Yh_t_MSE-Y_t_MSE))**2))
# np.isnan because trajectory may become unbounded in model-informed architecture
if np.isnan(Mean1):
Mean1 = 10
Mean_MSE += Mean1
# Prediction Horizon
Mean_PH += predictability_horizon(xa1,Y_t_PH,Wout)
return Mean_MSE/N_test, Mean_PH/N_test
###Output
_____no_output_____
###Markdown
Run Ensemble in the test set
###Code
%%time
#Compute Ensemble
ensemble = 50
# Initialize the arrays
res_Gr = np.zeros((ensemble,2))
res_BO = np.zeros((ensemble,2))
res_Gr_mmv = np.zeros((ensemble,2))
res_BO_mmv = np.zeros((ensemble,2))
res_Gr_wfv = np.zeros((ensemble,2))
res_BO_wfv = np.zeros((ensemble,2))
res_Gr_kfv = np.zeros((ensemble,2))
res_BO_kfv = np.zeros((ensemble,2))
res_Gr_mv = np.zeros((ensemble,2))
res_BO_mv = np.zeros((ensemble,2))
res_Gr_kfo = np.zeros((ensemble,2))
res_BO_kfo = np.zeros((ensemble,2))
res_Gr_wfc = np.zeros((ensemble,2))
res_BO_wfc = np.zeros((ensemble,2))
for i in range(ensemble):
print('Ensemble :',i+1)
# Win and W generation
seed= i+1
rnd = np.random.RandomState(seed)
Win = np.zeros((dim+1, N_units))
for j in range(N_units):
Win[rnd.randint(0, dim+1),j] = rnd.uniform(-1, 1) #only one element different from zero per row
# practical way to set the sparseness
W = rnd.uniform(-1, 1, (N_units, N_units)) * (rnd.rand(N_units, N_units) < (1-sparseness))
spectral_radius = np.max(np.abs(np.linalg.eigvals(W)))
W /= spectral_radius #scaled to have unitary spec radius
#Compute the performance in the test set for the best hyperparameters
temp = f(Min_50G[i,:2])
res_Gr[i,0] = temp[0]
res_Gr[i,1] = temp[1]
temp = f(Min_25G[i,:2])
res_BO[i,0] = temp[0]
res_BO[i,1] = temp[1]
temp = f(Min_50G_mmv[i,:2])
res_Gr_mmv[i,0] = temp[0]
res_Gr_mmv[i,1] = temp[1]
temp = f(Min_25G_mmv[i,:2])
res_BO_mmv[i,0] = temp[0]
res_BO_mmv[i,1] = temp[1]
temp = f(Min_50G_wfv[i,:2])
res_Gr_wfv[i,0] = temp[0]
res_Gr_wfv[i,1] = temp[1]
temp = f(Min_25G_wfv[i,:2])
res_BO_wfv[i,0] = temp[0]
res_BO_wfv[i,1] = temp[1]
temp = f(Min_50G_kfv[i,:2])
res_Gr_kfv[i,0] = temp[0]
res_Gr_kfv[i,1] = temp[1]
temp = f(Min_25G_kfv[i,:2])
res_BO_kfv[i,0] = temp[0]
res_BO_kfv[i,1] = temp[1]
temp = f(Min_50G_kfo[i,:2])
res_Gr_kfo[i,0] = temp[0]
res_Gr_kfo[i,1] = temp[1]
temp = f(Min_25G_kfo[i,:2])
res_BO_kfo[i,0] = temp[0]
res_BO_kfo[i,1] = temp[1]
temp = f(Min_50G_mv[i,:2])
res_Gr_mv[i,0] = temp[0]
res_Gr_mv[i,1] = temp[1]
temp = f(Min_25G_mv[i,:2])
res_BO_mv[i,0] = temp[0]
res_BO_mv[i,1] = temp[1]
temp = f(Min_50G_wfc[i,:2])
res_Gr_wfc[i,0] = temp[0]
res_Gr_wfc[i,1] = temp[1]
temp = f(Min_25G_wfc[i,:2])
res_BO_wfc[i,0] = temp[0]
res_BO_wfc[i,1] = temp[1]
#Save Results
fln = './data/Short_Lorenz_PostProc' + str(string)
hf = h5py.File(fln,'w')
hf.create_dataset('res_Gr ',data=res_Gr)
hf.create_dataset('res_BO ',data=res_BO)
hf.create_dataset('res_Gr_mmv ',data=res_Gr_mmv)
hf.create_dataset('res_BO_mmv ',data=res_BO_mmv)
hf.create_dataset('res_Gr_mv ',data=res_Gr_mv)
hf.create_dataset('res_BO_mv ',data=res_BO_mv)
hf.create_dataset('res_Gr_kfv ',data=res_Gr_kfv)
hf.create_dataset('res_BO_kfv ',data=res_BO_kfv)
hf.create_dataset('res_Gr_kfo ',data=res_Gr_kfo)
hf.create_dataset('res_BO_kfo ',data=res_BO_kfo)
hf.create_dataset('res_Gr_wfv ',data=res_Gr_wfv)
hf.create_dataset('res_BO_wfv ',data=res_BO_wfv)
hf.create_dataset('res_Gr_wfc ',data=res_Gr_wfc)
hf.create_dataset('res_BO_wfc ',data=res_BO_wfc)
hf.close()
###Output
_____no_output_____
###Markdown
Run Ensmeble in the test set with fixed hyperparametersWe compute the performance of the ensemble using the same hyperparameters for all the networks.The hyperparameters used are the ones obtained through Bayesian Optimization in the first network using chaotic Recycle Validation and chaotic K-Fold Validation.
###Code
%%time
model_informed = False
if model_informed:
%run ./Functions_MI.ipynb
#Compute Ensemble
ensemble = 50
# Initialize the arrays
res_BO_mmv_fix = np.zeros((ensemble,2))
res_BO_kfo_fix = np.zeros((ensemble,2))
for i in range(ensemble):
print('Ensemble :',i+1)
# Win and W generation
seed= i+1
rnd = np.random.RandomState(seed)
Win = np.zeros((dim+1, N_units))
for j in range(N_units):
Win[rnd.randint(0, dim+1),j] = rnd.uniform(-1, 1) #only one element different from zero per row
# practical way to set the sparseness
W = rnd.uniform(-1, 1, (N_units, N_units)) * (rnd.rand(N_units, N_units) < (1-sparseness))
spectral_radius = np.max(np.abs(np.linalg.eigvals(W)))
W /= spectral_radius #scaled to have unitary spec radius
#Compute the performance in the test set for the best hyperparameters
temp = f(Min_25G_mmv[0,:2])
res_BO_mmv_fix[i,0] = temp[0]
res_BO_mmv_fix[i,1] = temp[1]
temp = f(Min_25G_kfo[0,:2])
res_BO_kfo_fix[i,0] = temp[0]
res_BO_kfo_fix[i,1] = temp[1]
#Save Results
fln = './data/Short_Lorenz_PostProc_fix.h5'
hf = h5py.File(fln,'w')
hf.create_dataset('res_BO_mmv_fix',data=res_BO_mmv_fix)
hf.create_dataset('res_BO_kfo_fix',data=res_BO_kfo_fix)
hf.close()
###Output
_____no_output_____
###Markdown
Comparison SSV in validation and 12 to 15 LTs
###Code
%%time
from skopt.plots import plot_convergence
# We use only one interval for the test set (12 to 15LTs) to compute the MSE
# Test1 is implemented in Functions.ipynb
#Compute Ensemble
ensemble = 50
#Computing
res_Gr1 = np.zeros((ensemble,1))
res_BO1 = np.zeros((ensemble,1))
for i in range(ensemble):
# Win and W generation
seed =i+1
rnd = np.random.RandomState(seed)
Win = np.zeros((dim+1, N_units))
for j in range(N_units):
Win[rnd.randint(0, dim+1),j] = rnd.uniform(-1, 1)
W = rnd.uniform(-1, 1, (N_units, N_units)) * (rnd.rand(N_units, N_units) < (1-sparseness))
spectral_radius = np.max(np.abs(np.linalg.eigvals(W)))
W /= spectral_radius
#
temp = Test1(Min_50G[i,:2])
res_Gr1[i,0] = 10**temp
temp = Test1(Min_25G[i,:2])
res_BO1[i,0] = 10**temp
fln = './data/Short_Lorenz_PostProc2.h5'
hf = h5py.File(fln,'w')
hf.create_dataset('res_Gr1 ',data=res_Gr1)
hf.create_dataset('res_BO1 ',data=res_BO1)
hf.create_dataset('Min_25G ',data=10**Min_25G)
hf.create_dataset('Min_50G ',data=10**Min_50G)
hf.close()
## Model-informed
###Output
_____no_output_____ |
geo/.ipynb_checkpoints/options-first-run-geocode-checkpoint.ipynb | ###Markdown
Full geocode re-run
###Code
df=pd.read_csv('magyar.csv',sep='|')
from pygeocoder import Geocoder
apikey='AIzaSyB7joM_loHFb1SYFJevWfMmBCD9VO2uykc'
df.columns
hun_geo={}
errors=[]
for i in df.index:
falu=df.loc[i]['falu']
megye=str(df.loc[i]['megye'])
if falu=='Mihai Viteazu, Alsó- és Felsőszentmihály ':
falu='Felsőszentmihály'
megye='Felsőszentmihály, CJ'
if megye=='Oras intorsura ':megye='Oras intorsura, CV'
if megye=='Municipiul Brasov':megye='Municipiul Brasov, BV'
if megye=='Municipiul Resita CS':megye='Municipiul Resita, CS'
if megye=='Sanmihaiu de ':megye='Sanmihaiu de Campie, BN'
if falu not in ['Buzaului, CV','Csegőd + Horindzsapuszta + Irtáspuszta','I. G. Duca = Plopi',
'meggyesi rét','Parau-tepeseni, Bicazu Ardelean ','Ardelean, NT','Campie, BN',
'Sajómagyarósi völgy','Gábor Áron vasüzem']:
if megye not in ['Mihai Viteazu, CJ']:
kozseg, m=megye.split(',')
hun=df.loc[i]['hun']
to_geo=falu+', '+megye+', Romania'
if hun not in hun_geo:
try:
coords=Geocoder(apikey).geocode(to_geo).coordinates
except:
errors.append(df.loc[i])
print(i)
hun_geo[hun]={'coords':coords,'falu':falu,'kozseg':kozseg, 'megye':m}
open('hun_geo.json','w').write(json.dumps(hun_geo))
import numpy as np
for i in hun_geo:
for j in ['falu', 'kozseg', 'megye']:
hun_geo[i][j]=hun_geo[i][j].strip()
huncoords={}
for i in hun_geo:
huncoords[str(i).strip()]=hun_geo[i]['coords']
open('huncoords.json','w').write(json.dumps(huncoords))
def get_megye(cd):
megyek={'HR':'Hargita',
'CV':'Kovászna',
'MS':'Maros',
'BV':'Brassó',
'NT':'Neamț'}
if cd in megyek:return megyek[cd]
else:return 'Más megye'
m_geo={}
for i in hun_geo:
megye=get_megye(hun_geo[i]['megye'])
if megye not in m_geo:m_geo[megye]=[]
m_geo[megye].append(str(i).strip())
for i in m_geo:
m_geo[i]=np.sort(m_geo[i])
m_geo.keys()
s=''
for m in ['Hargita', 'Kovászna', 'Maros', 'Brassó', 'Neamț', 'Más megye']:
s+='\n'+m
for f in m_geo[m]:
s+='\n '+f
s+=' Más település'
open('droplist.txt','w',encoding='utf8').write(s)
s=open('droplist.txt','r',encoding='utf8').read()
megyen=''
megyek={}
for i in s.split('\n'):
if len(i)>0:
if (' ')!=i[0]:
megyen=i
megyek[i.strip()]=megyen
open('megyek.json','w').write(json.dumps(megyek))
###Output
_____no_output_____ |
slides/2_21/read-write.ipynb | ###Markdown
File I/O Save & Load NDArray
###Code
from mxnet import nd
from mxnet.gluon import nn
x = nd.arange(4)
nd.save('x-file', x)
x2 = nd.load('x-file')
x2
###Output
_____no_output_____
###Markdown
Save & Load a List of Arrays
###Code
y = nd.zeros(4)
nd.save('x-files', [x, y])
x2, y2 = nd.load('x-files')
(x2, y2)
###Output
_____no_output_____
###Markdown
Save & Load a Dictionary of Arrays
###Code
mydict = {'x': x, 'y': y}
nd.save('mydict', mydict)
mydict2 = nd.load('mydict')
mydict2
###Output
_____no_output_____
###Markdown
Gluon Model Parameters
###Code
class MLP(nn.Block):
def __init__(self, **kwargs):
super(MLP, self).__init__(**kwargs)
self.hidden = nn.Dense(256, activation='relu')
self.output = nn.Dense(10)
def forward(self, x):
return self.output(self.hidden(x))
net = MLP()
net.initialize()
x = nd.random.uniform(shape=(2, 20))
y = net(x)
###Output
_____no_output_____
###Markdown
Save
###Code
net.save_parameters('mlp.params')
###Output
_____no_output_____
###Markdown
Load
###Code
clone = MLP()
clone.load_parameters('mlp.params')
yclone = clone(x)
yclone == y
###Output
_____no_output_____ |
examples/NetworkX Example.ipynb | ###Markdown
Undirected graph
###Code
G = nx.complete_graph(5)
undirected = ipycytoscape.CytoscapeWidget()
undirected.graph.add_graph_from_networkx(G)
undirected
###Output
_____no_output_____
###Markdown
You can also add more nodes The above graph should update when you run the next cell
###Code
G2 = nx.Graph()
G2.add_node('separate node 1')
G2.add_node('separate node 2')
G2.add_edge('separate node 1', 'separate node 2')
undirected.graph.add_graph_from_networkx(G2)
###Output
_____no_output_____
###Markdown
Fully directed graphs`add_graph_from_networkx` takes an argument `directed` that if True will ensure all edges given the directed class, which will style them with an arrow.
###Code
G = nx.complete_graph(5)
directed = ipycytoscape.CytoscapeWidget()
directed.graph.add_graph_from_networkx(G, directed=True)
directed
###Output
_____no_output_____
###Markdown
Mixed graphsYou can also make graphs with both directed and undirected edges by adding 'directed' to the 'classes' attribute of the edge data
###Code
from random import random
G = nx.complete_graph(5)
for s, t, data in G.edges(data=True):
if random() > .5:
G[s][t]['classes'] = 'directed'
mixed = ipycytoscape.CytoscapeWidget()
mixed.graph.add_graph_from_networkx(G)
mixed
###Output
_____no_output_____
###Markdown
Custom networkx Node ObjectsThe most common choices for Nodes in networkx are numbers or strings as shown above. A node can also be any hashable object (except None) which work as well.
###Code
class Node:
def __init__(self, name):
self.name = name
def __str__(self):
return "Node: " + str(self.name)
n1 = Node("node 1")
n2 = Node("node 2")
G = nx.Graph()
G.add_node(n1)
G.add_node(n2)
G.add_edge(n1, n2)
w = ipycytoscape.CytoscapeWidget()
w.graph.add_graph_from_networkx(G)
w
###Output
_____no_output_____
###Markdown
Custom networkx Node Objects that inherit from ipycytoscape.NodeWhile custom networkx Node objects work, they do not allow as much control over formatting as you may need. The easiest way to achieve customization with custom Node objects is to subclass ipycytoscape.Node as show below.
###Code
class CustomNode(ipycytoscape.Node):
def __init__(self, name, classes=''):
super().__init__()
self.data['id'] = name
self.classes = classes
n1 = CustomNode("node 1", classes='class1')
n2 = CustomNode("node 2", classes='class2')
G = nx.Graph()
G.add_node(n1)
G.add_node(n2)
G.add_edge(n1, n2)
custom_inherited = ipycytoscape.CytoscapeWidget()
custom_inherited.graph.add_graph_from_networkx(G)
custom_inherited.set_style([
{
'selector': 'node.class1',
'css': {
'background-color': 'red'
}
},
{
'selector': 'node.class2',
'css': {
'background-color': 'green'
}
}])
custom_inherited
###Output
_____no_output_____
###Markdown
NetworkX
###Code
import ipycytoscape
import ipywidgets as widgets
import networkx as nx
###Output
_____no_output_____
###Markdown
Undirected graph
###Code
G = nx.complete_graph(5)
undirected = ipycytoscape.CytoscapeWidget()
undirected.graph.add_graph_from_networkx(G)
display(undirected)
###Output
_____no_output_____
###Markdown
You can also add more nodes The above graph should update when you run the next cell
###Code
G2 = nx.Graph()
G2.add_node('separate node 1')
G2.add_node('separate node 2')
G2.add_edge('separate node 1', 'separate node 2')
undirected.graph.add_graph_from_networkx(G2)
###Output
_____no_output_____
###Markdown
Fully directed graphs`add_graph_from_networkx` takes an argument `directed` that if True will ensure all edges given the directed class, which will style them with an arrow.
###Code
G = nx.complete_graph(5)
directed = ipycytoscape.CytoscapeWidget()
directed.graph.add_graph_from_networkx(G, directed=True)
directed
###Output
_____no_output_____
###Markdown
Mixed graphsYou can also make graphs with both directed and undirected edges by adding 'directed' to the 'classes' attribute of the edge data
###Code
from random import random
G = nx.complete_graph(5)
for s, t, data in G.edges(data=True):
if random() > .5:
G[s][t]['classes'] = 'directed'
mixed = ipycytoscape.CytoscapeWidget()
mixed.graph.add_graph_from_networkx(G)
mixed
###Output
_____no_output_____
###Markdown
Custom networkx Node ObjectsThe most common choices for Nodes in networkx are numbers or strings as shown above. A node can also be any hashable object (except None) which work as well.
###Code
class Node:
def __init__(self, name):
self.name = name
def __str__(self):
return "Node: " + str(self.name)
n1 = Node("node 1")
n2 = Node("node 2")
G = nx.Graph()
G.add_node(n1)
G.add_node(n2)
G.add_edge(n1, n2)
w = ipycytoscape.CytoscapeWidget()
w.graph.add_graph_from_networkx(G)
w
###Output
_____no_output_____
###Markdown
Custom networkx Node Objects that inherit from ipycytoscape.NodeWhile custom networkx Node objects work, they do not allow as much control over formatting as you may need. The easiest way to achieve customization with custom Node objects is to subclass ipycytoscape.Node as show below.
###Code
class CustomNode(ipycytoscape.Node):
def __init__(self, name, classes=''):
super().__init__()
self.data['id'] = name
self.classes = classes
n1 = CustomNode("node 1", classes='class1')
n2 = CustomNode("node 2", classes='class2')
G = nx.Graph()
G.add_node(n1)
G.add_node(n2)
G.add_edge(n1, n2)
custom_inherited = ipycytoscape.CytoscapeWidget()
custom_inherited.graph.add_graph_from_networkx(G)
custom_inherited.set_style([
{
'selector': 'node.class1',
'css': {
'background-color': 'red'
}
},
{
'selector': 'node.class2',
'css': {
'background-color': 'green'
}
}])
custom_inherited
###Output
_____no_output_____
###Markdown
NetworkX
###Code
import ipycytoscape
import ipywidgets as widgets
import networkx as nx
###Output
_____no_output_____
###Markdown
Undirected graph
###Code
G = nx.complete_graph(5)
undirected = ipycytoscape.CytoscapeWidget()
undirected.graph.add_graph_from_networkx(G)
display(undirected)
###Output
_____no_output_____
###Markdown
You can also add more nodes The above graph should update when you run the next cell
###Code
G2 = nx.Graph()
G2.add_node('separate node 1')
G2.add_node('separate node 2')
G2.add_edge('separate node 1', 'separate node 2')
undirected.graph.add_graph_from_networkx(G2)
###Output
_____no_output_____
###Markdown
Fully directed graphs`add_graph_from_networkx` takes an argument `directed` that if True will ensure all edges given the directed class, which will style them with an arrow.
###Code
G = nx.complete_graph(5)
directed = ipycytoscape.CytoscapeWidget()
directed.graph.add_graph_from_networkx(G, directed=True)
directed
###Output
_____no_output_____
###Markdown
Mixed graphsYou can also make graphs with both directed and undirected edges by adding 'directed' to the 'classes' attribute of the edge data
###Code
from random import random
G = nx.complete_graph(5)
for s, t, data in G.edges(data=True):
if random() > .5:
G[s][t]['classes'] = 'directed'
mixed = ipycytoscape.CytoscapeWidget()
mixed.graph.add_graph_from_networkx(G)
mixed
###Output
_____no_output_____
###Markdown
Custom networkx Node ObjectsThe most common choices for Nodes in networkx are numbers or strings as shown above. A node can also be any hashable object (except None) which work as well.
###Code
class Node:
def __init__(self, name):
self.name = name
def __str__(self):
return "Node: " + str(self.name)
n1 = Node("node 1")
n2 = Node("node 2")
G = nx.Graph()
G.add_node(n1)
G.add_node(n2)
G.add_edge(n1, n2)
w = ipycytoscape.CytoscapeWidget()
w.graph.add_graph_from_networkx(G)
w
###Output
_____no_output_____
###Markdown
Custom networkx Node Objects that inherit from ipycytoscape.NodeWhile custom networkx Node objects work, they do not allow as much control over formatting as you may need. The easiest way to achieve customization with custom Node objects is to subclass ipycytoscape.Node as show below.
###Code
class CustomNode(ipycytoscape.Node):
def __init__(self, name, classes=''):
super().__init__()
self.data['id'] = name
self.classes = classes
n1 = CustomNode("node 1", classes='class1')
n2 = CustomNode("node 2", classes='class2')
G = nx.Graph()
G.add_node(n1)
G.add_node(n2)
G.add_edge(n1, n2)
custom_inherited = ipycytoscape.CytoscapeWidget()
custom_inherited.graph.add_graph_from_networkx(G)
custom_inherited.set_style([
{
'selector': 'node.class1',
'css': {
'background-color': 'red'
}
},
{
'selector': 'node.class2',
'css': {
'background-color': 'green'
}
}])
custom_inherited
###Output
_____no_output_____
###Markdown
Undirected graph
###Code
G = nx.complete_graph(5)
undirected = ipycytoscape.CytoscapeWidget()
undirected.graph.add_graph_from_networkx(G)
undirected
###Output
_____no_output_____
###Markdown
You can also add more nodes The above graph should update when you run the next cell
###Code
G2 = nx.Graph()
G2.add_node('separate node 1')
G2.add_node('separate node 2')
G2.add_edge('separate node 1', 'separate node 2')
undirected.graph.add_graph_from_networkx(G2)
###Output
_____no_output_____
###Markdown
Fully directed graphs`add_graph_from_networkx` takes an argument `directed` that if True will ensure all edges given the directed class, which will style them with an arrow.
###Code
G = nx.complete_graph(5)
directed = ipycytoscape.CytoscapeWidget()
directed.graph.add_graph_from_networkx(G, directed=True)
directed
###Output
_____no_output_____
###Markdown
Mixed graphsYou can also make graphs with both directed and undirected edges by adding 'directed' to the 'classes' attribute of the edge data
###Code
from random import random
G = nx.complete_graph(5)
for s, t, data in G.edges(data=True):
if random() > .5:
G[s][t]['classes'] = 'directed'
mixed = ipycytoscape.CytoscapeWidget()
mixed.graph.add_graph_from_networkx(G)
mixed
###Output
_____no_output_____ |
testing/valid/Y2019M09D12_RH_Generate_Random_Valid_Addresses_V01.ipynb | ###Markdown
Generate a .csv file with an arbitrary number of valid addresses.
###Code
!pip install geopy geopandas
!apt install python3-rtree
import getpass
import datetime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import geopandas as gpd
from geopy.geocoders import GoogleV3
initiate = True
if initiate:
api_key = getpass.getpass("Google API key: ")
else:
pass
# We do not know how many locations are randomly sampled on land.
# Appr. 1/3 of the world is land and we exclude antarctica. Therefor we multiply
# by 6 (safe side).
desired_sample_size = 100
sample_size = 6*desired_sample_size
def scale_lat(rand):
return rand*180-90
def scale_lon(rand):
return rand*360-180
def get_random_dataframe(sample_size):
lats = list(map(scale_lat,np.random.rand(sample_size)))
lons = list(map(scale_lon,np.random.rand(sample_size)))
df = pd.DataFrame(data={"lat":lats,"lon":lons})
gdf = gpd.GeoDataFrame(
df, geometry=gpd.points_from_xy(df.lon, df.lat))
gdf.crs = {'init':'epsg:4326'}
return df, gdf
geolocator = GoogleV3(api_key=api_key)
def reverse_geocode(row):
location = geolocator.reverse([row.lat, row.lon],exactly_one=True)
if location:
address = location.address #use most specific address, see https://developers.google.com/maps/documentation/geocoding/intro#reverse-example
else:
address = np.nan
return address
def mask_ocean(gdf):
world = gpd.read_file(
gpd.datasets.get_path('naturalearth_lowres')
)
gdf_land = gpd.sjoin(gdf, world, how="inner", op='intersects')
gdf_land = gdf_land.loc[gdf_land["continent"] != 'Antarctica']
print(gdf_land.shape)
return gdf_land
def export_address(gdf):
gdf["id"] = gdf_land.index
gdf["location_name"] = "test"
df = gdf[["id","location_name","address"]]
datetime_string = datetime.datetime.now().isoformat()
filename = "export_address_{}_{}_V01.csv".format(desired_sample_size,datetime_string)
df = df[0:desired_sample_size]
df.to_csv(filename,index=False,encoding="UTF-8")
print("addresses exported to: {}".format(filename))
return df
def export_coords(gdf):
gdf["id"] = gdf_land.index
gdf["location_name"] = "test"
df = gdf[["id","location_name","lat","lon"]]
datetime_string = datetime.datetime.now().isoformat()
filename = "export_coords_{}_{}_V01.csv".format(desired_sample_size,datetime_string)
df = df[0:desired_sample_size]
df.to_csv(filename,index=False,encoding="UTF-8")
print("coords exported to: {}".format(filename))
return df
df, gdf = get_random_dataframe(sample_size)
gdf_land = mask_ocean(gdf)
gdf_land["address"]= gdf_land.apply(reverse_geocode,axis=1)
df_address = export_address(gdf_land)
df_coords = export_coords(gdf_land)
world = gpd.read_file(
gpd.datasets.get_path('naturalearth_lowres')
)
ax = plt.subplot(1, 1, 1)
ax = world.plot(ax=ax)
ax = gdf_land.plot(ax=ax,color="red", alpha=0.5)
###Output
_____no_output_____ |
notebooks/2018-10-29-many-experiments-multistep-Q2-estimates.ipynb | ###Markdown
Methods for improving Q2 estiamtesWe have seen that the multiples step training procedure introduces significant errors in the upper atmosphere. These result from the scheme trying to undo the errors it makes in early time steps. I believe this is mainly a result of the humidity in the upper atmosphere being forced to zero Weight first prediction step moreThis helps somewhat but there are still large errors in the upper atmosphere. **All of the following techniques include this** Multiply negative humidity by 100 in the loss functionThis appears to introduce positive bias everywhere, but it is still helpful. Humidity loss is $q^{1/4}$Changing the humidity loss function to $(q^{1/4}-q_{true}^{.25}$ does reduce errors in the upper atmosphere, at the cost of errors in the lower atmosphere. Also, this solution blows up. Predict $q_T f(x)$Computing the product rather than the sum helps ensure that there are fewer negative moisture points. Unforunately, the strong drying at the tropopause that we see in many other cases is also present. Perhaps, I should also compute FQT using log. Adding random noise after every time stepThis can be viewed as a form of regularization. For now, I am adding normal random noise with a standard deviation of 5 [units]/day to both SLI and QT at all heights.The loss function is significantly worse when using this form of regularization, which is what we want I think.Ultimately, even with random noise the solutions seem to converge to the strong drying at the tropopause. Increase the noise to 10 Random noise + q * fLet's try adding random noise along with the Q * F prediction.I think this is best approach I have tried so far. Try with seq_length=10Now, that I have tried it with seq_length=3, let's try it with seq_length=10.With longer prediction interval this method seems to underfit dramatically. Same as bove but smaller noise.RunID=129Make the noise 1.0...I can't make this stable for whatever reason. No decay with time. seq_length=10, noise size=1, no decayVery strong bias in upper atmos.. **no good** Add penalty to loss functionAlso penalize the error in Q2 made between the schemes. There are a couple ways to do this:1. penalize the Q2 computed for the full predicted time series $\tilde{x}$ (this approach doesn't work well. No pictures unforunately! Different loss decay schedule (Runid = 132)$loss = \sum_i c_i loss_i $. $c = [10.0, r^i,...]$. The first point has ten times the loss of the next point, and then it decays at a constant rate.
###Code
from src.sacred import get_run
get_run(132)['config']
###Output
_____no_output_____
###Markdown
This was run with a sequence length of 10. Let's see how a single column model simulation does.
###Code
import torch
from uwnet.columns import single_column_simulation
ds = xr.open_dataset("../data/processed/training.nc")
def scm_plot(path):
model = torch.load(path)
location = ds.isel(x=slice(0,1), y=slice(32,33))
scm_data = single_column_simulation(model, location, interval=(0, 190))
# scm_data.QT.squeeze().T.plot.contourf()
return scm_data, model
def water_budget_plots(path):
scm_data, model = scm_plot(path)
merged_pred_data = location.rename({'SLI': 'SLIOBS', 'QT': 'QTOBS'}).merge(scm_data, join='inner')
output = model.call_with_xr(merged_pred_data)
plt.figure()
output.QT.plot(x='time')
plt.title("FQTNN from NN-predicted OBS")
output_truth = model.call_with_xr(location)
plt.figure()
output_truth.QT.plot(x='time')
plt.title("FQTNN from Truth")
plt.xlim([100, 125])
plt.figure()
(scm_data.QT * ds.layer_mass/1000).sum('z').plot()
(location.QT * ds.layer_mass/1000).sum('z').plot(label='observed')
plt.legend()
water_budget_plots("../models/132/4.pkl")
###Output
_____no_output_____
###Markdown
The problem is that there is a long term drift in the humidity in the single column simulation. Decay loss with straight r^kThis seems to worsen the results. The solutions oscillate between too much moistening and too much drying Epoch 5
###Code
water_budget_plots("../models/133/5.pkl")
###Output
_____no_output_____
###Markdown
Epoch 6
###Code
water_budget_plots("../models/133/6.pkl")
###Output
_____no_output_____
###Markdown
Epoch 8
###Code
water_budget_plots("../models/133/8.pkl")
###Output
_____no_output_____ |
10-KubernetesAPI.ipynb | ###Markdown
Kubernetes API
###Code
import pandas as pd
from kubernetes import client, config
config.load_incluster_config()
pod_data = []
v1 = client.CoreV1Api()
pod_list = v1.list_namespaced_pod("data-lab")
for pod in pod_list.items:
pod_data.append([
pod.metadata.name,
pod.status.phase,
pod.metadata.creation_timestamp])
pd.DataFrame(pod_data, columns=["name", "Status", "Created"])
###Output
_____no_output_____ |
Python/timeseries/.ipynb_checkpoints/plot_timeseries_bouldertemp-checkpoint.ipynb | ###Markdown
Timeseries - Boulder temperature measurements
###Code
# By line: RRB 2020-07-28
# Script aims to:
# - Load multiple txt file
# - Calculate month averages from high time resolution
# - Plot timeseries from a plot function
# - Seasonal cycles
# - Diurnal cycles
# - Overplot years
###Output
_____no_output_____
###Markdown
Load python packages
###Code
import pandas as pd
from pandas.tseries.offsets import DateOffset
import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import griddata
import datetime
from pathlib import Path # System agnostic paths
###Output
_____no_output_____
###Markdown
Create reusable functions
###Code
# Load measurement file and perform monthly averages (files need to all have same structure)
def load_temp(filename,header_num,delimiter):
data = pd.read_csv(filename,header=header_num,low_memory=False,na_values=-999.0, delimiter=delimiter)
meas_date = pd.to_datetime(data['Date(dd:mm:yyyy)'],format='%d:%m:%Y')
meas_date.name = 'date'
meas_var = data['AOD_500nm']
meas_var.index = meas_date
meas_var_month = meas_var.resample('M',loffset=pd.Timedelta(-15, 'd')).mean()
return meas_var_month
# Re-usable plotting call
def ts_plot(time_arr,val_arr,color_choice,label_string):
plt.plot(time_arr, val_arr, '-ok', label=label_string,
color=color_choice,
markersize=8, linewidth=3,
markerfacecolor=color_choice,
markeredgecolor='grey',
markeredgewidth=1)
# Month averages for the whole timeset
def cal_clim_month(pandas_DataArray):
clim_cyc = pd.DataFrame(np.zeros((12, 2)),columns=['Clim_avg','Clim_sd'])
num_obs = clim_cyc.shape[0]
#calculate climatological month values
for i in range(num_obs):
clim_cyc['Clim_avg'][i] = np.nanmean(pandas_DataArray[pandas_DataArray.index.month == i+1])
clim_cyc['Clim_sd'][i] = np.nanstd(pandas_DataArray[pandas_DataArray.index.month == i+1])
clim_cyc.index = np.arange(1, 13, step=1)
return clim_cyc
###Output
_____no_output_____
###Markdown
Load measurements and create month averages
###Code
# from https://psl.noaa.gov/boulder/#climo
result_dir = Path("../../data/")
temp_file = 'boulderdaily.complete.txt'
#boulder_data = load_temp(str(result_dir/temp_file),1,"\t")
df = pd.read_csv(str(result_dir/temp_file), header=1, delimiter = "\t")
boulder_data
###Output
_____no_output_____
###Markdown
Plot the value versus time.
###Code
plt.figure(figsize=(20,8))
ax = plt.axes()
ts_plot(boulder_aod.index,boulder_aod,'blue','Boulder AOD')
ts_plot(tblmnt_aod.index,tblmnt_aod,'red','Table Mountain AOD')
ts_plot(cvalla_aod.index,cvalla_aod,'green','NEON:Cvalla AOD')
ts_plot(cper_aod.index,cper_aod,'grey','NEON:CPER AOD')
ts_plot(ster_aod.index,ster_aod,'black','NEON:Sterling AOD')
ts_plot(rmnp_aod.index,rmnp_aod,'purple','NEON:RMNP AOD')
# axes format
plt.xticks(fontsize=18)
ax.set_ylim(0, 0.4)
plt.yticks(np.arange(0, 0.45, step=0.05), fontsize=18)
# adjust border
ax.spines["left"].set_linewidth(2.5)
ax.spines["bottom"].set_linewidth(2.5)
ax.spines["right"].set_visible(False)
ax.spines["top"].set_visible(False)
#titles
plt.title('AOD',fontsize=24)
plt.xlabel('CO (ppb)',fontsize=18)
plt.ylabel('AOD 500 nm',fontsize=18)
# legend
plt.legend(bbox_to_anchor=(0.28, 0.78),loc='lower right')
plt.show()
###Output
_____no_output_____
###Markdown
Seasonal Cycles
###Code
#boulder_clim = cal_clim_month(boulder_aod['2001':'2011'])
boulder_clim = cal_clim_month(boulder_aod['2012':'2016'])
tblmnt_clim = cal_clim_month(tblmnt_aod['2012':'2016'])
cvalla_clim = cal_clim_month(cvalla_aod)
cper_clim = cal_clim_month(cper_aod)
ster_clim = cal_clim_month(ster_aod)
rmnp_clim = cal_clim_month(rmnp_aod)
plt.figure(figsize=(20,8))
ax = plt.axes()
ts_plot(boulder_clim.index,boulder_clim['Clim_avg'],'blue','Boulder AOD')
plt.fill_between(boulder_clim.index,boulder_clim['Clim_avg'] - boulder_clim['Clim_sd'],
boulder_clim['Clim_avg'] + boulder_clim['Clim_sd'],
color='blue', alpha=0.1)
ts_plot(tblmnt_clim.index,tblmnt_clim['Clim_avg'],'red','Table Mountain AOD')
plt.fill_between(tblmnt_clim.index,tblmnt_clim['Clim_avg'] - tblmnt_clim['Clim_sd'],
tblmnt_clim['Clim_avg'] + tblmnt_clim['Clim_sd'],
color='red', alpha=0.1)
ts_plot(cvalla_clim.index,cvalla_clim['Clim_avg'],'green','NEON:Cvalla AOD')
plt.fill_between(cvalla_clim.index,cvalla_clim['Clim_avg'] - cvalla_clim['Clim_sd'],
cvalla_clim['Clim_avg'] + cvalla_clim['Clim_sd'],
color='green', alpha=0.1)
ts_plot(cper_clim.index,cper_clim['Clim_avg'],'grey','NEON:CPER AOD')
plt.fill_between(cper_clim.index,cper_clim['Clim_avg'] - cper_clim['Clim_sd'],
cper_clim['Clim_avg'] + cper_clim['Clim_sd'],
color='grey', alpha=0.1)
ts_plot(ster_clim.index,ster_clim['Clim_avg'],'black','NEON:Sterling AOD')
plt.fill_between(ster_clim.index,ster_clim['Clim_avg'] - ster_clim['Clim_sd'],
ster_clim['Clim_avg'] + ster_clim['Clim_sd'],
color='black', alpha=0.1)
ts_plot(rmnp_clim.index,rmnp_clim['Clim_avg'],'purple','NEON:RMNP AOD')
plt.fill_between(rmnp_clim.index,rmnp_clim['Clim_avg'] - rmnp_clim['Clim_sd'],
rmnp_clim['Clim_avg'] + rmnp_clim['Clim_sd'],
color='purple', alpha=0.1)
# axes format
plt.xticks(fontsize=18)
ax.set_ylim(0, 0.25)
plt.yticks(np.arange(0, 0.30, step=0.05), fontsize=18)
# adjust border
ax.spines["left"].set_linewidth(2.5)
ax.spines["bottom"].set_linewidth(2.5)
ax.spines["right"].set_visible(False)
ax.spines["top"].set_visible(False)
#titles
plt.title('Seasonal AOD',fontsize=24)
plt.xlabel('CO (ppb)',fontsize=18)
plt.ylabel('AOD 500 nm',fontsize=18)
# legend
plt.legend(bbox_to_anchor=(0.28, 0.68),loc='lower right')
plt.show()
###Output
_____no_output_____
###Markdown
Overplot years
###Code
year_array = np.arange(2002, 2016, step=1)
year_str = year_array.astype(str)
num_year = year_array.shape[0]
clim_cyc = pd.DataFrame(np.zeros((12, num_year)),columns=year_array.astype(str))
for y in year_array:
# note: range of data needs to fully cover the year of selection (even if nan)
clim_cyc[str(y)][:] = boulder_aod[str(y)]
#clim_cyc
month_names = ['Jan','Feb','Mar','Apr','May','Jun',
'Jul','Aug','Sep','Oct','Nov','Dec']
plt.figure(figsize=(20,8))
ax = plt.axes()
# Different colors for different lines
# colormaps at https://matplotlib.org/3.1.1/gallery/color/colormap_reference.html
colormap = plt.cm.viridis
colors = [colormap(i) for i in np.linspace(0, 1,num_year)]
for y in year_array:
plt.plot(month_names, clim_cyc[str(y)], '-ok', label=year_str[y-2002],
color=colors[y-2002], markerfacecolor=colors[y-2002], markersize=8,
linewidth=3, markeredgecolor='grey',
markeredgewidth=1)
# axes format
plt.xticks(fontsize=18)
ax.set_ylim(0, 0.35)
plt.yticks(np.arange(0, 0.40, step=0.05), fontsize=18)
# adjust border
ax.spines["left"].set_linewidth(2.5)
ax.spines["bottom"].set_linewidth(2.5)
ax.spines["right"].set_visible(False)
ax.spines["top"].set_visible(False)
#titles
plt.title('Month Average AOD',fontsize=24)
plt.xlabel('CO (ppb)',fontsize=18)
plt.ylabel('AOD 500 nm',fontsize=18)
# legend
plt.legend(bbox_to_anchor=(0.18, 0.48),loc='lower right')
plt.show()
###Output
_____no_output_____ |
notebooks/MScThesis/qyolo_finn.ipynb | ###Markdown
Imports
###Code
import os
import shutil
import numpy as np
import torch
from qYOLO.qyolo import QTinyYOLOv2, YOLOout, readAnchors
###Output
_____no_output_____
###Markdown
Import Network
###Code
img_dir = "./../../Dataset/images"
lbl_dir = "./../../Dataset/labels"
weight_bit_width = 4
act_bit_width = 4
n_anchors = 5
n_epochs = 10
batch_size = 1
# anchors = readAnchors(f'./../../train_out/5_anchors_first_500.txt')
# print(anchors)
net = QTinyYOLOv2(n_anchors, weight_bit_width, act_bit_width)
net_path = f'./../../train_out/trained_net_W{weight_bit_width}A{act_bit_width}_a{n_anchors}.pth'
net.load_state_dict(torch.load(net_path))
###Output
_____no_output_____
###Markdown
FINN Imports
###Code
# FINN-Brevitas imports
# from brevitas.onnx import export_finn_onnx as exportONNX
from brevitas.onnx import export_brevitas_onnx as exportONNX
from brevitas.export.onnx.generic.manager import BrevitasONNXManager
# ONNX libraries
import onnx
import onnx.numpy_helper as nph
import onnxruntime as rt
# Network display methods - Netron
from finn.util.visualization import showInNetron
# FINN Network Preperation imports
from finn.core.modelwrapper import ModelWrapper
from finn.core.datatype import DataType
from qonnx.util.cleanup import cleanup_model
from finn.util.pytorch import ToTensor
from finn.transformation.qonnx.convert_qonnx_to_finn import ConvertQONNXtoFINN
from finn.transformation.merge_onnx_models import MergeONNXModels
from finn.transformation.streamline import Streamline
from finn.transformation.streamline.reorder import MoveScalarLinearPastInvariants, MakeMaxPoolNHWC, MoveTransposePastScalarMul, MoveMulPastDWConv
from finn.transformation.streamline.absorb import AbsorbTransposeIntoMultiThreshold, AbsorbConsecutiveTransposes, AbsorbSignBiasIntoMultiThreshold, AbsorbMulIntoMultiThreshold
from finn.transformation.general import ConvertDivToMul, RemoveUnusedTensors
from finn.transformation.lower_convs_to_matmul import LowerConvsToMatMul
from finn.transformation.infer_data_layouts import InferDataLayouts
from finn.transformation.make_input_chanlast import MakeInputChannelsLast
from finn.transformation.fpgadataflow.convert_to_hls_layers import InferThresholdingLayer, InferConvInpGen, InferChannelwiseLinearLayer, InferStreamingMaxPool, InferQuantizedStreamingFCLayer
from finn.transformation.fpgadataflow.create_dataflow_partition import CreateDataflowPartition
from finn.custom_op.registry import getCustomOp
# FINN build imports
from finn.builder.build_dataflow import DataflowBuildConfig, build_dataflow_cfg
import finn.builder.build_dataflow_config as build_cfg
from finn.transformation.fpgadataflow.make_deployment import DeployToPYNQ
###Output
_____no_output_____
###Markdown
Brevitas Export
###Code
net_onnx_path = f'./../../train_out/trained_net_W{weight_bit_width}A{act_bit_width}_a{n_anchors}.onnx'
showInNetron(net_onnx_path)
onnx_dir = f'./onnx_W{weight_bit_width}A{act_bit_width}_a{n_anchors}/'
os.makedirs(onnx_dir, exist_ok=True)
exportONNX(net, (1, 3, 640, 640), onnx_dir + "og_net.onnx")
# model = ModelWrapper(onnx_dir + "og_net.onnx")
model = ModelWrapper(net_onnx_path)
model = cleanup_model(model)
model = model.transform(ConvertQONNXtoFINN())
model.save(onnx_dir + "tidy_net.onnx")
showInNetron(onnx_dir + "tidy_net.onnx")
###Output
_____no_output_____
###Markdown
Networks Preperation Add Pre/Post-Processing
###Code
model = ModelWrapper(onnx_dir + "tidy_net.onnx")
# pre-processing
in_name = model.graph.input[0].name
in_shape = model.get_tensor_shape(in_name)
totensor = ToTensor()
exportONNX(totensor, in_shape, onnx_dir + "preproc_net.onnx")
pre_model = ModelWrapper(onnx_dir + "preproc_net.onnx")
model = model.transform(MergeONNXModels(pre_model))
in_name = model.graph.input[0].name
model.set_tensor_datatype(in_name, DataType["UINT8"])
# post-processing
# TODO - check if I can actually create the output layer
model = cleanup_model(model)
model = model.transform(ConvertQONNXtoFINN())
model.save(onnx_dir + "preproc_net.onnx")
showInNetron(onnx_dir + "preproc_net.onnx")
###Output
_____no_output_____
###Markdown
Streamline
###Code
model = ModelWrapper(onnx_dir + "preproc_net.onnx")
model = model.transform(MakeInputChannelsLast())
model = model.transform(MoveScalarLinearPastInvariants())
model = model.transform(Streamline())
model = model.transform(LowerConvsToMatMul())
model = model.transform(MakeMaxPoolNHWC())
model = model.transform(AbsorbTransposeIntoMultiThreshold())
model = model.transform(AbsorbConsecutiveTransposes())
model = model.transform(Streamline())
model = model.transform(InferDataLayouts())
model = model.transform(RemoveUnusedTensors())
model = cleanup_model(model)
model.save(onnx_dir + "streamline_net.onnx")
showInNetron(onnx_dir + "streamline_net.onnx")
###Output
_____no_output_____
###Markdown
Convert to HLS
###Code
model = ModelWrapper(onnx_dir + "streamline_net.onnx")
model = model.transform(InferConvInpGen())
model = model.transform(InferQuantizedStreamingFCLayer())
model = model.transform(InferStreamingMaxPool())
model = model.transform(InferThresholdingLayer())
model = cleanup_model(model)
model.save(onnx_dir + "hls_net.onnx")
showInNetron(onnx_dir + "hls_net.onnx")
###Output
_____no_output_____
###Markdown
Create Dataflow PartitionFailed to remove the final mul and transpose layer, so I just remove the during the partition
###Code
model = ModelWrapper(onnx_dir + "hls_net.onnx")
parent_model = model.transform(CreateDataflowPartition())
parent_model.save(onnx_dir + "parent_net.onnx")
showInNetron(onnx_dir + "parent_net.onnx")
parent_model = ModelWrapper(onnx_dir + "parent_net.onnx")
sdp_node = parent_model.get_nodes_by_op_type("StreamingDataflowPartition")[0]
sdp_node = getCustomOp(sdp_node)
model_filename = sdp_node.get_nodeattr("model")
model = ModelWrapper(model_filename)
model.rename_tensor(model.get_all_tensor_names()[-1], "global_out")
model = cleanup_model(model)
model.save(onnx_dir + "dataflow_net.onnx")
showInNetron(onnx_dir + "dataflow_net.onnx")
###Output
_____no_output_____
###Markdown
Folding
###Code
model = ModelWrapper(onnx_dir + "dataflow_net.onnx")
layers = model.get_finn_nodes()
names = model.get_all_tensor_names()
for i, layer in enumerate(layers):
temp_op = getCustomOp(layer)
print(f"CustomOp wrapper of {layer.name}:")
for item in temp_op.get_nodeattr_types():
print(f"{item}: {temp_op.get_nodeattr_types()[item]} = {temp_op.get_nodeattr(item)}")
print()
model = ModelWrapper(onnx_dir + "dataflow_net.onnx")
model.save(onnx_dir + "folded_net.onnx")
showInNetron(onnx_dir + "folded_net.onnx")
###Output
_____no_output_____
###Markdown
Hardware Build and Deployment Configs
###Code
auto_fifo_depths = False
board = "ZCU102"
fpga_part = "xczu9eg-ffvb1156-2-e"
clk_ns = 5.0
target_fps = 50
mvau_wwidth_max = 10000
default_mem_mode = 'constant' # 'constant' or 'decoupled'
###Output
_____no_output_____
###Markdown
Estimates
###Code
out_dir = onnx_dir + "hw_est_out"
#Delete previous run results if exist
if os.path.exists(out_dir):
shutil.rmtree(out_dir)
print("Previous run results deleted!")
cfg_estimates = DataflowBuildConfig(auto_fifo_depths= auto_fifo_depths,
board=board,
fpga_part=fpga_part,
mvau_wwidth_max=mvau_wwidth_max,
synth_clk_period_ns= clk_ns,
target_fps=target_fps,
output_dir=out_dir,
steps=build_cfg.estimate_only_dataflow_steps,
generate_outputs=[build_cfg.DataflowOutputType.ESTIMATE_REPORTS])
out_dir_fs = onnx_dir + "hw_est_fs_out"
#Delete previous run results if exist
if os.path.exists(out_dir_fs):
shutil.rmtree(out_dir_fs)
print("Previous run results deleted!")
cfg_estimates_fs = DataflowBuildConfig(auto_fifo_depths=auto_fifo_depths,
board=board,
fpga_part=fpga_part,
mvau_wwidth_max=mvau_wwidth_max,
synth_clk_period_ns= clk_ns,
target_fps=target_fps,
output_dir=out_dir_fs,
steps=build_cfg.estimate_only_dataflow_steps,
generate_outputs=[build_cfg.DataflowOutputType.ESTIMATE_REPORTS])
build_dataflow_cfg(onnx_dir + "folded_net.onnx", cfg_estimates)
build_dataflow_cfg(net_onnx_path, cfg_estimates_fs)
# model = model.transform(ZynqBuild(platform = "ZCU102", period_ns = 10))
# model.save(onnx_dir + "hw_net.onnx")
###Output
_____no_output_____
###Markdown
Network Performance Estimates
###Code
print("personal finn:")
! cat {out_dir}/report/estimate_network_performance.json
print("\n\n\nauto finn:")
! cat {out_dir_fs}/report/estimate_network_performance.json
###Output
_____no_output_____
###Markdown
Layer Resources Estimates
###Code
print("personal finn:")
! cat {out_dir}/report/estimate_layer_resources.json
print("\n\n\nauto finn:")
! cat {out_dir_fs}/report/estimate_layer_resources.json
###Output
_____no_output_____
###Markdown
Layer Cycles Estimates
###Code
print("personal finn:")
! cat {out_dir}/report/estimate_layer_cycles.json
print("\n\n\nauto finn:")
! cat {out_dir_fs}/report/estimate_layer_cycles.json
###Output
_____no_output_____
###Markdown
Auto Folding Configurations
###Code
print("personal finn:")
! cat {out_dir}/auto_folding_config.json
print("\n\n\nauto finn:")
! cat {out_dir_fs}/auto_folding_config.json
###Output
_____no_output_____
###Markdown
Build
###Code
# out_dir = onnx_dir + "hw_build_out"
# #Delete previous run results if exist
# if os.path.exists(out_dir):
# shutil.rmtree(out_dir)
# print("Previous run results deleted!")
# cfg_build = DataflowBuildConfig(auto_fifo_depths= auto_fifo_depths,
# board=board,
# fpga_part=fpga_part,
# mvau_wwidth_max=mvau_wwidth_max,
# synth_clk_period_ns= clk_ns,
# target_fps=target_fps,
# output_dir=out_dir,
# shell_flow_type=build_cfg.ShellFlowType.VIVADO_ZYNQ,
# generate_outputs=[build_cfg.DataflowOutputType.BITFILE,
# build_cfg.DataflowOutputType.PYNQ_DRIVER,
# build_cfg.DataflowOutputType.DEPLOYMENT_PACKAGE])
out_dir_fs = onnx_dir + "hw_fs_build_out"
#Delete previous run results if exist
if os.path.exists(out_dir_fs):
shutil.rmtree(out_dir_fs)
print("Previous run results deleted!")
cfg_build_fs = DataflowBuildConfig(auto_fifo_depths=auto_fifo_depths,
board=board,
fpga_part=fpga_part,
mvau_wwidth_max=mvau_wwidth_max,
synth_clk_period_ns= clk_ns,
target_fps=target_fps,
output_dir=out_dir_fs,
shell_flow_type=build_cfg.ShellFlowType.VIVADO_ZYNQ,
generate_outputs=[build_cfg.DataflowOutputType.BITFILE,
build_cfg.DataflowOutputType.PYNQ_DRIVER,
build_cfg.DataflowOutputType.DEPLOYMENT_PACKAGE])
# build_dataflow_cfg(onnx_dir + "folded_net.onnx", cfg_build)
build_dataflow_cfg(net_onnx_path, cfg_build_fs)
###Output
_____no_output_____
###Markdown
Deploy
###Code
ip = os.getenv("PYNQ_IP", "128.131.80.208")
username = os.getenv("PYNQ_USERNAME", "xilinx")
password = os.getenv("PYNQ_PASSWORD", "xilinx")
port = os.getenv("PYNQ_PORT", 22)
target_dir = os.getenv("PYNQ_TARGET_DIR", "/home/xilinx/zcu102")
options = "-o PreferredAuthentications=publickey -o PasswordAuthentication=no"
# model = ModelWrapper(f"./onnx/{lenets_names[net_n]}_hw.onnx")
# model = model.transform(DeployToPYNQ(ip, port, username, password, target_dir))
# model.save(f"./onnx/{lenets_names[net_n]}_pynq.onnx")
###Output
_____no_output_____ |
B_Submissions_Kopuru_competition/2021-06-22_submits_afterwards/Customer Impact analyses/workerbee05_HEX-Predict2019.ipynb | ###Markdown
HEX algorithm **Kopuru Vespa Velutina Competition****XGBoost model**Purpose: Predict the number of Nests in each of Biscay's 112 municipalities for the year 2020.Output: *(WaspBusters_20210609_batch_XGBy_48019prodigal.csv)*@authors:* [email protected]* [email protected]* [email protected]* [email protected] DEPRECATEDIt is not possible to use the model to predict the year 2019, as this year was already used in the gridSearchCV to tune the hyperparameters Libraries
###Code
# Base packages -----------------------------------
import numpy as np
import pandas as pd
# Visualization -----------------------------------
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (15, 10)
import seaborn as sns
plt.style.use("seaborn-notebook")
# Scaling data ------------------------------------
from sklearn import preprocessing
# Grid search -------------------------------------
from sklearn.model_selection import GridSearchCV
# Confusion matrix --------------------------------
from sklearn.metrics import classification_report
# XGBoost -----------------------------------------
from xgboost import XGBRegressor
from xgboost import plot_importance
###Output
_____no_output_____
###Markdown
Functions
###Code
# Function that checks if final Output is ready for submission or needs revision
def check_data(HEX):
def template_checker(HEX):
submission_df = (HEX["CODIGO MUNICIPIO"].astype(str) + HEX["NOMBRE MUNICIPIO"]).sort_values().reset_index(drop=True)
template_df = (template["CODIGO MUNICIPIO"].astype(str) + template["NOMBRE MUNICIPIO"]).sort_values().reset_index(drop=True)
check_df = pd.DataFrame({"submission_df":submission_df,"template_df":template_df})
check_df["check"] = check_df.submission_df == check_df.template_df
if (check_df.check == False).any():
pd.options.display.max_rows = 112
return check_df.loc[check_df.check == False,:]
else:
return "All Municipality Names and Codes to be submitted match the Template"
print("Submission form Shape is", HEX.shape)
print("Number of Municipalities is", HEX["CODIGO MUNICIPIO"].nunique())
print("The Total 2020 Nests' Prediction is", int(HEX["NIDOS 2020"].sum()))
assert HEX.shape == (112, 3), "Error: Shape is incorrect."
assert HEX["CODIGO MUNICIPIO"].nunique() == 112, "Error: Number of unique municipalities is correct."
return template_checker(HEX)
###Output
_____no_output_____
###Markdown
Get the data
###Code
QUEEN_train = pd.read_csv('./WBds03_QUEENtrainMONTHS.csv', sep=',')
QUEEN_predict = pd.read_csv('./WBds03_QUEENpredictMONTHS.csv', sep=',')
clustersMario = pd.read_csv("./WBds_CLUSTERSnests.csv")
template = pd.read_csv("../../../Input_open_data/ds01_PLANTILLA-RETO-AVISPAS-KOPURU.csv",sep=";", encoding="utf-8")
#QUEEN_predict.isnull().sum()
QUEEN_train.shape
QUEEN_predict.shape
###Output
_____no_output_____
###Markdown
Add in more Clusters (nest amount clusters)
###Code
QUEEN_train = pd.merge(QUEEN_train, clustersMario, how = 'left', on = ['municip_code', 'municip_name'])
QUEEN_predict = pd.merge(QUEEN_predict, clustersMario, how = 'left', on = ['municip_code', 'municip_name'])
QUEEN_train.fillna(4, inplace=True)
QUEEN_predict.fillna(4, inplace=True)
QUEEN_train.shape
QUEEN_predict.shape
QUEEN_predict.Cluster.value_counts()
###Output
_____no_output_____
###Markdown
Get hyperparameters with GridsearchCV iterating on 2018's features (i.e. 2019's nests) as the test yearAnd training the model with 2017's features (i.e. 2018's nests as labels)
###Code
# The target variable
hyper_y_train = QUEEN_train.loc[QUEEN_train.year_offset.isin([2017]), ['municip_code', 'year_offset', 'month', 'NESTS']]
hyper_y_train = hyper_y_train.sort_values(by=['year_offset', 'month', 'municip_code'], ascending=True)
hyper_y_train.set_index(['year_offset', 'month', 'municip_code'], inplace=True)
hyper_y_test = QUEEN_train.loc[QUEEN_train.year_offset.isin([2018]), ['municip_code', 'year_offset', 'month', 'NESTS']]
hyper_y_test = hyper_y_test.sort_values(by=['year_offset', 'month', 'municip_code'], ascending=True)
hyper_y_test.set_index(['year_offset', 'month', 'municip_code'], inplace=True)
# The features matrix
hyperXtrain = QUEEN_train.loc[QUEEN_train.year_offset.isin([2017]), :].drop(['municip_name', 'station_code', 'station_name', 'NESTS'], axis=1)
hyperXtrain = hyperXtrain.sort_values(by=['year_offset', 'month', 'municip_code'], ascending=True)
hyperXtrain.set_index(['year_offset', 'month', 'municip_code'], inplace=True)
hyperXtest = QUEEN_train.loc[QUEEN_train.year_offset.isin([2018]), :].drop(['municip_name', 'station_code', 'station_name', 'NESTS'], axis=1)
hyperXtest = hyperXtest.sort_values(by=['year_offset', 'month', 'municip_code'], ascending=True)
hyperXtest.set_index(['year_offset', 'month', 'municip_code'], inplace=True)
xgb1 = XGBRegressor(random_state=23)
parameters = {'nthread':[4], #when use hyperthread, xgboost may become slower
'objective':['reg:linear'],
'learning_rate': [.03, 0.05, .07], #so called `eta` value
'max_depth': [5, 6, 7],
'min_child_weight': [4],
'silent': [1],
'subsample': [0.7],
'colsample_bytree': [0.7],
'n_estimators': [500]}
xgb_grid = GridSearchCV(xgb1,
parameters,
cv = 3,
n_jobs = 5,
verbose=True)
xgb_grid.fit(hyperXtrain, hyper_y_train)
print(xgb_grid.best_score_)
print(xgb_grid.best_params_)
#y_xgb_grid = xgb_grid.best_estimator_.predict(hyperXtest)
###Output
_____no_output_____
###Markdown
Prediction time! 1. Choose the model class
###Code
XGBRegressor
###Output
_____no_output_____
###Markdown
2. Instantiate the model
###Code
xgb = xgb_grid.best_estimator_
###Output
_____no_output_____
###Markdown
3. Prepare Feature Matrixes and Target Vectors
###Code
# The target variable
y_train = QUEEN_train.loc[:, ['municip_code', 'year_offset', 'month', 'NESTS']]
#y_train = y_train.sort_values(by=['year_offset', 'month', 'municip_code'], ascending=True)
y_train.set_index(['year_offset', 'month', 'municip_code'], inplace=True)
y_predict = QUEEN_predict.loc[:, ['municip_code', 'year_offset', 'month', 'NESTS']]
#y_predict = y_predict.sort_values(by=['year_offset', 'month', 'municip_code'], ascending=True)
y_predict.set_index(['year_offset', 'month', 'municip_code'], inplace=True)
# The features matrix
X_train = QUEEN_train.drop(['municip_name', 'station_code', 'station_name', 'NESTS'], axis=1)
#X_train = X_train.sort_values(by=['year_offset', 'month', 'municip_code'], ascending=True)
X_train.set_index(['year_offset', 'month', 'municip_code'], inplace=True)
X_predict = QUEEN_predict.drop(['municip_name', 'station_code', 'station_name', 'NESTS'], axis=1)
#X_predict = X_predict.sort_values(by=['year_offset', 'month', 'municip_code'], ascending=True)
X_predict.set_index(['year_offset', 'month', 'municip_code'], inplace=True)
X_train.shape
y_train.shape
X_predict.shape
# bear in mind this is not a real dataset! Just a placeholder because these labels are all zeroes, since only the competition organizers know this vector's real values
y_predict.shape
###Output
_____no_output_____
###Markdown
4. Fit the model to the training data sets. Scale and get feature importance
###Code
X = X_train.copy()
y = y_train.copy()
scalators = X.columns
X[scalators] = preprocessing.minmax_scale(X[scalators])
# define the model
model_fi = XGBRegressor(random_state=23)
# fit the model
model_fi.fit(X, y)
# get importance
importance = model_fi.feature_importances_
# summarize feature importance
#for i,v in enumerate(importance):
# print('Feature: %0d, Score: %.5f' % (i,v))
# plot feature importance
plot_importance(model_fi, height=0.5, xlabel="F-Score", ylabel="Feature Importance", grid=False)
plt.show()
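# Illustrative addition (a sketch): the top features can also be pulled out
# programmatically instead of hard-coding the column list in the next cell.
# The cutoff of 17 features is an assumption that mirrors the manual selection below.
importance_ranking = pd.Series(model_fi.feature_importances_, index=X.columns).sort_values(ascending=False)
top_features = importance_ranking.head(17).index.tolist()
print(top_features)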
###Output
_____no_output_____
###Markdown
Now, fit the model again, but only with the relevant features
###Code
X_train = X_train.loc[:, ['population', 'weath_humidity', 'food_fruit', 'weath_maxLevel', 'food_txakoli', 'weath_midLevel', 'weath_minLevel', 'colonies_amount', 'weath_maxWindM', 'weath_meanWindM', 'weath_accuRainfall', 'weath_10minRainfall', 'food_kiwi', 'food_apple', 'weath_days_rain1mm', 'weath_meanDayMaxWind', 'weath_meanTemp']]
X_predict = X_predict.loc[:, ['population', 'weath_humidity', 'food_fruit', 'weath_maxLevel', 'food_txakoli', 'weath_midLevel', 'weath_minLevel', 'colonies_amount', 'weath_maxWindM', 'weath_meanWindM', 'weath_accuRainfall', 'weath_10minRainfall', 'food_kiwi', 'food_apple', 'weath_days_rain1mm', 'weath_meanDayMaxWind', 'weath_meanTemp']]
xgb.fit(X_train, y_train)
###Output
[12:07:04] WARNING: C:/Users/Administrator/workspace/xgboost-win64_release_1.4.0/src/objective/regression_obj.cu:171: reg:linear is now deprecated in favor of reg:squarederror.
[12:07:04] WARNING: C:/Users/Administrator/workspace/xgboost-win64_release_1.4.0/src/learner.cc:573:
Parameters: { "silent" } might not be used.
This may not be accurate due to some parameters are only used in language bindings but
passed down to XGBoost core. Or some parameters are not used but slip through this
verification. Please open an issue if you find above cases.
###Markdown
5. Predict the labels for new data
###Code
y_predict = xgb.predict(X_predict)
score_train = xgb.score(X_train, y_train)
print(f"Score on the training set: {score_train:.0%}")
y_predict.shape
QUEEN_predict['NESTS'] = y_predict
QUEEN_predict.NESTS.sum()
QUEEN_predict.NESTS[QUEEN_predict.NESTS < 0] = 0
QUEEN_predict.NESTS.sum()
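# Illustrative alternative (a sketch): pandas' clip achieves the same flooring of
# negative predictions at zero without the chained assignment above, which can raise
# a SettingWithCopyWarning. Repeating it here is harmless because the operation is idempotent.
QUEEN_predict['NESTS'] = QUEEN_predict['NESTS'].clip(lower=0)
QUEEN_predict.NESTS.sum()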
# export the dataset with the monthly data for viz purposes:
#QUEEN_predict.to_csv('WBds05_2020prediction_monthly.csv', index=False)
###Output
_____no_output_____
###Markdown
Prepare the dataset for submission
###Code
HEX = QUEEN_predict.loc[:,['municip_code', 'municip_name', 'NESTS']].groupby(by=['municip_code', 'municip_name'], as_index=False).sum()
###Output
_____no_output_____
###Markdown
Adjust manually for Bilbao 48020 and generate the output
###Code
HEX.loc[HEX.municip_code.isin([48020]), 'NESTS'] = 0
HEX.loc[HEX.municip_code.isin([48022, 48071, 48088, 48074, 48051, 48020]), :]
HEX.columns = ["CODIGO MUNICIPIO", "NOMBRE MUNICIPIO", "NIDOS 2020"] # change column names to Spanish (Competition template)
check_data(HEX)
###Output
Submission form Shape is (112, 3)
Number of Municipalities is 112
The Total 2020 Nests' Prediction is 2900
###Markdown
Export dataset for submission
###Code
#HEX.to_csv('WaspBusters_20210609_136-mXGB-prodigal-GSCV-noSort-FI-no0s.csv', index=False)
###Output
_____no_output_____
###Markdown
VERSION Manual adjustments
###Code
HEX.columns = ['municip_code', 'municip_name', 'NESTS'] # revert column names back to the internal working names
HEX.loc[HEX.municip_code.isin([48022, 48071, 48088, 48074, 48051]), 'NESTS'] = [0,0,1,0,1]
HEX.loc[HEX.municip_code.isin([48022, 48071, 48088, 48074, 48051, 48020]), :]
HEX.columns = ["CODIGO MUNICIPIO", "NOMBRE MUNICIPIO", "NIDOS 2020"] # change column names to Spanish (Competition template)
check_data(HEX)
###Output
Submission form Shape is (112, 3)
Number of Municipalities is 112
The Total 2020 Nests' Prediction is 2826
###Markdown
Export dataset for submission
###Code
#HEX.to_csv('WaspBusters_20210609_months_XGBoost.csv', index=False)
###Output
_____no_output_____
###Markdown
Verify winner (3rd place)
###Code
MSE_98 = pd.read_csv('./WaspBusters_20210609_135-mXGB-prodigal-GSCV-noSort-FI-0s.csv')
check_data(MSE_98)
delta = MSE_98["NIDOS 2020"] - HEX["NIDOS 2020"]
delta.sum()
HEX.equals(MSE_98)
###Output
_____no_output_____ |
study_roadmaps/4_image_classification_zoo/Classifier - Malarial Cell Identitfication.ipynb | ###Markdown
Table of contents Install Monk Using pretrained model for classifying infected and normal malarial cells Training a classifier from scratch Install Monk - git clone https://github.com/Tessellate-Imaging/monk_v1.git - cd monk_v1/installation/Linux && pip install -r requirements_cu9.txt (Select the requirements file as per OS and CUDA version)
###Code
! git clone https://github.com/Tessellate-Imaging/monk_v1.git
# If using Colab install using the commands below
! cd monk_v1/installation/Misc && pip install -r requirements_colab.txt
# If using Kaggle uncomment the following command
#! cd monk_v1/installation/Misc && pip install -r requirements_kaggle.txt
# Select the requirements file as per OS and CUDA version when using a local system or cloud
#! cd monk_v1/installation/Linux && pip install -r requirements_cu9.txt
###Output
_____no_output_____
###Markdown
Use trained classifier for demo
###Code
# Monk
import os
import sys
sys.path.append("monk_v1/monk/");
# Download trained weights
! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1zrpT8lEBvJkig49cn59QstpA9Ml2dUm9' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1zrpT8lEBvJkig49cn59QstpA9Ml2dUm9" -O cls_malarial_trained.zip && rm -rf /tmp/cookies.txt
! unzip -qq cls_malarial_trained.zip
ls workspace/Project-Malarial-Cell/
# Pytorch project
from pytorch_prototype import prototype
# Load project in inference mode
gtf = prototype(verbose=1);
gtf.Prototype("Project-Malarial-Cell", "Pytorch-Densenet121", eval_infer=True);
#Other trained models - uncomment
#gtf.Prototype("Project-Malarial-Cell", "Pytorch-Densenet161", eval_infer=True);
#gtf.Prototype("Project-Malarial-Cell", "Pytorch-Densenet169", eval_infer=True);
img_name = "workspace/test/infected.png"
predictions = gtf.Infer(img_name=img_name);
from IPython.display import Image
Image(filename=img_name)
img_name = "workspace/test/uninfected.png"
predictions = gtf.Infer(img_name=img_name);
from IPython.display import Image
Image(filename=img_name)
###Output
_____no_output_____
###Markdown
Training custom classifier from scratch Dataset - Credits: https://www.kaggle.com/iarunava/cell-images-for-detecting-malaria Download
###Code
! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1mMEtGIK8UZNCrErXRJR-kutNTaN1zxjC' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1mMEtGIK8UZNCrErXRJR-kutNTaN1zxjC" -O malaria_cell.zip && rm -rf /tmp/cookies.txt
! unzip -qq malaria_cell.zip
###Output
_____no_output_____
###Markdown
Training
###Code
# Monk
import os
import sys
sys.path.append("monk_v1/monk/");
# Using mxnet-gluon backend
#from gluon_prototype import prototype
# For pytorch backend
from pytorch_prototype import prototype
# For Keras backend
#from keras_prototype import prototype
# Create Project and Experiment
gtf = prototype(verbose=1);
gtf.Prototype("Project-Malarial-Cell", "Pytorch-Densenet121");
os.listdir("malaria_cell")
gtf.Default(dataset_path="malaria_cell",
model_name="densenet121",
num_epochs=2);
###Output
_____no_output_____
###Markdown
How to change hyper parameters and models - Docs - https://github.com/Tessellate-Imaging/monk_v14 - Examples - https://github.com/Tessellate-Imaging/monk_v1/tree/master/study_roadmaps/1_getting_started_roadmap
###Code
#Start Training
gtf.Train();
#Read the training summary generated once you run the cell and training is completed
###Output
_____no_output_____
###Markdown
Testing on the dataset for validating accuracy
###Code
# Import monk
import os
import sys
sys.path.append("monk_v1/monk/");
# Using mxnet-gluon backend
#from gluon_prototype import prototype
# For pytorch backend
from pytorch_prototype import prototype
# For Keras backend
#from keras_prototype import prototype
# Create Project and Experiment
gtf = prototype(verbose=1);
gtf.Prototype("Project-Malarial-Cell", "Pytorch-Densenet121", eval_infer=True);
###Output
_____no_output_____
###Markdown
Dataset
###Code
! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1WHpd7M-E_EiXmdjOr48BfvlUtMRPV6PM' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1WHpd7M-E_EiXmdjOr48BfvlUtMRPV6PM" -O malaria_cell_val.zip && rm -rf /tmp/cookies.txt
! unzip -qq malaria_cell_val.zip
# Load dataset for validaion
gtf.Dataset_Params(dataset_path="malaria_cell_val");
gtf.Dataset();
# Run validation
accuracy, class_based_accuracy = gtf.Evaluate();
###Output
_____no_output_____
###Markdown
Table of contents Install Monk Using pretrained model for classifying infected and normal malarial cells Training a classifier from scratch Install Monk Using pip (Recommended) - colab (gpu) - All backends: `pip install -U monk-colab` - kaggle (gpu) - All backends: `pip install -U monk-kaggle` - cuda 10.2 - All backends: `pip install -U monk-cuda102` - Gluon backend: `pip install -U monk-gluon-cuda102` - Pytorch backend: `pip install -U monk-pytorch-cuda102` - Keras backend: `pip install -U monk-keras-cuda102` - cuda 10.1 - All backends: `pip install -U monk-cuda101` - Gluon backend: `pip install -U monk-gluon-cuda101` - Pytorch backend: `pip install -U monk-pytorch-cuda101` - Keras backend: `pip install -U monk-keras-cuda101` - cuda 10.0 - All backends: `pip install -U monk-cuda100` - Gluon backend: `pip install -U monk-gluon-cuda100` - Pytorch backend: `pip install -U monk-pytorch-cuda100` - Keras backend: `pip install -U monk-keras-cuda100` - cuda 9.2 - All backends: `pip install -U monk-cuda92` - Gluon backend: `pip install -U monk-gluon-cuda92` - Pytorch backend: `pip install -U monk-pytorch-cuda92` - Keras backend: `pip install -U monk-keras-cuda92` - cuda 9.0 - All backends: `pip install -U monk-cuda90` - Gluon backend: `pip install -U monk-gluon-cuda90` - Pytorch backend: `pip install -U monk-pytorch-cuda90` - Keras backend: `pip install -U monk-keras-cuda90` - cpu - All backends: `pip install -U monk-cpu` - Gluon backend: `pip install -U monk-gluon-cpu` - Pytorch backend: `pip install -U monk-pytorch-cpu` - Keras backend: `pip install -U monk-keras-cpu` Install Monk Manually (Not recommended) Step 1: Clone the library - git clone https://github.com/Tessellate-Imaging/monk_v1.git Step 2: Install requirements - Linux - Cuda 9.0 - `cd monk_v1/installation/Linux && pip install -r requirements_cu90.txt` - Cuda 9.2 - `cd monk_v1/installation/Linux && pip install -r requirements_cu92.txt` - Cuda 10.0 - `cd monk_v1/installation/Linux && pip install -r requirements_cu100.txt` - Cuda 10.1 - `cd monk_v1/installation/Linux && pip install -r requirements_cu101.txt` - Cuda 10.2 - `cd monk_v1/installation/Linux && pip install -r requirements_cu102.txt` - CPU (Non gpu system) - `cd monk_v1/installation/Linux && pip install -r requirements_cpu.txt` - Windows - Cuda 9.0 (Experimental support) - `cd monk_v1/installation/Windows && pip install -r requirements_cu90.txt` - Cuda 9.2 (Experimental support) - `cd monk_v1/installation/Windows && pip install -r requirements_cu92.txt` - Cuda 10.0 (Experimental support) - `cd monk_v1/installation/Windows && pip install -r requirements_cu100.txt` - Cuda 10.1 (Experimental support) - `cd monk_v1/installation/Windows && pip install -r requirements_cu101.txt` - Cuda 10.2 (Experimental support) - `cd monk_v1/installation/Windows && pip install -r requirements_cu102.txt` - CPU (Non gpu system) - `cd monk_v1/installation/Windows && pip install -r requirements_cpu.txt` - Mac - CPU (Non gpu system) - `cd monk_v1/installation/Mac && pip install -r requirements_cpu.txt` - Misc - Colab (GPU) - `cd monk_v1/installation/Misc && pip install -r requirements_colab.txt` - Kaggle (GPU) - `cd monk_v1/installation/Misc && pip install -r requirements_kaggle.txt` Step 3: Add to system path (Required for every terminal or kernel run) - `import sys` - `sys.path.append("monk_v1/");` Use trained classifier for demo
###Code
#Using pytorch backend
# When installed using pip
from monk.pytorch_prototype import prototype
# When installed manually (Uncomment the following)
#import os
#import sys
#sys.path.append("monk_v1/");
#sys.path.append("monk_v1/monk/");
#from monk.pytorch_prototype import prototype
# Download trained weights
! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1zrpT8lEBvJkig49cn59QstpA9Ml2dUm9' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1zrpT8lEBvJkig49cn59QstpA9Ml2dUm9" -O cls_malarial_trained.zip && rm -rf /tmp/cookies.txt
! unzip -qq cls_malarial_trained.zip
ls workspace/Project-Malarial-Cell/
# Load project in inference mode
gtf = prototype(verbose=1);
gtf.Prototype("Project-Malarial-Cell", "Pytorch-Densenet121", eval_infer=True);
#Other trained models - uncomment
#gtf.Prototype("Project-Malarial-Cell", "Pytorch-Densenet161", eval_infer=True);
#gtf.Prototype("Project-Malarial-Cell", "Pytorch-Densenet169", eval_infer=True);
img_name = "workspace/test/infected.png"
predictions = gtf.Infer(img_name=img_name);
from IPython.display import Image
Image(filename=img_name)
img_name = "workspace/test/uninfected.png"
predictions = gtf.Infer(img_name=img_name);
from IPython.display import Image
Image(filename=img_name)
###Output
Prediction
Image name: workspace/test/uninfected.png
Predicted class: Uninfected
Predicted score: 0.9962384104728699
###Markdown
Training custom classifier from scratch Dataset - Credits: https://www.kaggle.com/iarunava/cell-images-for-detecting-malaria Download
###Code
! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1mMEtGIK8UZNCrErXRJR-kutNTaN1zxjC' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1mMEtGIK8UZNCrErXRJR-kutNTaN1zxjC" -O malaria_cell.zip && rm -rf /tmp/cookies.txt
! unzip -qq malaria_cell.zip
###Output
_____no_output_____
###Markdown
Training
###Code
# Using mxnet-gluon backend
#from monk.gluon_prototype import prototype
# For pytorch backend
from monk.pytorch_prototype import prototype
# For Keras backend
#from monk.keras_prototype import prototype
# Create Project and Experiment
gtf = prototype(verbose=1);
gtf.Prototype("Project-Malarial-Cell", "Pytorch-Densenet121");
os.listdir("malaria_cell")
gtf.Default(dataset_path="malaria_cell",
model_name="densenet121",
num_epochs=2);
###Output
_____no_output_____
###Markdown
How to change hyper parameters and models - Docs - https://github.com/Tessellate-Imaging/monk_v14 - Examples - https://github.com/Tessellate-Imaging/monk_v1/tree/master/study_roadmaps/1_getting_started_roadmap
###Code
#Start Training
gtf.Train();
#Read the training summary generated once you run the cell and training is completed
###Output
_____no_output_____
###Markdown
Testing on the dataset for validating accuracy
###Code
# Using mxnet-gluon backend
#from monk.gluon_prototype import prototype
# For pytorch backend
from monk.pytorch_prototype import prototype
# For Keras backend
#from monk.keras_prototype import prototype
# Create Project and Experiment
gtf = prototype(verbose=1);
gtf.Prototype("Project-Malarial-Cell", "Pytorch-Densenet121", eval_infer=True);
###Output
_____no_output_____
###Markdown
Dataset
###Code
! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1WHpd7M-E_EiXmdjOr48BfvlUtMRPV6PM' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1WHpd7M-E_EiXmdjOr48BfvlUtMRPV6PM" -O malaria_cell_val.zip && rm -rf /tmp/cookies.txt
! unzip -qq malaria_cell_val.zip
# Load dataset for validation
gtf.Dataset_Params(dataset_path="malaria_cell_val");
gtf.Dataset();
# Run validation
accuracy, class_based_accuracy = gtf.Evaluate();
###Output
_____no_output_____ |
multi-class.ipynb | ###Markdown
Make split
###Code
# train_len=3500
# test_df=df[train_len:]
# df=df[:train_len]
df=good_df.copy()
df=df[df.Industry != 'Media']
df=df[df.Industry != 'Public Utilities']
df=df[df.Industry != 'Transportation']
df=df[df.Industry != 'Health Care']
min_count=df.groupby('Industry').count().min()[0]
traindf=pd.DataFrame()
testdf=pd.DataFrame()
for ind in np.unique(df['Industry']):
print(ind)
count=len(df[df.Industry==ind])
#print(count)
#if count>min_count:
new_index=(list(np.random.choice(df[df.Industry==ind].index,min_count,replace=False)))
print(len(df.loc[new_index]))
traindf=pd.concat((traindf,df.loc[new_index[:int(len(new_index)*0.8)]]),axis=0)
testdf=pd.concat((testdf,df.loc[new_index[int(len(new_index)*0.8):]]),axis=0)
old_index=np.arange(len(traindf))
print(len(old_index))
np.random.shuffle(old_index)
print(len(old_index))
traindf=traindf.iloc[old_index]
print(len(traindf))
old_index=np.arange(len(testdf))
print(len(old_index))
np.random.shuffle(old_index)
print(len(old_index))
testdf=testdf.iloc[old_index]
print(len(traindf))
train_x=[]
for vec in (traindf.feature):
train_x.append(list(vec))
test_x=[]
for vec in (testdf.feature):
test_x.append(list(vec))
train_x=np.array(train_x)
test_x=np.array(test_x)
train_y=traindf.Industry
test_y=testdf.Industry
print(test_x.shape)
print(test_y.shape)
print(train_x.shape)
print(train_y.shape)
ind_list=np.unique(df['Industry'])
ind_list
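# Illustrative addition (a sketch): an equivalent stratified 80/20 split can be done
# with scikit-learn instead of the manual per-industry loop above; the per-industry
# down-sampling to min_count would still need to happen beforehand.
from sklearn.model_selection import train_test_split
feat = np.array([list(v) for v in df.feature])
Xtr, Xte, ytr, yte = train_test_split(feat, df.Industry.values, test_size=0.2,
                                      stratify=df.Industry.values, random_state=0)
print(Xtr.shape, Xte.shape)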
###Output
_____no_output_____
###Markdown
Multi_class
###Code
o_good_df=good_df
good_df=good_df[good_df.Industry!='Media']
good_df=good_df[good_df.Industry!='Transportation']
#good_df=good_df[good_df.Industry.isin(['Miscellaneous','Consumer Services','Finance'])]
ind_list=np.unique(good_df['Industry'])
itd={}
ind_list
# df=good_df
train_class=[]
test_class=[]
big_test_x=[]
big_train_x=[]
clfs=[]
n=0
deltas=[]
new_test=pd.DataFrame()
new_df=pd.DataFrame()
for ind in ind_list:
sub_ind=good_df[good_df.Industry==ind].copy()
#test_index=sub_ind.index[:int(len(sub_ind)*0.2)]
test_index=sub_ind.index[int(len(sub_ind)*0.3):int(len(sub_ind)*0.5)]
new_test=pd.concat((new_test,sub_ind.loc[test_index].copy()),axis=0)
new_df=pd.concat((new_df,sub_ind.drop(test_index).copy()),axis=0)
new_df=new_df.sort_values(by=['showup'],ascending=False)
new_test=new_test.sort_values(by=['showup'],ascending=False)
for ind in ind_list:
itd[ind]=n
df=new_df.copy()
df['istech']=(df['Industry']==ind)
tech_df=df[df.istech]
non_tech_df=df[~df.istech]
train_x=[]
train_y=[]
train_len=len(tech_df)
end_len=len(tech_df)
total=len(tech_df)
train_class+=[n]*train_len
for vec in (tech_df.feature):
train_x.append(list(vec))
for vec in (non_tech_df.iloc[:train_len].feature):
train_x.append(list(vec))
train_y=np.array([1]*train_len+[0]*train_len)
print(np.unique(non_tech_df.iloc[:train_len].Industry))
clf=LogisticRegression()
#clf =RandomForestClassifier(n_estimators=1000,max_depth=10,criterion='entropy',n_jobs=8,max_features=50)
#clf=SVC(kernel='poly',probability=True,degree=2,C=1e6,tol=1e-06)
clf.fit(train_x,train_y)
clfs.append(clf)
n+=1
pred_prob=clf.predict_proba(train_x)
fpr, tpr, thresholds = metrics.roc_curve(train_y, pred_prob[:,1], pos_label=1)
auc=metrics.auc(fpr, tpr)
accuracy=sum(np.argmax(pred_prob,1)==train_y)/len(train_y)
print("rf auc: ",auc)
print("rf accuracy: ",accuracy)
best=0
dd=0
acc=0
dd=np.mean(pred_prob[train_y==1,1])
# for delta in np.arange(0,0.9,0.01):
# mody_pred=pred_prob.copy()
# mody_pred[:,0]=mody_pred[:,0]+delta
# f_pred=np.argmax(mody_pred,1)
# if sum(f_pred==train_y)/len(f_pred)>= best:
# best=sum(f_pred==train_y)/len(f_pred)
# dd=delta
# recall=sum(f_pred[f_pred==train_y]==1)/sum(train_y==1)
deltas.append(dd)
# print('best accu ',best)
print('delta ',dd)
# print('recall',recall)
big_test_x=[]
for vec in new_test.feature:
big_test_x.append(vec)
big_test_x=np.array(big_test_x)
n=0
test_class=[]
for ind in new_test.Industry:
test_class.append(np.where(ind_list == ind)[0][0])
n=0
test_class=np.array(test_class)
for clf in clfs:
tt=big_test_x[test_class==n,:]
ttn=big_test_x[test_class!=n,:]
#np.random.shuffle(ttn)
ttn=ttn[:len(tt),:]
#ttn=big_test_x[test_class!=n,:]
ttt=np.concatenate((tt,ttn),axis=0)
yy=len(tt)*[1]+len(ttn)*[0]
ppp=clf.predict_proba(ttt)
print('tt len',len(tt))
print('yy len',len(yy))
pred_y=np.argmax(ppp, axis=1)
#test=np.argmax(test_class, axis=1)
accu=sum(pred_y==np.array(yy))/len(yy)
print('acc : ',accu)
fpr, tpr, thresholds = metrics.roc_curve(yy, ppp[:,1], pos_label=1)
print('auc', metrics.auc(fpr, tpr))
n+=1
ppp
pred_prob_train=[]
pred_prob_test=[]
n=0
for clf in clfs:
# this_pred=clf.predict_proba(big_train_x)[:,0].reshape(-1,1)+deltas[n]
# if len(pred_prob_train)==0:
# pred_prob_train=this_pred
# else:
# pred_prob_train=np.concatenate((pred_prob_train,this_pred),axis=1)
this_pred=clf.predict_proba(big_test_x)[:,0].reshape(-1,1)
#this_pred=-this_pred
if len(pred_prob_test)==0:
pred_prob_test=this_pred
else:
pred_prob_test=np.concatenate((pred_prob_test,this_pred),axis=1)
n+=1
pred_y=np.argmax((pred_prob_test), axis=1)
#test=np.argmax(test_class, axis=1)
accu=sum(pred_y==np.array(test_class))/len(test_class)
print(accu)
# Accuracy notes from earlier runs, kept as comments so the cell still executes:
# svm: 0.2325
# rf: 0.1881
# lg: 0.1233
# svm: 0.2165
ind_list
for i in range(len(pred_prob_test[0,:])):
p=pred_prob_test[:,i]
t=np.array(test_class)
print(sum(t==i))
# y = test_y+1
# pred = lg.predict_proba(test_x)[:,1]
fpr, tpr, thresholds = metrics.roc_curve(t, p, pos_label=i)
print(metrics.auc(fpr, tpr))
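# Illustrative cross-check (a sketch): scikit-learn's built-in one-vs-rest wrapper gives
# a comparable baseline to the hand-rolled per-industry classifiers above. It reuses the
# big_test_x / test_class arrays built earlier and trains on the balanced new_df features.
from sklearn.multiclass import OneVsRestClassifier
ovr_train_x = np.array([list(v) for v in new_df.feature])
ovr_train_y = np.array([np.where(ind_list == ind)[0][0] for ind in new_df.Industry])
ovr = OneVsRestClassifier(LogisticRegression())
ovr.fit(ovr_train_x, ovr_train_y)
print('OvR accuracy:', ovr.score(big_test_x, test_class))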
###Output
45
0.54743130227
103
0.382011007359
55
0.564741336012
34
0.0799032406696
66
0.387917637918
26
0.650452488688
67
0.70933901919
21
0.940235690236
###Markdown
Binary
###Code
for ind in ind_list:
df=good_df.copy()
#df=shuffle(df)
print("-"*50)
print("Industry: ", ind)
df['istech']=(df['Industry']==ind)
tech_df=df[df.istech].copy()
non_tech_df=df[~df.istech].copy()
train_x=[]
train_y=[]
#tech_train_index=np.random.choice(tech_df.index,int(len(tech_df.index)*0.8),replace=False)
tech_train_index=tech_df.index[:int(len(tech_df.index)*0.8)]
tech_test_index=tech_df.drop(tech_train_index).index
#nontech_train_index=np.random.choice(non_tech_df.index,len(tech_train_index),replace=False)
nontech_train_index=non_tech_df.index[:len(tech_train_index)]
nontech_test_index=non_tech_df.index[len(tech_train_index):len(tech_train_index)+len(tech_test_index)]
# print(len(tech_train_index))
# print(len(tech_test_index))
# print(len(nontech_train_index))
# print(len(nontech_test_index))
for vec in (tech_df.loc[tech_train_index].feature):
train_x.append(list(vec))
for vec in (non_tech_df.loc[nontech_train_index].feature):
train_x.append(list(vec))
train_y=np.array([1]*len(tech_train_index)+[0]*len(nontech_train_index))
# test_x=[]
# for vec in (tech_df.iloc[train_len:total].feature):
# test_x.append(list(vec))
# for vec in (non_tech_df.iloc[train_len:total].feature):
# test_x.append(list(vec))
# test_y=np.array([1]*len(tech_df.iloc[train_len:total].feature)+[0]*len(non_tech_df.iloc[train_len:total]))
test_x=[]
for vec in (tech_df.loc[tech_test_index].feature):
test_x.append(list(vec))
# s_df=non_tech_df.iloc[train_len:]
# s_df=shuffle(s_df)
# for vec in (s_df.iloc[:total-train_len].feature):
# test_x.append(list(vec))
for vec in (non_tech_df.loc[nontech_test_index].feature):
test_x.append(list(vec))
test_y=np.array([1]*len(tech_test_index)+[0]*len(nontech_test_index))
train_x=np.array(train_x)
test_x=np.array(test_x)
clf = RandomForestClassifier(n_estimators=1000,max_depth=10,criterion='entropy',n_jobs=8,max_features=50)
clf.fit(train_x,train_y)
D=np.mean(clf.predict_proba(train_x[train_y==1])[:,1])-np.mean(clf.predict_proba(train_x[train_y==1])[:,0])
print('D',D)
pred_prob=clf.predict_proba(test_x)
fpr, tpr, thresholds = metrics.roc_curve(test_y, pred_prob[:,1], pos_label=1)
auc=metrics.auc(fpr, tpr)
pred_prob[:,0]=pred_prob[:,0]+D
accuracy=sum(np.argmax(pred_prob,1)==test_y)/len(test_y)
print("rf auc: ",auc)
print("rf accuracy: ",accuracy)
# best=0
# dd=0
# acc=0
# for delta in np.arange(0,0.9,0.01):
# mody_pred=pred_prob.copy()
# mody_pred[:,0]=mody_pred[:,0]+delta
# f_pred=np.argmax(mody_pred,1)
# if sum(f_pred[f_pred==test_y]==1)/sum(f_pred==1)> best:
# best=sum(f_pred[f_pred==test_y]==1)/sum(f_pred==1)
# dd=delta
# recall=sum(f_pred[f_pred==test_y]==1)/sum(test_y==1)
# print(best)
# print(dd)
# print(recall)
best=0
dd=0
acc=0
for delta in np.arange(0,0.9,0.01):
mody_pred=pred_prob.copy()
mody_pred[:,0]=mody_pred[:,0]+delta
f_pred=np.argmax(mody_pred,1)
if sum(f_pred==test_y)/len(f_pred)> best:
best=sum(f_pred==test_y)/len(f_pred)
dd=delta
recall=sum(f_pred[f_pred==test_y]==1)/sum(test_y==1)
print('accu ',best)
print('delta ',dd)
print('best recall',recall)
print(sum(test_y))
print(len(test_y))
#break
# # pred_prob=lg.predict_proba(test_x)
# fpr, tpr, thresholds = metrics.roc_curve(test_y, pred_prob[:,1], pos_label=1)
# auc=metrics.auc(fpr, tpr)
# accuracy=sum(np.argmax(pred_prob,1)==test_y)/len(test_y)
# print("lg auc: ",auc)
# print("lg accuracy: ",accuracy)
clf.predict_proba(train_x[train_y==1])
probs=pd.DataFrame({'pred':pred_prob[:,0],'pred_1':pred_prob[:,1],'target':test_y})
max(probs[probs.target==0].pred_1)
probs
###Output
_____no_output_____ |
code/templates/Alma1_enz_deg_exps.ipynb | ###Markdown
Analysis of the data of loss of enzymatic activity during DMSP degradation assays by Alma1 (eukaryotic DMSP lyase)
###Code
# For numerical calculations
import numpy as np
import pandas as pd
import scipy as sp
import math
import git
from scipy.integrate import odeint
from numpy import arange
from scipy.integrate import odeint
import scipy.optimize
from scipy.optimize import leastsq
from math import exp
from collections import OrderedDict
from sklearn.linear_model import LinearRegression
pd.options.mode.chained_assignment = None
# Find home directory for repo
repo = git.Repo("./", search_parent_directories=True)
homedir = repo.working_dir
# Import plotting features
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import matplotlib.animation as animation
import seaborn as sns
# Set plot style
sns.set_style("ticks")
sns.set_palette("colorblind", color_codes=True)
sns.set_context("paper")
# Magic command to plot inline
%matplotlib inline
#To graph in SVG (high def)
%config InlineBackend.figure_format="svg"
###Output
_____no_output_____
###Markdown
We performed three experiments to determine whether Alma1 was losing activity over the course of the DMSP degradation assay. First experiment: degradation of DMSP by different enzyme concentrations. Let's start by loading the data:
###Code
# Load data
df = pd.read_csv(f'{homedir}/data/raw/enz_deg/Alma1_enz_deg_DMSP_100uM.csv')
###Output
_____no_output_____
###Markdown
First, we will get the real concentration in each sample, which is 10 times that which is in the table (which corresponds to a 1:10 dilution). Then, we will sort the values.
###Code
# Create real concentration column
df ['dmsp_um_real']= df ['dmsp_um'] * 10
#Sort values
df = df.sort_values(['enzyme_ul_ml_rxn', 'time_min'])
df.head()
###Output
_____no_output_____
###Markdown
Fit through least squares We will assume that the DMSP degradation reactions follow Michaelis-Menten kinetics, where:$$V = {V_\max [DMSP] \over K_M + [DMSP]}.$$The change in the concentration of DMSP over the course of the enzyme assay will decrease following this recursion:$$DMSP(t + \Delta t) = DMSP(t) - {V_\max DMSP(t) \over K_M + DMSP(t)}\Delta t.$$Where $DMSP(t + \Delta t)$ is the concentration of DMSP in the time t plus an increment $\Delta t$, DMSP(t) is the concentration of DMSP in the previous time unit t, $V_\max$ is the maximum velocity of the reaction, and $K_M$ is the Michaelis-Menten constant.the function substrate_kinetics will compute this recursion.We will make a fit to the Michaelis-Menten kinetics using a previously reported $K_M$ value.
###Code
def substrate_kinetics(so, vmax, km, time):
'''
Function that computes the substrate concentration over time by
numerically integrating the recursive equation
Parameters
----------
so : float.
Initial concentration of substrate
vmax : float.
Max speed of enzyme
km : float.
Michaelis-Menten constant of enzyme
time : array-like.
Time points where to evaluate function
'''
# Compute ∆t
delta_t = np.diff(time)[0]
# Initialize array to save substrate concentration
substrate = np.zeros(len(time))
# Modify first entry
substrate[0] = so
# Loop through time points
for i, t in enumerate(time[1:]):
substrate[i+1] = substrate[i] -\
vmax * substrate[i] / (km + substrate[i]) * delta_t
return substrate
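# Illustrative usage (a sketch, with an assumed Vmax of 5 uM/min and the Km of 9000 uM
# used later in this notebook): integrate the recursion from a 100 uM start and check
# that the substrate concentration decreases monotonically.
_time_demo = np.linspace(0, 40, 401)
_substrate_demo = substrate_kinetics(so=100, vmax=5, km=9000, time=_time_demo)
assert np.all(np.diff(_substrate_demo) <= 0)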
###Output
_____no_output_____
###Markdown
We will now infer $V_{max}$ from the data using the substrate kinetic function:
###Code
#Define a function that computes the residuals to fit into scipy's least_squares.
def resid(vmax, so, km, time, time_exp, s_exp):
'''
Function that computes the residuals of the substrate concentration
according to the numerical integration of the dynamics.
Parameters
----------
vmax : float.
Max speed of enzyme
so : float.
Initial concentration of substrate
km : float.
Michaelis-Menten constant of enzyme
time : array-like.
Time points where to evaluate function
time_exp : array-like.
Time points where data was taken.
s_exp : array-like.
Experimental determination of substrate concentration
Returns
-------
residuals of experimental and theoretical values
'''
# Integrate substrate concentration
substrate = substrate_kinetics(so, vmax, km, time)
# Extract substrate at experimental time points
time_idx = np.isin(time, time_exp)
s_theory = substrate[time_idx]
return s_theory - s_exp
###Output
_____no_output_____
###Markdown
We will now utilize the previous function to calculate the $V_{max}$ for each concentration of enzyme:
###Code
#Group data by enzyme concentration
df_group = df.groupby(['enzyme_ul_ml_rxn'])
# Define column names
names = ['enzyme_ul_ml_rxn', 'vmax']
subs=[]
# Initialize empty dataframe to save fit results
df_fit_paramls = pd.DataFrame(columns=names)
# Loop through enzyme concentrations
for i, (group, data) in enumerate (df_group):
# Define time array
time = np.linspace(0, data.time_min.max(), 1000)
# Append experimental time points
time_exp = data.time_min
time = np.sort(
np.unique(
np.append(time, time_exp)
)
)
# Extract initial concentration
so = data.dmsp_um_real.max()
# Extract experimental concentrations
s_exp = data.dmsp_um_real.values
# Define km
km = 9000
#Fit Vmax
popt, _ = scipy.optimize.leastsq(
func=resid,
x0=100,
args=(so, km, time, time_exp, s_exp)
)
vmax = popt[0]
# Store parameters and group as list
fit = (group, popt[0])
# Convert list to pandas Series
series = pd.Series(fit, index=names)
# Append fit to dataframe
df_fit_paramls = df_fit_paramls.append(series, ignore_index=True)
# Create a substrate concentration list
substrate = substrate_kinetics(so, vmax, km, time)
subs.append(time)
df_fit_paramls
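# Illustrative addition (a sketch): Vmax is expected to scale roughly linearly with the
# amount of enzyme in the reaction, so Vmax per unit of enzyme should stay roughly
# constant across treatments if the enzyme is not losing activity.
print(df_fit_paramls.vmax / df_fit_paramls.enzyme_ul_ml_rxn)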
###Output
_____no_output_____
###Markdown
Plot for the DMSP degradation by Alma1 and the Michaelis-Menten fit
###Code
# Define fig and axes
fig = plt.figure(figsize=(2.95, 1.95), dpi=192)
ax = fig.add_subplot(111)
# Define colors
colors = sns.color_palette('colorblind', n_colors=len(df_group))
# Define markers
markers = ['o', 's', 'd', '*','^']
# Loop through replicate
for i, (group, data) in enumerate(df_group):
# Extract initial concentration
so = data.dmsp_um_real.max()
# Define km
Km = 9000
# Extract fit vmax
vmax = df_fit_paramls[df_fit_paramls.enzyme_ul_ml_rxn == group].vmax.values
# Define time array
time = np.linspace(0, data.time_min.max(), 1000)
# Append experimental time points
time_exp = data.time_min
time = np.sort(
np.unique(
np.append(time, time_exp)
)
)
# Plot fit
ax.plot(time, substrate_kinetics(so, vmax, Km, time), c=colors[i], label="")
# Plot experimental data
ax.scatter(data.time_min, data.dmsp_um_real, color=colors[i], marker=markers[i],
label=f"{group}X")
#ax.set_title('DddY. Vmax fitted.')
ax.set_ylabel(r'[DMSP] ($\mu$M)')
ax.set_xlabel(r'Time (min)')
#Set axes limits and tick marks
ax.set_xlim(-1,40)
ax.set_ylim(-5,100)
ax.set_xticks(range(0, 50, 10))
ax.set_yticks (range(0, 110, 20))
#Set legend position
ax.legend(bbox_to_anchor=(1, 0.9), title="[Alma1]")
#save figure
fig.savefig(f'{homedir}/figures/enz_deg/experiments/Alma1_enz_deg.pdf', bbox_inches='tight')
###Output
_____no_output_____
###Markdown
Second experiment: further addition of DMSP In this experiment, DMSP was added to 5 reaction vials at an initial concentration of 100 $\mu M$, and Alma1 was added at an initial concentration of 1.5X. After 38 minutes, further DMSP was added at different concentrations. Let's first load the data:
###Code
# load data
df_add = pd.read_csv(f'{homedir}/data/raw/enz_deg/Alma1_add_exps.csv')
df_add.head()
###Output
_____no_output_____
###Markdown
We will use the data from the first 38 minutes to determine the initial maximum velocity of the reaction, which is assumed to follow Michaelis-Menten kinetics.
###Code
# Filter data by experiment A (further DMSP addition)
df_exp_a = df_add[df_add['Experiment']=='A']
# Keep only the time points before the addition of extra DMSP (roughly the first 38 min)
# i.e. exclude the values recorded after the addition
df_exp_a_add_i = df_exp_a[df_exp_a['Type']=='Before']
#Group data by treatment
df_group1 = df_exp_a_add_i.groupby(['Treatment'])
# Define column names
names = ['enzyme_ul_ml_rxn', 'vmax']
# Initialize empty dataframe to save fit results
df_fit_paramls_add = pd.DataFrame(columns=names)
# Loop through enzyme concentrations
for i, (group, data) in enumerate (df_group1):
# Define time array
time = np.linspace(0, data.Time_min.max(), 1000)
# Append experimental time points
time_exp = data.Time_min
time = np.sort(
np.unique(
np.append(time, time_exp)
)
)
# Extract initial concentration
so = data.DMSP_uM.max()
# Extract experimental concentrations
s_exp = data.DMSP_uM.values
# Define km
km = 9000
#Fit Vmax
popt, _ = scipy.optimize.leastsq(
func=resid,
x0=100,
args=(so, km, time, time_exp, s_exp)
)
vmax = popt[0]
# Create a substrate list
substrate = substrate_kinetics(so, vmax, km, time)
# Store parameters and group as list
fit = (group, popt[0])
# Convert list to pandas Series
series = pd.Series(fit, index=names)
# Append fit to dataframe
df_fit_paramls_add = df_fit_paramls_add.append(series, ignore_index=True)
df_fit_paramls_add
###Output
_____no_output_____
###Markdown
The above dataframe shows the maximum velocity for each one of the 5 replicates of the Alma1 degradation assay. Now, we will calculate the maximum velocity after the addition of further DMSP.
###Code
#Utilize the function to get the residuals for Alma1
# Keep only the time points after the addition of extra DMSP
# i.e. exclude the values recorded before the addition
df_exp_a_add_f = df_exp_a[df_exp_a['Type']=='After']
#Group data by treatment
df_group2 = df_exp_a_add_f.groupby(['Treatment'])
# Define column names
names = ['enzyme_ul_ml_rxn', 'vmax']
# Initialize empty dataframe to save fit results
df_fit_paramls_add2 = pd.DataFrame(columns=names)
# Loop through enzyme concentrations
for i, (group, data) in enumerate (df_group2):
# Define time array
time = np.linspace(data.Time_min.min(), data.Time_min.max(), 1000)
# Append experimental time points
time_exp = data.Time_min
time = np.sort(
np.unique(
np.append(time, time_exp)
)
)
# Extract initial concentration
so = data.DMSP_uM.max()
# Extract experimental concentrations
s_exp = data.DMSP_uM.values
# Define km
km = 9000
#Fit Vmax
popt, _ = scipy.optimize.leastsq(
func=resid,
x0=100,
args=(so, km, time, time_exp, s_exp)
)
vmax = popt[0]
# Create a substrate list
substrate = substrate_kinetics(so, vmax, km, time)
# Store parameters and group as list
fit = (group, popt[0])
# Convert list to pandas Series
series = pd.Series(fit, index=names)
# Append fit to dataframe
df_fit_paramls_add2 = df_fit_paramls_add2.append(series, ignore_index=True)
df_fit_paramls_add2
###Output
_____no_output_____
###Markdown
We can clearly see that the maximum velocities before and after the addition of further DMSP are very different, and that the maximum velocities are also different for each one of the replicates after the addition of further DMSP, due to the fact that the concentration of DMSP added to them after the first 38 minutes of the experiments was different. Plot for the second experiment to test loss of enzyme activity in the DMSP degradation by Alma1
###Code
# Define fig and axes
fig = plt.figure(figsize=(2.95, 1.95), dpi=192)
ax = fig.add_subplot(111)
# Define colors
colors = sns.color_palette('colorblind', n_colors=len(df_group1))
# Define markers
markers = ['o', 's', 'd', '*','^']
#Group data by treatment to plot all data as scatter
df_group = df_exp_a.groupby(['Treatment'])
#Group data before the addition of DMSP by treatment to plot the fit on top of the data
df_group_i = df_exp_a_add_i.groupby(['Treatment'])
#Group data after the addition of DMSP by treatment to plot the fit on top of the data
df_group_f = df_exp_a_add_f.groupby(['Treatment'])
#Generate the fit for the data before the addition of DMSP
# Loop through replicate
for i, (group, data) in enumerate(df_group_i):
# Extract initial concentration
so = data.DMSP_uM.max()
# Extract km
Km = 9000
# Extract fit vmax
vmax = df_fit_paramls_add[df_fit_paramls_add.enzyme_ul_ml_rxn == group].vmax.values
# Define time array
time = np.linspace(0, data.Time_min.max(), 1000)
# Append experimental time points
time_exp = data.Time_min
time = np.sort(
np.unique(
np.append(time, time_exp)
)
)
# Plot fit
ax.plot(time, substrate_kinetics(so, vmax, Km, time), c=colors[i], label="")
#Generate the fit for the data after the addition of DMSP
# Loop through enzyme concentrations
for i, (group, data) in enumerate (df_group_f):
# Define time array
time = np.linspace(data.Time_min.min(), data.Time_min.max(), 1000)
# Append experimental time points
time_exp = data.Time_min
time = np.sort(
np.unique(
np.append(time, time_exp)
)
)
# Extract initial concentration
so = data.DMSP_uM.max()
# Extract experimental concentrations
s_exp = data.DMSP_uM.values
# Define km
km = 9000
#Fit Vmax
popt, _ = scipy.optimize.leastsq(
func=resid,
x0=100,
args=(so, km, time, time_exp, s_exp)
)
vmax = popt[0]
# Plot fit
ax.plot(time, substrate_kinetics(so, vmax, Km, time), c=colors[i], label="")
# Define labels for plots
labels = ('2X','1.5X','X','0.5X','0.25X')
#Loop through all data to plot them as scatter
for i, (group, data) in enumerate(df_group):
# Plot experimental data
ax.scatter(data.Time_min, data.DMSP_uM, color=colors[i], marker=markers[i],
label=labels[i])
#Set axes labels and tick marks
ax.set_ylabel(r'[DMSP] ($\mu$M)')
ax.set_xlabel(r'Time (min)')
ax.set_xlim(-1,80)
ax.set_xticks(range(0, 90, 20))
ax.set_yticks (range(0, 260, 60))
#Add vertical dotted line
ax.axvline(linewidth=1, x = 37, color='black', linestyle='--')
#Set legend and legend position
ax.legend(bbox_to_anchor=(1.05, -0.3), title="[DMSP] ($\mu$M)", ncol=3)
#Save figure
fig.savefig(f'{homedir}/figures/enz_deg/experiments/Alma1_enz_deg_further_DMSP.pdf', bbox_inches='tight')
###Output
_____no_output_____
###Markdown
Third experiment: further addition of Alma1 In this experiment, DMSP was added to 5 reaction vials at an initial concentration of 100 $\mu M$, and Alma1 was added at an initial concentration of 0.25X. After 38 minutes, further Alma1 was added at different concentrations. Let's first load the data:
###Code
# load data
df_add = pd.read_csv(f'{homedir}/data/raw/enz_deg/Alma1_add_exps.csv')
df_add.head()
# Filter data by experiment B (further addition of Alma1)
df_exp_b = df_add[df_add['Experiment']=='B']
# Keep only the time points before the addition of extra enzyme (roughly the first 38 min)
# i.e. exclude the values recorded after the addition
df_exp_b_add_i = df_exp_b[df_exp_b['Type']=='Before']
#Group data by treatment
df_group3 = df_exp_b_add_i.groupby(['Treatment'])
# Define column names
names = ['enzyme_ul_ml_rxn', 'vmax']
# Initialize empty dataframe to save fit results
df_fit_paramls_add_b = pd.DataFrame(columns=names)
# Loop through enzyme concentrations
for i, (group, data) in enumerate (df_group3):
# Define time array
time = np.linspace(data.Time_min.min(), data.Time_min.max(), 1000)
# Append experimental time points
time_exp = data.Time_min
time = np.sort(
np.unique(
np.append(time, time_exp)
)
)
# Extract initial concentration
so = data.DMSP_uM.max()
# Extract experimental concentrations
s_exp = data.DMSP_uM.values
# Define km
km = 9000
#Fit Vmax
popt, _ = scipy.optimize.leastsq(
func=resid,
x0=100,
args=(so, km, time, time_exp, s_exp)
)
vmax = popt[0]
# Create a substrate list
substrate = substrate_kinetics(so, vmax, km, time)
# Store parameters and group as list
fit = (group, popt[0])
# Convert list to pandas Series
series = pd.Series(fit, index=names)
# Append fit to dataframe
df_fit_paramls_add_b = df_fit_paramls_add_b.append(series, ignore_index=True)
df_fit_paramls_add_b
###Output
_____no_output_____
###Markdown
The above dataframe shows the maximum velocity for each one of the 5 replicates of the Alma1 degradation assay. Now, we will calculate the maximum velocity after the addition of further Alma1.
###Code
# Keep only the time points after the addition of extra enzyme (times greater than 36 min)
# i.e. exclude the values recorded before the addition
df_exp_b_add_f = df_exp_b[df_exp_b['Time_min']>36]
#Group data by treatment
df_group4 = df_exp_b_add_f.groupby(['Treatment'])
# Define column names
names = ['enzyme_ul_ml_rxn', 'vmax']
# Initialize empty dataframe to save fit results
df_fit_paramls_add_b2 = pd.DataFrame(columns=names)
# Loop through enzyme concentrations
for i, (group, data) in enumerate (df_group4):
# Define time array
time = np.linspace(data.Time_min.min(), data.Time_min.max(), 1000)
# Append experimental time points
time_exp = data.Time_min
time = np.sort(
np.unique(
np.append(time, time_exp)
)
)
# Extract initial concentration
so = data.DMSP_uM.max()
# Extract experimental concentrations
s_exp = data.DMSP_uM.values
# Define km
km = 9000
#Fit Vmax
popt, _ = scipy.optimize.leastsq(
func=resid,
x0=100,
args=(so, km, time, time_exp, s_exp)
)
vmax = popt[0]
# Create a substrate list
substrate = substrate_kinetics(so, vmax, km, time)
# Store parameters and group as list
fit = (group, popt[0])
# Convert list to pandas Series
series = pd.Series(fit, index=names)
# Append fit to dataframe
df_fit_paramls_add_b2 = df_fit_paramls_add_b2.append(series, ignore_index=True)
df_fit_paramls_add_b2
###Output
_____no_output_____
###Markdown
We can clearly see that the maximum velocities before and after the addition of further Alma1 are very different, and that the maximum velocities are also different for each one of the replicates after the addition of further Alma1, due to the fact that the concentration of Alma1 added to them after the first 38 minutes of the experiments was different. Plot for the third experiment to test loss of enzyme activity in the DMSP degradation by Alma1
###Code
# Define fig and axes
fig = plt.figure(figsize=(2.95, 1.95), dpi=192)
ax = fig.add_subplot(111)
# Define colors
colors = sns.color_palette('colorblind', n_colors=len(df_group1))
# Define markers
markers = ['o', 's', 'd', '*','^']
#Group data by treatment to plot all data as scatter
df_groupb = df_exp_b.groupby(['Treatment'])
#Group data before the addition of enzyme by treatment to plot the fit on top of the data
df_group_ib = df_exp_b_add_i.groupby(['Treatment'])
#Group data after the addition of enzyme by treatment to plot the fit on top of the data
df_group_fb = df_exp_b_add_f.groupby(['Treatment'])
#Generate the fit for the data before the addition of enzyme
# Loop through replicate
for i, (group, data) in enumerate(df_group_ib):
# Extract initial concentration
so = data.DMSP_uM.max()
# Extract km
Km = 9000
# Extract fit vmax
vmax = df_fit_paramls_add_b[df_fit_paramls_add_b.enzyme_ul_ml_rxn == group].vmax.values
# Define time array
time = np.linspace(0, data.Time_min.max(), 1000)
# Append experimental time points
time_exp = data.Time_min
time = np.sort(
np.unique(
np.append(time, time_exp)
)
)
# Plot fit
ax.plot(time, substrate_kinetics(so, vmax, Km, time), c=colors[i], label="")
#Generate the fit for the data after the addition of enzyme
# Loop through enzyme concentrations
for i, (group, data) in enumerate (df_group_fb):
# Define time array
time = np.linspace(data.Time_min.min(), data.Time_min.max(), 1000)
# Append experimental time points
time_exp = data.Time_min
time = np.sort(
np.unique(
np.append(time, time_exp)
)
)
# Extract initial concentration
so = data.DMSP_uM.max()
# Extract experimental concentrations
s_exp = data.DMSP_uM.values
# Define km
km = 9000
#Fit Vmax
popt, _ = scipy.optimize.leastsq(
func=resid,
x0=100,
args=(so, km, time, time_exp, s_exp)
)
vmax = popt[0]
# Plot fit
ax.plot(time, substrate_kinetics(so, vmax, Km, time), c=colors[i], label="")
# Define labels for plots
labels = ('X','2X','3X','6X','10X')
#Loop through all data to plot them as scatter
for i, (group, data) in enumerate(df_groupb):
# Plot experimental data
ax.scatter(data.Time_min, data.DMSP_uM, color=colors[i], marker=markers[i],
label=labels[i])
#Set axes labels, limits and tick marks
ax.set_ylabel(r'[DMSP] ($\mu$M)')
ax.set_xlabel(r'Time (min)')
ax.set_xlim(-1,80)
ax.set_ylim(-5,100)
ax.set_xticks(range(0, 90, 10))
ax.set_yticks (range(0, 110, 20))
# Set vertical dashed line
ax.axvline(linewidth=1, x = 38, color='black', linestyle='--')
# Set legend position
ax.legend(bbox_to_anchor=(1, -0.3), title="[Alma1]", ncol=3)
#Save figure
fig.savefig(f'{homedir}/figures/enz_deg/experiments/Alma1_enz_deg_further_Alma1.pdf', bbox_inches='tight')
###Output
_____no_output_____ |
docs/tutorials/pure_pytorch/CORAL_cement.ipynb | ###Markdown
CORAL MLP for predicting cement strength (cement_strength) This tutorial explains how to train a deep neural network (here: multilayer perceptron) with the CORAL layer and loss function for ordinal regression. 0 -- Obtaining and preparing the cement_strength dataset We will be using the cement_strength dataset from [https://github.com/gagolews/ordinal_regression_data/blob/master/cement_strength.csv](https://github.com/gagolews/ordinal_regression_data/blob/master/cement_strength.csv). First, we are going to download and prepare the dataset and save it as CSV files locally. This is a general procedure that is not specific to CORAL. This dataset has 5 ordinal labels (1, 2, 3, 4, and 5). Note that CORAL requires labels to start at 0, which is why we subtract "1" from the label column.
###Code
import pandas as pd
import numpy as np
data_df = pd.read_csv("https://raw.githubusercontent.com/gagolews/ordinal_regression_data/master/cement_strength.csv")
data_df["response"] = data_df["response"]-1 # labels should start at 0
data_labels = data_df["response"]
data_features = data_df.loc[:, ["V1", "V2", "V3", "V4", "V5", "V6", "V7", "V8"]]
print('Number of features:', data_features.shape[1])
print('Number of examples:', data_features.shape[0])
print('Labels:', np.unique(data_labels.values))
###Output
Number of features: 8
Number of examples: 998
Labels: [0 1 2 3 4]
###Markdown
Split into training and test data
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
data_features.values,
data_labels.values,
test_size=0.2,
random_state=1,
stratify=data_labels.values)
###Output
_____no_output_____
###Markdown
Standardize features
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
1 -- Setting up the dataset and dataloader In this section, we set up the data set and data loaders using PyTorch utilities. This is a general procedure that is not specific to CORAL.
###Code
import torch
##########################
### SETTINGS
##########################
# Hyperparameters
random_seed = 1
learning_rate = 0.05
num_epochs = 20
batch_size = 128
# Architecture
NUM_CLASSES = 5  # the cement_strength dataset has 5 ordinal labels (0-4 after shifting)
# Other
DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print('Training on', DEVICE)
from torch.utils.data import Dataset
class MyDataset(Dataset):
def __init__(self, feature_array, label_array, dtype=np.float32):
self.features = feature_array.astype(np.float32)
self.labels = label_array
def __getitem__(self, index):
inputs = self.features[index]
label = self.labels[index]
return inputs, label
def __len__(self):
return self.labels.shape[0]
import torch
from torch.utils.data import DataLoader
# Note transforms.ToTensor() scales input images
# to 0-1 range
train_dataset = MyDataset(X_train_std, y_train)
test_dataset = MyDataset(X_test_std, y_test)
train_loader = DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True, # want to shuffle the dataset
num_workers=0) # number processes/CPUs to use
test_loader = DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False,
num_workers=0)
# Checking the dataset
for inputs, labels in train_loader:
print('Input batch dimensions:', inputs.shape)
print('Input label dimensions:', labels.shape)
break
###Output
Input batch dimensions: torch.Size([128, 8])
Input label dimensions: torch.Size([128])
###Markdown
2 - Equipping MLP with CORAL layer In this section, we are using the CoralLayer implemented in `coral_pytorch` to outfit a multilayer perceptron for ordinal regression. Note that the CORAL method only requires replacing the last (output) layer, which is typically a fully-connected layer, by the CORAL layer.Also, please use the `sigmoid` not softmax function (since the CORAL method uses a concept known as extended binary classification as described in the paper).
###Code
from coral_pytorch.layers import CoralLayer
class MLP(torch.nn.Module):
def __init__(self, in_features, num_classes, num_hidden_1=300, num_hidden_2=300):
super().__init__()
self.my_network = torch.nn.Sequential(
# 1st hidden layer
torch.nn.Linear(in_features, num_hidden_1, bias=False),
torch.nn.LeakyReLU(),
torch.nn.Dropout(0.2),
torch.nn.BatchNorm1d(num_hidden_1),
# 2nd hidden layer
torch.nn.Linear(num_hidden_1, num_hidden_2, bias=False),
torch.nn.LeakyReLU(),
torch.nn.Dropout(0.2),
torch.nn.BatchNorm1d(num_hidden_2),
)
### Specify CORAL layer
self.fc = CoralLayer(size_in=num_hidden_2, num_classes=num_classes)
###--------------------------------------------------------------------###
def forward(self, x):
x = self.my_network(x)
##### Use CORAL layer #####
logits = self.fc(x)
probas = torch.sigmoid(logits)
###--------------------------------------------------------------------###
return logits, probas
torch.manual_seed(random_seed)
model = MLP(in_features=8, num_classes=NUM_CLASSES)
model.to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
###Output
_____no_output_____
###Markdown
3 - Using the CORAL loss for model training During training, all you need to do is to 1) convert the integer class labels into the extended binary label format using the `levels_from_labelbatch` provided via `coral_pytorch`:```python levels = levels_from_labelbatch(class_labels, num_classes=NUM_CLASSES)```2) Apply the CORAL loss (also provided via `coral_pytorch`):```python loss = coral_loss(logits, levels)```
###Code
from coral_pytorch.dataset import levels_from_labelbatch
from coral_pytorch.losses import coral_loss
for epoch in range(num_epochs):
model = model.train()
for batch_idx, (features, class_labels) in enumerate(train_loader):
##### Convert class labels for CORAL
levels = levels_from_labelbatch(class_labels,
num_classes=NUM_CLASSES)
###--------------------------------------------------------------------###
features = features.to(DEVICE)
levels = levels.to(DEVICE)
logits, probas = model(features)
#### CORAL loss
loss = coral_loss(logits, levels)
###--------------------------------------------------------------------###
optimizer.zero_grad()
loss.backward()
optimizer.step()
### LOGGING
if not batch_idx % 200:
print ('Epoch: %03d/%03d | Batch %03d/%03d | Loss: %.4f'
%(epoch+1, num_epochs, batch_idx,
len(train_loader), loss))
###Output
_____no_output_____
###Markdown
4 -- Evaluate model Finally, after model training, we can evaluate the performance of the model, for example via the mean absolute error and mean squared error measures. For this, we are going to use the `proba_to_label` utility function from `coral_pytorch` to convert the probabilities back to the original labels.
###Code
from coral_pytorch.dataset import proba_to_label
def compute_mae_and_mse(model, data_loader, device):
with torch.no_grad():
mae, mse, acc, num_examples = 0., 0., 0., 0
for i, (features, targets) in enumerate(data_loader):
features = features.to(device)
targets = targets.float().to(device)
logits, probas = model(features)
predicted_labels = proba_to_label(probas).float()
num_examples += targets.size(0)
mae += torch.sum(torch.abs(predicted_labels - targets))
mse += torch.sum((predicted_labels - targets)**2)
mae = mae / num_examples
mse = mse / num_examples
return mae, mse
train_mae, train_mse = compute_mae_and_mse(model, train_loader, DEVICE)
test_mae, test_mse = compute_mae_and_mse(model, test_loader, DEVICE)
print(f'Mean absolute error (train/test): {train_mae:.2f} | {test_mae:.2f}')
print(f'Mean squared error (train/test): {train_mse:.2f} | {test_mse:.2f}')
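# Illustrative addition (a sketch): predicting the ordinal label for a single
# standardized test example with the trained CORAL model, using proba_to_label on
# the sigmoid outputs.
model.eval()
with torch.no_grad():
    example = torch.tensor(X_test_std[:1], dtype=torch.float32).to(DEVICE)
    _, example_probas = model(example)
    print('Predicted label:', proba_to_label(example_probas).item())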
###Output
Mean absolute error (train/test): 0.27 | 0.34
Mean squared error (train/test): 0.28 | 0.34
|
models/v3/.ipynb_checkpoints/Trigger_Function_Explanation-checkpoint.ipynb | ###Markdown
The Trigger Function: Transforming Continuous Preferences into Discrete Events This notebook is a mathematical deep dive into the derivation of the Trigger Function used in Conviction Voting for the 1Hive use case. The role of the trigger function in the conviction voting algorithm is to determine if a sufficient amount of conviction has accumulated in support of a particular proposal, at which point it passes from being a candidate proposal to an active proposal. In the 1Hive use case for conviction, proposals map to precise quantities of resources $r$ requested from a communal resource pool $R$ (which is time varying $R_t$ but we will drop the subscript for ease of reading). Furthermore, there is a supply of governance tokens $S$ which are being used as part of the governance process. In the implementation the quantity $S$ will be the effective supply, which is the subset of the total Supply for the governance token in question. We assume a time varying supply $S_t$ and therefore we can interpret $S_t$ as the effective supply without loss of generality. From here forward, we will drop the subscript and refer to $S$ for ease of reading. The process of passing a proposal results in an allocation of $r$ funds as shown in the figure below. This diagram shows the trigger function logic, which depends on token supply $S$, total resources available $R$ and total conviction $y$ at time $t$, as well as the proposal's requested resources $r$, the maximum share of funds a proposal can take ($\beta$) and a tuning parameter for the trigger function ($\rho$). Essentially, this function controls the maximum amount of funds that can be requested by a proposal ($\beta$), using an equation resembling electron repulsion to ensure conviction increases massively beyond that point, so that no proposals may request more than $\beta$ share of total funds. Parameter Definition* $\alpha \in (0,1)$ is the parameter that determines the half life decay rate of conviction, as defined in the [Deriving Alpha notebook](https://nbviewer.jupyter.org/github/BlockScience/Aragon_Conviction_Voting/blob/master/models/v3/Deriving_Alpha.ipynb) and should be tuned according to a desired half life.* $\beta\in (0,1)$ is the max % of total funds that can be discharged by a single proposal, and is the asymptotic limit for the trigger function. It is impossible to discharge more than $\beta$ share of funds. * $\rho \in (0, \beta^2)$ is the scale factor for the trigger function. Note that we require $0<\rho <\beta^2$ The trigger function is defined by: $y^*(r) = \frac{\rho S}{(1-\alpha)\left(\beta - \frac{r}{R}\right)^2 }$ The geometric properties of this function with respect to the parameter choices are shown here: On this plot we can see that there is a maximum conviction that can be reached for a proposal, and also a maximum achievable amount of funds released for a single proposal, which are important bounds for a community to establish for their funding pool. Note that by requiring that $0<\rho <\beta^2$, the following holds: $0<\frac{\rho}{\beta^2}<1$ and $0<\beta - \sqrt \rho <\beta <1$ Initializing Conditions for Plot Series
###Code
import numpy as np
import matplotlib.pyplot as plt
import inspect
import warnings
warnings.filterwarnings("ignore")
from cadCAD.configuration.utils import config_sim
from model.parts.utils import *
from model.parts.sys_params import *
initial_values
params
supply = initial_values['supply']
funds = initial_values['funds']
alpha = params['alpha'][0]
beta = params['beta'][0]
rho = params['rho'][0]
def trigger(requested, funds, supply, alpha, beta, rho):
'''
Function that determines threshold for proposals being accepted.
Refactored slightly from built in to be explicit for demo
'''
share = requested/funds
if share < beta:
threshold = rho*supply/(beta-share)**2 * 1/(1-alpha)
return threshold
else:
return np.inf
###Output
_____no_output_____
###Markdown
The actual trigger function used in the V3 simulation is below:
###Code
trigger_simulation = inspect.getsource(trigger_threshold)
print(trigger_simulation)
###Output
def trigger_threshold(requested, funds, supply, alpha, params):
'''
Description:
Function that determines threshold for proposals being accepted.
Parameters:
requested: float, funds requested
funds: float, funds
supply: float
alpha: float
params: dictionary of parameters as floats
Returns:
Threshold value
'''
share = requested/funds
if share < params['beta']:
threshold = params['rho']*supply/(params['beta']-share)**2 * 1/(1-alpha)
return threshold
else:
return np.inf
###Markdown
 Simple derivationsWe can plug in some boundary conditions to determine our minimum required and maximum achievable conviction. We can also determine the maximum achievable funds a proposal is able to request, to understand the upper bounds of individual proposal funding.* min_required_conviction = $y^*(0) = \frac{\rho S}{(1-\alpha)\beta^2}$* max_achievable_conviction = $\frac{S}{1-\alpha}$* min_required_conviction_as_a_share_of_max = $\frac{\rho S}{(1-\alpha)\beta^2} \cdot \frac{1-\alpha}{S} = \frac{\rho}{\beta^2}$* To compute the max_achievable_request solve: $\frac{S}{1-\alpha} = \frac{\rho S}{(1-\alpha)\left(\beta-\frac{r}{R}\right)^2}$* max_achievable_request = $r = (\beta -\sqrt\rho)F$
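 The algebra behind the last two bullets is compressed above, so here is a short worked version using only the symbols already defined (note that the funds pool written $F$ in the last bullet is the same quantity as $R$ in the trigger function): setting $y^*(r)$ equal to the maximum achievable conviction gives $\frac{S}{1-\alpha} = \frac{\rho S}{(1-\alpha)\left(\beta-\frac{r}{R}\right)^2} \Rightarrow \left(\beta-\frac{r}{R}\right)^2 = \rho \Rightarrow \beta - \frac{r}{R} = \sqrt{\rho} \Rightarrow r = (\beta - \sqrt{\rho})R$, taking the positive root because $r < \beta R$.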
###Code
min_required_conviction = trigger(0, funds, supply, alpha, beta, rho)
print("min_required_conviction = "+str(min_required_conviction))
max_achievable_conviction = supply/(1-alpha)
print("max_achievable_conviction = "+str(max_achievable_conviction))
print("")
print("min_achievable_conviction_as_a_share_of_max_achievable_conviction = "+str(min_required_conviction/max_achievable_conviction))
print("")
max_request = beta*funds
max_achievable_request = (beta - np.sqrt(rho))*funds
print("max_achievable_request = "+str(max_achievable_request))
print("total_funds = "+str(funds))
print("")
print("max_achievable_request_as_a_share_of_funds = "+str(max_achievable_request/funds))
granularity = 100
requests = np.arange(0,.9*max_request, max_request/granularity)
requests_as_share_of_funds = requests/funds
conviction_required = np.array([trigger(r, funds, supply, alpha, beta, rho) for r in requests])
conviction_required_as_share_of_max = conviction_required/max_achievable_conviction
###Output
min_required_conviction = 6783.893932236272
max_achievable_conviction = 108542.30291578037
min_achievable_conviction_as_a_share_of_max_achievable_conviction = 0.06249999999999999
max_achievable_request = 730.0815000000001
total_funds = 4867.21
max_achievable_request_as_a_share_of_funds = 0.15000000000000002
###Markdown
 Plot series 1: Examining the Shape of the Trigger Function Compared to Absolute Funds Requested These plots demonstrate the trigger function shape, showing how the amount of conviction required increases as the amount of requested (absolute) funds increases. These plots are based on alpha, Supply and Funds as initialized above.
###Code
shape_of_trigger_in_absolute_terms(requests, conviction_required,max_request,
max_achievable_request,max_achievable_conviction,
min_required_conviction)
shape_of_trigger_in_absolute_terms(requests, conviction_required,max_request,
max_achievable_request,max_achievable_conviction,
min_required_conviction,log=True)
###Output
_____no_output_____
###Markdown
The above plots look at the shape of the trigger function on a linear and log scale, where you can see conviction required to pass a proposal increase with the absolute amount of funds requested. Plot series 2: Examining the Shape of the Trigger Function Compared to Relative Funds Requested These plots demonstrate the trigger function shape, showing how the amount of conviction required increases as the **proportion** of requested funds (relative to total funds) increases. These plots are based on alpha, Supply and Funds as initialized above.
###Code
shape_of_trigger_in_relative_terms(requests_as_share_of_funds, conviction_required_as_share_of_max
,max_request, funds, max_achievable_request,
max_achievable_conviction,
min_required_conviction)
shape_of_trigger_in_relative_terms(requests_as_share_of_funds, conviction_required_as_share_of_max
,max_request, funds, max_achievable_request,
max_achievable_conviction,
min_required_conviction,log=True)
###Output
_____no_output_____
###Markdown
 The above plots look at the shape of the trigger function on a linear and log scale, where you can see conviction required to pass a proposal increase with the percentage of total funds requested. The two green lines intersect at persistent, unanimous support for a proposal, and the maximum that can be requested (in this case) is 15% of the total pool of funds. Plot series 3: Heat MapsThe next set of plots shows that conviction required increases to a maximum with the proportion of total funds requested, capping out (in this case) at 15% of total funds available. Note that since we are using **relative** funds requested, these plots are invariant to alpha and effective supply. (In other words, since we are only looking at funds requested relative to the total funds, which are both affected by changes in alpha or effective supply, our conviction required for relative funds requested remains unchanged.)
###Code
params
supply_sweep = trigger_sweep('effective_supply',trigger, params, supply)
alpha_sweep = trigger_sweep('alpha',trigger, params, supply)
trigger_grid(supply_sweep, alpha_sweep)
###Output
_____no_output_____
###Markdown
 The Trigger Function Transforming Continuous Preferences into Discrete EventsThis notebook is a mathematical deep dive into the derivation of the Trigger Function used in Conviction Voting for the 1Hive use case.The role of the trigger function in the conviction voting algorithm is to determine if a sufficient amount of conviction has accumulated in support of a particular proposal, at which point it passes from being a candidate proposal to an active proposal. In the 1Hive use case for conviction, proposals map to precise quantities of resources $r$ requested from a communal resource pool $R$ (which is time varying $R_t$ but we will drop the subscript for ease of reading). Furthermore, there is a supply of governance tokens $S$ which are being used as part of the governance process. In the implementation the quantity $S$ will be the effective supply which is the subset of the total Supply for the governance token in question. We assume a time varying supply $S_t$ and therefore we can interpret $S_t$ as the effective supply without loss of generality. From here forward, we will drop the subscript and refer to $S$ for ease of reading. The process of passing a proposal results in an allocation of $r$ funds as shown in the figure below.The trigger function is characterized by a set of parameters in addition to the current state of the system: $R$ and $S$. Those parameters are $\alpha$, $\beta$ and $\rho$.* $\alpha \in (0,1)$ is the conviction rate parameter defined in the [Deriving Alpha notebook](https://nbviewer.jupyter.org/github/BlockScience/Aragon_Conviction_Voting/blob/master/models/v3/Deriving_Alpha.ipynb) and should be tuned according to a desired half life.* $\beta\in (0,1)$ is the asymptotic limit for the trigger function. It is impossible to discharge more than $\beta$ share of funds. * $\rho \in (0, \beta^2)$ is the scale factor for the trigger function. Note that we require $0<\rho <\beta^2$ The trigger function is defined by: $y^*(r) = \frac{\rho S}{(1-\alpha)\left(\beta - \frac{r}{R}\right)^2 }$The geometric properties of this function with respect to the parameter choices are shown here:On this plot we can see that there is a maximum conviction that can be reached for a proposal, and also a maximum achievable funds released for a single proposal, which are important bounds for a community to establish for their funding pool.Note that by requiring that: $0<\rho <\beta^2$ the following holds $0<\frac{\rho}{\beta^2}<1$ and $0<\beta - \sqrt \rho <\beta <1$
###Code
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
from cadCAD.configuration.utils import config_sim
from model.parts.utils import *
from model.parts.sys_params import *
###Output
_____no_output_____
###Markdown
 Reader Tutorial:Feel free to pull parameters out of the existing files or use this notebook to overwrite them with your own choices to see how the plots are affected.
###Code
initial_values
params
supply = initial_values['supply']
funds = initial_values['funds']
alpha = params['alpha'][0]
beta = params['beta'][0]
rho = params['rho'][0]
def trigger(requested, funds, supply, alpha, beta, rho):
'''
Function that determines threshold for proposals being accepted.
Refactored slightly from built in to be explicit for demo
'''
share = requested/funds
if share < beta:
threshold = rho*supply/(beta-share)**2 * 1/(1-alpha)
return threshold
else:
return np.inf
###Output
_____no_output_____
###Markdown
Simple derivations:We can plug in some boundary conditions to determine our minimum required and maximum achievable conviction. We can also determine the maximum achievable funds a proposal is able to request, to understand the upper bounds of individual proposal funding.* min_required_conviction = $y^*(0) = \frac{\rho S}{(1-\alpha)\beta^2}$* max_achievable_conviction = $\frac{S}{1-\alpha}$* min_required_conviction_as_a_share_of_max = $\frac{\rho S}{(1-\alpha)\beta^2} \cdot \frac{1-\alpha}{S} = \frac{\rho}{\beta^2}$* To compute the max_achievable_request solve: $\frac{S}{1-\alpha} = \frac{\rho S}{(1-\alpha)\left(\beta-\frac{r}{R}\right)^2}$* max_achievable_request = $r = (\beta -\sqrt\rho)F$
###Code
min_required_conviction = trigger(0, funds, supply, alpha, beta, rho)
print("min_required_conviction ="+str(min_required_conviction))
max_achievable_conviction = supply/(1-alpha)
print("max_achievable_conviction ="+str(max_achievable_conviction))
print("")
print("min_achievable_conviction_as_a_share_of_max_achievable_conviction ="+str(min_required_conviction/max_achievable_conviction))
print("")
max_request = beta*funds
max_achievable_request = (beta - np.sqrt(rho))*funds
print("max_achievable_request ="+str(max_achievable_request))
print("total_funds ="+str(funds))
print("")
print("max_achievable_request_as_a_share_of_funds ="+str(max_achievable_request/funds))
granularity = 100
requests = np.arange(0,.9*max_request, max_request/granularity)
requests_as_share_of_funds = requests/funds
conviction_required = np.array([trigger(r, funds, supply, alpha, beta, rho) for r in requests])
conviction_required_as_share_of_max = conviction_required/max_achievable_conviction
###Output
min_required_conviction =6783.893932236272
max_achievable_conviction =108542.30291578037
min_achievable_conviction_as_a_share_of_max_achievable_conviction =0.06249999999999999
max_achievable_request =730.0815000000001
total_funds =4867.21
max_achievable_request_as_a_share_of_funds =0.15000000000000002
###Markdown
Plot series 1: "Absolute Terms" These plots demonstrate the trigger function based on alpha, Supply and Funds as above.
###Code
plt.plot(requests, conviction_required)
ax= plt.gca().axis()
plt.vlines(max_request, 0, ax[3], 'r', '--')
plt.vlines(max_achievable_request, 0, ax[3], 'g', '--')
plt.hlines(max_achievable_conviction, 0, max_request, 'g', '--')
plt.hlines(min_required_conviction, 0, max_request, 'k', '--')
plt.title("Sample Trigger Function in Absolute Terms; Linear Scale")
plt.xlabel("Resources Requested")
plt.ylabel("Conviction Required to Pass")
plt.figure(figsize=(10, 7))
plt.plot(requests, conviction_required)
ax= plt.gca().axis()
plt.vlines(max_request, 0, ax[3], 'r', '--')
plt.vlines(max_achievable_request, 0, ax[3], 'g', '--')
plt.hlines(max_achievable_conviction, 0, max_request, 'g', '--')
plt.hlines(min_required_conviction, 0, max_request, 'k', '--')
plt.title("Sample Trigger Function in Absolute Terms; Linear Scale")
plt.xlabel("Resources Requested")
plt.ylabel("Conviction Required to Pass")
plt.gca().set_ylim(0, max_achievable_conviction*(1.1))
plt.plot(requests, conviction_required)
ax= plt.gca().axis()
plt.vlines(max_request, 0, ax[3], 'r', '--')
plt.vlines(max_achievable_request, 0, ax[3], 'g', '--')
plt.hlines(max_achievable_conviction, 0, max_request, 'g', '--')
plt.hlines(min_required_conviction, 0, max_request, 'k', '--')
plt.title("Sample Trigger Function in Absolute Terms; Log Scale")
plt.xlabel("Resources Requested")
plt.ylabel("Conviction Required to Pass")
plt.gca().set_yscale('log')
plt.figure(figsize=(10, 7))
plt.plot(requests, conviction_required)
ax= plt.gca().axis()
plt.vlines(max_request, 0, ax[3], 'r', '--')
plt.vlines(max_achievable_request, 0, ax[3], 'g', '--')
plt.hlines(max_achievable_conviction, 0, max_request, 'g', '--')
plt.hlines(min_required_conviction, 0, max_request, 'k', '--')
plt.title("Sample Trigger Function in Absolute Terms; Log Scale")
plt.xlabel("Resources Requested")
plt.ylabel("Conviction Required to Pass")
plt.gca().set_yscale('log')
plt.gca().set_ylim(min_required_conviction/2, max_achievable_conviction*2)
###Output
_____no_output_____
###Markdown
Plot series 2: "Relative Terms" This set of plots looks at what happens when we knock out the dependence on alpha and supply, as well as treating requests as share of total funds.
###Code
plt.plot(requests_as_share_of_funds, conviction_required_as_share_of_max)
ax= plt.gca().axis()
plt.vlines(max_request/funds, 0, ax[3], 'r', '--')
plt.vlines(max_achievable_request/funds, 0, ax[3], 'g', '--')
plt.hlines(1, 0, max_request/funds, 'g', '--')
plt.hlines(min_required_conviction/max_achievable_conviction, 0, max_request/funds, 'k', '--')
plt.title("Sample Trigger Function in Relative Terms; Linear Scale")
plt.xlabel("Resources Requested as a share of Total Funds")
plt.ylabel("Conviction Required to Pass as share of max achievable")
plt.figure(figsize=(10, 7))
plt.plot(requests_as_share_of_funds, conviction_required_as_share_of_max)
ax= plt.gca().axis()
plt.vlines(max_request/funds, 0, ax[3], 'r', '--')
plt.vlines(max_achievable_request/funds, 0, ax[3], 'g', '--')
plt.hlines(1, 0, max_request/funds, 'g', '--')
plt.hlines(min_required_conviction/max_achievable_conviction, 0, max_request/funds, 'k', '--')
plt.title("Sample Trigger Function in Relative Terms; Linear Scale")
plt.xlabel("Resources Requested as a share of Total Funds")
plt.ylabel("Conviction Required to Pass as share of max achievable")
plt.gca().set_ylim(0, 1.1)
plt.plot(requests_as_share_of_funds, conviction_required_as_share_of_max)
ax= plt.gca().axis()
plt.vlines(max_request/funds, 0, ax[3], 'r', '--')
plt.vlines(max_achievable_request/funds, 0, ax[3], 'g', '--')
plt.hlines(1, 0, max_request/funds, 'g', '--')
plt.hlines(min_required_conviction/max_achievable_conviction, 0, max_request/funds, 'k', '--')
plt.title("Sample Trigger Function in Relative Terms; Log Scale")
plt.xlabel("Resources Requested as a share of Total Funds")
plt.ylabel("Conviction Required to Pass as share of max achievable")
plt.gca().set_yscale('log')
plt.figure(figsize=(10, 7))
plt.plot(requests_as_share_of_funds, conviction_required_as_share_of_max)
ax= plt.gca().axis()
plt.vlines(max_request/funds, 0, ax[3], 'r', '--')
plt.vlines(max_achievable_request/funds, 0, ax[3], 'g', '--')
plt.hlines(1, 0, max_request/funds, 'g', '--')
plt.hlines(min_required_conviction/max_achievable_conviction, 0, max_request/funds, 'k', '--')
plt.title("Sample Trigger Function in Relative Terms; Log Scale")
plt.xlabel("Resources Requested as a share of Total Funds")
plt.ylabel("Conviction Required to Pass as share of max achievable")
plt.gca().set_yscale('log')
plt.gca().set_ylim(min_required_conviction/max_achievable_conviction/2,2)
###Output
_____no_output_____
###Markdown
 Plot series 3: Heat MapsThe next set of plots shows the simultaneous variation of multiple parameters with a focus on alpha and supply. Note that I am using the params stored in the supporting files; these won't have changed even if you have edited the plots above.
###Code
params
supply_sweep = trigger_sweep('effective_supply',trigger, params, supply)
alpha_sweep = trigger_sweep('alpha',trigger, params, supply)
trigger_grid(supply_sweep, alpha_sweep)
###Output
_____no_output_____ |
demo/2021-06-22/udpipe-lzh.ipynb | ###Markdown
 [Universal Dependencies around the world and dependency parsing tools](http://kanji.zinbun.kyoto-u.ac.jp/~yasuoka/publications/2021-06-22.pdf) [Building your own dependency parser with Classical Chinese UD](https://koichiyasuoka.github.io/deplacy/demo/2021-06-22/) When using [UDPipe](https://ufal.mff.cuni.cz/udpipe/1/install) Preparing the required packages and train.conllu for training
###Code
!test -f udpipe-1.2.0-bin.zip || curl -LO https://github.com/ufal/udpipe/releases/download/v1.2.0/udpipe-1.2.0-bin.zip
!test -d udpipe-1.2.0-bin || unzip udpipe-1.2.0-bin.zip
!test -f /usr/local/bin/udpipe || cp udpipe-1.2.0-bin/bin-linux64/udpipe /usr/local/bin/udpipe && chmod 755 /usr/local/bin/udpipe
!test -d UD_Classical_Chinese-Kyoto || git clone --depth=1 https://github.com/universaldependencies/UD_Classical_Chinese-Kyoto
!test -f train.conllu || ln -s UD_Classical_Chinese-Kyoto/lzh_kyoto-ud-train.conllu train.conllu
!pip install ufal.udpipe deplacy
###Output
_____no_output_____
###Markdown
 Creating my.udpipe (takes about 3 hours)
###Code
!udpipe --train my.udpipe train.conllu
###Output
_____no_output_____
###Markdown
 Dependency parsing with the C++ version
###Code
!echo 不入虎穴不得虎子 | udpipe --tokenizer=presegmented --tag --parse my.udpipe
###Output
_____no_output_____
###Markdown
 Dependency parsing with the Python version
###Code
import ufal.udpipe
mdl = ufal.udpipe.Model.load("my.udpipe")
nlp = ufal.udpipe.Pipeline(mdl, "tokenizer=presegmented", "", "", "").process
doc = nlp("不入虎穴不得虎子")
print(doc)
import deplacy
deplacy.serve(doc,port=None)
###Output
_____no_output_____
###Markdown
 [Bonus] Using [UD-Kanbun](https://github.com/KoichiYasuoka/UD-Kanbun) as a substitute instead of creating my.udpipe
###Code
!pip install udkanbun
import udkanbun, os
u = os.path.join(udkanbun.PACKAGE_DIR, "ud-kanbun.udpipe")
os.symlink(u, "my.udpipe")
###Output
_____no_output_____ |
docs/source/notebooks/marginalized_gaussian_mixture_model.ipynb | ###Markdown
Marginalized Gaussian Mixture ModelAuthor: [Austin Rochford](http://austinrochford.com)
###Code
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import pymc3 as pm
import seaborn as sns
SEED = 383561
np.random.seed(SEED) # from random.org, for reproducibility
###Output
_____no_output_____
###Markdown
Gaussian mixtures are a flexible class of models for data that exhibits subpopulation heterogeneity. A toy example of such a data set is shown below.
###Code
N = 1000
W = np.array([0.35, 0.4, 0.25])
MU = np.array([0., 2., 5.])
SIGMA = np.array([0.5, 0.5, 1.])
component = np.random.choice(MU.size, size=N, p=W)
x = np.random.normal(MU[component], SIGMA[component], size=N)
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(x, bins=30, normed=True, lw=0);
###Output
/opt/conda/lib/python3.5/site-packages/matplotlib/font_manager.py:1297: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans
(prop.get_family(), self.defaultFamily[fontext]))
###Markdown
 A natural parameterization of the Gaussian mixture model is as the [latent variable model](https://en.wikipedia.org/wiki/Latent_variable_model)$$\begin{align*}\mu_1, \ldots, \mu_K & \sim N(0, \sigma^2) \\\tau_1, \ldots, \tau_K & \sim \textrm{Gamma}(a, b) \\\boldsymbol{w} & \sim \textrm{Dir}(\boldsymbol{\alpha}) \\z\ |\ \boldsymbol{w} & \sim \textrm{Cat}(\boldsymbol{w}) \\x\ |\ z & \sim N(\mu_z, \tau^{-1}_z).\end{align*}$$An implementation of this parameterization in PyMC3 is available [here](gaussian_mixture_model.ipynb). A drawback of this parameterization is that its posterior relies on sampling the discrete latent variable $z$. This reliance can cause slow mixing and ineffective exploration of the tails of the distribution.An alternative, equivalent parameterization that addresses these problems is to marginalize over $z$. The marginalized model is$$\begin{align*}\mu_1, \ldots, \mu_K & \sim N(0, \sigma^2) \\\tau_1, \ldots, \tau_K & \sim \textrm{Gamma}(a, b) \\\boldsymbol{w} & \sim \textrm{Dir}(\boldsymbol{\alpha}) \\f(x\ |\ \boldsymbol{w}) & = \sum_{i = 1}^K w_i\ N(x\ |\ \mu_i, \tau^{-1}_i),\end{align*}$$where$$N(x\ |\ \mu, \sigma^2) = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left(-\frac{1}{2 \sigma^2} (x - \mu)^2\right)$$is the probability density function of the normal distribution.Marginalizing $z$ out of the model generally leads to faster mixing and better exploration of the tails of the posterior distribution. Marginalization over discrete parameters is a common trick in the [Stan](http://mc-stan.org/) community, since Stan does not support sampling from discrete distributions. For further details on marginalization and several worked examples, see the [_Stan User's Guide and Reference Manual_](http://www.uvm.edu/~bbeckage/Teaching/DataAnalysis/Manuals/stan-reference-2.8.0.pdf).PyMC3 supports marginalized Gaussian mixture models through its `NormalMixture` class. (It also supports marginalized general mixture models through its `Mixture` class.) Below we specify and fit a marginalized Gaussian mixture model to this data in PyMC3.
###Code
with pm.Model() as model:
w = pm.Dirichlet('w', np.ones_like(W))
mu = pm.Normal('mu', 0., 10., shape=W.size)
tau = pm.Gamma('tau', 1., 1., shape=W.size)
x_obs = pm.NormalMixture('x_obs', w, mu, tau=tau, observed=x)
with model:
trace = pm.sample(5000, n_init=10000, tune=1000, random_seed=SEED)[1000:]
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using advi...
Average ELBO = -6,663.8: 100%|██████████| 10000/10000 [00:06<00:00, 1582.50it/s]
Finished [100%]: Average ELBO = -6,582.7
100%|██████████| 5000/5000 [-1:54:12<00:00, -0.07s/it]
###Markdown
We see in the following plot that the posterior distribution on the weights and the component means has captured the true value quite well.
###Code
pm.traceplot(trace, varnames=['w', 'mu']);
pm.plot_posterior(trace, varnames=['w', 'mu']);
###Output
/opt/conda/lib/python3.5/site-packages/matplotlib/font_manager.py:1297: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans
(prop.get_family(), self.defaultFamily[fontext]))
###Markdown
We can also sample from the model's posterior predictive distribution, as follows.
###Code
with model:
ppc_trace = pm.sample_ppc(trace, 5000, random_seed=SEED)
###Output
100%|██████████| 5000/5000 [03:28<00:00, 23.93it/s]
###Markdown
We see that the posterior predictive samples have a distribution quite close to that of the observed data.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(x, bins=30, normed=True,
histtype='step', lw=2,
label='Observed data');
ax.hist(ppc_trace['x_obs'], bins=30, normed=True,
histtype='step', lw=2,
label='Posterior predictive distribution');
ax.legend(loc=1);
###Output
/opt/conda/lib/python3.5/site-packages/matplotlib/font_manager.py:1297: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans
(prop.get_family(), self.defaultFamily[fontext]))
###Markdown
Marginalized Gaussian Mixture ModelAuthor: [Austin Rochford](http://austinrochford.com)
###Code
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import pymc3 as pm
import seaborn as sns
SEED = 383561
np.random.seed(SEED) # from random.org, for reproducibility
###Output
_____no_output_____
###Markdown
Gaussian mixtures are a flexible class of models for data that exhibits subpopulation heterogeneity. A toy example of such a data set is shown below.
###Code
N = 1000
W = np.array([0.35, 0.4, 0.25])
MU = np.array([0., 2., 5.])
SIGMA = np.array([0.5, 0.5, 1.])
component = np.random.choice(MU.size, size=N, p=W)
x = np.random.normal(MU[component], SIGMA[component], size=N)
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(x, bins=30, normed=True, lw=0);
###Output
/opt/conda/lib/python3.5/site-packages/matplotlib/font_manager.py:1297: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans
(prop.get_family(), self.defaultFamily[fontext]))
###Markdown
 A natural parameterization of the Gaussian mixture model is as the [latent variable model](https://en.wikipedia.org/wiki/Latent_variable_model)$$\begin{align*}\mu_1, \ldots, \mu_K & \sim N(0, \sigma^2) \\\tau_1, \ldots, \tau_K & \sim \textrm{Gamma}(a, b) \\\boldsymbol{w} & \sim \textrm{Dir}(\boldsymbol{\alpha}) \\z\ |\ \boldsymbol{w} & \sim \textrm{Cat}(\boldsymbol{w}) \\x\ |\ z & \sim N(\mu_z, \tau^{-1}_z).\end{align*}$$An implementation of this parameterization in PyMC3 is available [here](gaussian_mixture_model.ipynb). A drawback of this parameterization is that its posterior relies on sampling the discrete latent variable $z$. This reliance can cause slow mixing and ineffective exploration of the tails of the distribution.An alternative, equivalent parameterization that addresses these problems is to marginalize over $z$. The marginalized model is$$\begin{align*}\mu_1, \ldots, \mu_K & \sim N(0, \sigma^2) \\\tau_1, \ldots, \tau_K & \sim \textrm{Gamma}(a, b) \\\boldsymbol{w} & \sim \textrm{Dir}(\boldsymbol{\alpha}) \\f(x\ |\ \boldsymbol{w}) & = \sum_{i = 1}^K w_i\ N(x\ |\ \mu_i, \tau^{-1}_i),\end{align*}$$where$$N(x\ |\ \mu, \sigma^2) = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left(-\frac{1}{2 \sigma^2} (x - \mu)^2\right)$$is the probability density function of the normal distribution.Marginalizing $z$ out of the model generally leads to faster mixing and better exploration of the tails of the posterior distribution. Marginalization over discrete parameters is a common trick in the [Stan](http://mc-stan.org/) community, since Stan does not support sampling from discrete distributions. For further details on marginalization and several worked examples, see the [_Stan User's Guide and Reference Manual_](http://www.uvm.edu/~bbeckage/Teaching/DataAnalysis/Manuals/stan-reference-2.8.0.pdf).PyMC3 supports marginalized Gaussian mixture models through its `NormalMixture` class. (It also supports marginalized general mixture models through its `Mixture` class.) Below we specify and fit a marginalized Gaussian mixture model to this data in PyMC3.
###Code
with pm.Model() as model:
w = pm.Dirichlet('w', np.ones_like(W))
mu = pm.Normal('mu', 0., 10., shape=W.size)
tau = pm.Gamma('tau', 1., 1., shape=W.size)
x_obs = pm.NormalMixture('x_obs', w, mu, tau=tau, observed=x)
with model:
trace = pm.sample(5000, n_init=10000, tune=1000, random_seed=SEED)[1000:]
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using advi...
Average ELBO = -6,663.8: 100%|██████████| 10000/10000 [00:06<00:00, 1582.50it/s]
Finished [100%]: Average ELBO = -6,582.7
100%|██████████| 5000/5000 [-1:54:12<00:00, -0.07s/it]
###Markdown
We see in the following plot that the posterior distribution on the weights and the component means has captured the true value quite well.
###Code
pm.traceplot(trace, varnames=['w', 'mu']);
pm.plot_posterior(trace, varnames=['w', 'mu']);
###Output
/opt/conda/lib/python3.5/site-packages/matplotlib/font_manager.py:1297: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans
(prop.get_family(), self.defaultFamily[fontext]))
###Markdown
We can also sample from the model's posterior predictive distribution, as follows.
###Code
with model:
ppc_trace = pm.sample_ppc(trace, 5000, random_seed=SEED)
###Output
100%|██████████| 5000/5000 [03:28<00:00, 23.93it/s]
###Markdown
We see that the posterior predictive samples have a distribution quite close to that of the observed data.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(x, bins=30, normed=True,
histtype='step', lw=2,
label='Observed data');
ax.hist(ppc_trace['x_obs'], bins=30, normed=True,
histtype='step', lw=2,
label='Posterior predictive distribution');
ax.legend(loc=1);
###Output
/opt/conda/lib/python3.5/site-packages/matplotlib/font_manager.py:1297: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans
(prop.get_family(), self.defaultFamily[fontext]))
###Markdown
Marginalized Gaussian Mixture ModelAuthor: [Austin Rochford](http://austinrochford.com)
###Code
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import pymc3 as pm
import seaborn as sns
SEED = 383561
np.random.seed(SEED) # from random.org, for reproducibility
###Output
_____no_output_____
###Markdown
Gaussian mixtures are a flexible class of models for data that exhibits subpopulation heterogeneity. A toy example of such a data set is shown below.
###Code
N = 1000
W = np.array([0.35, 0.4, 0.25])
MU = np.array([0., 2., 5.])
SIGMA = np.array([0.5, 0.5, 1.])
component = np.random.choice(MU.size, size=N, p=W)
x = np.random.normal(MU[component], SIGMA[component], size=N)
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(x, bins=30, normed=True, lw=0);
###Output
/home/junpenglao/anaconda3/lib/python3.6/site-packages/matplotlib/axes/_axes.py:6521: MatplotlibDeprecationWarning:
The 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead.
alternative="'density'", removal="3.1")
###Markdown
 A natural parameterization of the Gaussian mixture model is as the [latent variable model](https://en.wikipedia.org/wiki/Latent_variable_model)$$\begin{align*}\mu_1, \ldots, \mu_K & \sim N(0, \sigma^2) \\\tau_1, \ldots, \tau_K & \sim \textrm{Gamma}(a, b) \\\boldsymbol{w} & \sim \textrm{Dir}(\boldsymbol{\alpha}) \\z\ |\ \boldsymbol{w} & \sim \textrm{Cat}(\boldsymbol{w}) \\x\ |\ z & \sim N(\mu_z, \tau^{-1}_z).\end{align*}$$An implementation of this parameterization in PyMC3 is available [here](gaussian_mixture_model.ipynb). A drawback of this parameterization is that its posterior relies on sampling the discrete latent variable $z$. This reliance can cause slow mixing and ineffective exploration of the tails of the distribution.An alternative, equivalent parameterization that addresses these problems is to marginalize over $z$. The marginalized model is$$\begin{align*}\mu_1, \ldots, \mu_K & \sim N(0, \sigma^2) \\\tau_1, \ldots, \tau_K & \sim \textrm{Gamma}(a, b) \\\boldsymbol{w} & \sim \textrm{Dir}(\boldsymbol{\alpha}) \\f(x\ |\ \boldsymbol{w}) & = \sum_{i = 1}^K w_i\ N(x\ |\ \mu_i, \tau^{-1}_i),\end{align*}$$where$$N(x\ |\ \mu, \sigma^2) = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left(-\frac{1}{2 \sigma^2} (x - \mu)^2\right)$$is the probability density function of the normal distribution.Marginalizing $z$ out of the model generally leads to faster mixing and better exploration of the tails of the posterior distribution. Marginalization over discrete parameters is a common trick in the [Stan](http://mc-stan.org/) community, since Stan does not support sampling from discrete distributions. For further details on marginalization and several worked examples, see the [_Stan User's Guide and Reference Manual_](http://www.uvm.edu/~bbeckage/Teaching/DataAnalysis/Manuals/stan-reference-2.8.0.pdf).PyMC3 supports marginalized Gaussian mixture models through its `NormalMixture` class. (It also supports marginalized general mixture models through its `Mixture` class.) Below we specify and fit a marginalized Gaussian mixture model to this data in PyMC3.
###Code
with pm.Model() as model:
w = pm.Dirichlet('w', np.ones_like(W))
mu = pm.Normal('mu', 0., 10., shape=W.size)
tau = pm.Gamma('tau', 1., 1., shape=W.size)
x_obs = pm.NormalMixture('x_obs', w, mu, tau=tau, observed=x)
with model:
trace = pm.sample(5000, n_init=10000, tune=1000, random_seed=SEED)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [tau, mu, w]
Sampling 2 chains: 100%|██████████| 12000/12000 [00:27<00:00, 432.06draws/s]
The gelman-rubin statistic is larger than 1.4 for some parameters. The sampler did not converge.
The estimated number of effective samples is smaller than 200 for some parameters.
###Markdown
We see in the following plot that the posterior distribution on the weights and the component means has captured the true value quite well.
###Code
pm.traceplot(trace, var_names=['w', 'mu']);
pm.plot_posterior(trace, var_names=['w', 'mu']);
###Output
_____no_output_____
###Markdown
We can also sample from the model's posterior predictive distribution, as follows.
###Code
with model:
ppc_trace = pm.sample_posterior_predictive(trace, 5000, random_seed=SEED)
###Output
100%|██████████| 5000/5000 [02:09<00:00, 38.54it/s]
###Markdown
We see that the posterior predictive samples have a distribution quite close to that of the observed data.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(ppc_trace['x_obs'], bins=30, density=True,
histtype='step', lw=2,
color=['.5'] * ppc_trace['x_obs'].shape[1],
alpha=.05,
label='Posterior predictive distribution');
ax.hist(x, bins=30, density=True,
histtype='step', lw=2,
label='Observed data');
ax.legend(loc=1);
###Output
_____no_output_____
###Markdown
Marginalized Gaussian Mixture ModelAuthor: [Austin Rochford](http://austinrochford.com)
###Code
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import pymc3 as pm
import seaborn as sns
SEED = 383561
np.random.seed(SEED) # from random.org, for reproducibility
###Output
_____no_output_____
###Markdown
Gaussian mixtures are a flexible class of models for data that exhibits subpopulation heterogeneity. A toy example of such a data set is shown below.
###Code
N = 1000
W = np.array([0.35, 0.4, 0.25])
MU = np.array([0., 2., 5.])
SIGMA = np.array([0.5, 0.5, 1.])
component = np.random.choice(MU.size, size=N, p=W)
x = np.random.normal(MU[component], SIGMA[component], size=N)
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(x, bins=30, normed=True, lw=0);
###Output
/opt/conda/lib/python3.5/site-packages/matplotlib/font_manager.py:1297: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans
(prop.get_family(), self.defaultFamily[fontext]))
###Markdown
 A natural parameterization of the Gaussian mixture model is as the [latent variable model](https://en.wikipedia.org/wiki/Latent_variable_model)$$\begin{align*}\mu_1, \ldots, \mu_K & \sim N(0, \sigma^2) \\\tau_1, \ldots, \tau_K & \sim \textrm{Gamma}(a, b) \\\boldsymbol{w} & \sim \textrm{Dir}(\boldsymbol{\alpha}) \\z\ |\ \boldsymbol{w} & \sim \textrm{Cat}(\boldsymbol{w}) \\x\ |\ z & \sim N(\mu_z, \tau^{-1}_z).\end{align*}$$An implementation of this parameterization in PyMC3 is available [here](gaussian_mixture_model.ipynb). A drawback of this parameterization is that its posterior relies on sampling the discrete latent variable $z$. This reliance can cause slow mixing and ineffective exploration of the tails of the distribution.An alternative, equivalent parameterization that addresses these problems is to marginalize over $z$. The marginalized model is$$\begin{align*}\mu_1, \ldots, \mu_K & \sim N(0, \sigma^2) \\\tau_1, \ldots, \tau_K & \sim \textrm{Gamma}(a, b) \\\boldsymbol{w} & \sim \textrm{Dir}(\boldsymbol{\alpha}) \\f(x\ |\ \boldsymbol{w}) & = \sum_{i = 1}^K w_i\ N(x\ |\ \mu_i, \tau^{-1}_i),\end{align*}$$where$$N(x\ |\ \mu, \sigma^2) = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left(-\frac{1}{2 \sigma^2} (x - \mu)^2\right)$$is the probability density function of the normal distribution.Marginalizing $z$ out of the model generally leads to faster mixing and better exploration of the tails of the posterior distribution. Marginalization over discrete parameters is a common trick in the [Stan](http://mc-stan.org/) community, since Stan does not support sampling from discrete distributions. For further details on marginalization and several worked examples, see the [_Stan User's Guide and Reference Manual_](http://www.uvm.edu/~bbeckage/Teaching/DataAnalysis/Manuals/stan-reference-2.8.0.pdf).PyMC3 supports marginalized Gaussian mixture models through its `NormalMixture` class. (It also supports marginalized general mixture models through its `Mixture` class.) Below we specify and fit a marginalized Gaussian mixture model to this data in PyMC3.
###Code
with pm.Model() as model:
w = pm.Dirichlet('w', np.ones_like(W))
mu = pm.Normal('mu', 0., 10., shape=W.size)
tau = pm.Gamma('tau', 1., 1., shape=W.size)
x_obs = pm.NormalMixture('x_obs', w, mu, tau=tau, observed=x)
with model:
trace = pm.sample(5000, n_init=10000, tune=1000, random_seed=SEED)[1000:]
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using advi...
Average ELBO = -6,663.8: 100%|██████████| 10000/10000 [00:06<00:00, 1582.50it/s]
Finished [100%]: Average ELBO = -6,582.7
100%|██████████| 5000/5000 [-1:54:12<00:00, -0.07s/it]
###Markdown
We see in the following plot that the posterior distribution on the weights and the component means has captured the true value quite well.
###Code
pm.traceplot(trace, varnames=['w', 'mu']);
pm.plot_posterior(trace, varnames=['w', 'mu']);
###Output
/opt/conda/lib/python3.5/site-packages/matplotlib/font_manager.py:1297: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans
(prop.get_family(), self.defaultFamily[fontext]))
###Markdown
We can also sample from the model's posterior predictive distribution, as follows.
###Code
with model:
ppc_trace = pm.sample_posterior_predictive(trace, 5000, random_seed=SEED)
###Output
100%|██████████| 5000/5000 [03:28<00:00, 23.93it/s]
###Markdown
We see that the posterior predictive samples have a distribution quite close to that of the observed data.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(x, bins=30, normed=True,
histtype='step', lw=2,
label='Observed data');
ax.hist(ppc_trace['x_obs'], bins=30, normed=True,
histtype='step', lw=2,
label='Posterior predictive distribution');
ax.legend(loc=1);
###Output
/opt/conda/lib/python3.5/site-packages/matplotlib/font_manager.py:1297: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans
(prop.get_family(), self.defaultFamily[fontext]))
###Markdown
Marginalized Gaussian Mixture ModelAuthor: [Austin Rochford](http://austinrochford.com)
###Code
%matplotlib inline
import numpy as np
import pymc3 as pm
import seaborn as sns
from matplotlib import pyplot as plt
SEED = 383561
np.random.seed(SEED) # from random.org, for reproducibility
###Output
_____no_output_____
###Markdown
Gaussian mixtures are a flexible class of models for data that exhibits subpopulation heterogeneity. A toy example of such a data set is shown below.
###Code
N = 1000
W = np.array([0.35, 0.4, 0.25])
MU = np.array([0., 2., 5.])
SIGMA = np.array([0.5, 0.5, 1.])
component = np.random.choice(MU.size, size=N, p=W)
x = np.random.normal(MU[component], SIGMA[component], size=N)
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(x, bins=30, normed=True, lw=0);
###Output
/home/junpenglao/anaconda3/lib/python3.6/site-packages/matplotlib/axes/_axes.py:6521: MatplotlibDeprecationWarning:
The 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead.
alternative="'density'", removal="3.1")
###Markdown
 A natural parameterization of the Gaussian mixture model is as the [latent variable model](https://en.wikipedia.org/wiki/Latent_variable_model)$$\begin{align*}\mu_1, \ldots, \mu_K & \sim N(0, \sigma^2) \\\tau_1, \ldots, \tau_K & \sim \textrm{Gamma}(a, b) \\\boldsymbol{w} & \sim \textrm{Dir}(\boldsymbol{\alpha}) \\z\ |\ \boldsymbol{w} & \sim \textrm{Cat}(\boldsymbol{w}) \\x\ |\ z & \sim N(\mu_z, \tau^{-1}_z).\end{align*}$$An implementation of this parameterization in PyMC3 is available [here](gaussian_mixture_model.ipynb). A drawback of this parameterization is that its posterior relies on sampling the discrete latent variable $z$. This reliance can cause slow mixing and ineffective exploration of the tails of the distribution.An alternative, equivalent parameterization that addresses these problems is to marginalize over $z$. The marginalized model is$$\begin{align*}\mu_1, \ldots, \mu_K & \sim N(0, \sigma^2) \\\tau_1, \ldots, \tau_K & \sim \textrm{Gamma}(a, b) \\\boldsymbol{w} & \sim \textrm{Dir}(\boldsymbol{\alpha}) \\f(x\ |\ \boldsymbol{w}) & = \sum_{i = 1}^K w_i\ N(x\ |\ \mu_i, \tau^{-1}_i),\end{align*}$$where$$N(x\ |\ \mu, \sigma^2) = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left(-\frac{1}{2 \sigma^2} (x - \mu)^2\right)$$is the probability density function of the normal distribution.Marginalizing $z$ out of the model generally leads to faster mixing and better exploration of the tails of the posterior distribution. Marginalization over discrete parameters is a common trick in the [Stan](http://mc-stan.org/) community, since Stan does not support sampling from discrete distributions. For further details on marginalization and several worked examples, see the [_Stan User's Guide and Reference Manual_](http://www.uvm.edu/~bbeckage/Teaching/DataAnalysis/Manuals/stan-reference-2.8.0.pdf).PyMC3 supports marginalized Gaussian mixture models through its `NormalMixture` class. (It also supports marginalized general mixture models through its `Mixture` class.) Below we specify and fit a marginalized Gaussian mixture model to this data in PyMC3.
###Code
with pm.Model() as model:
w = pm.Dirichlet('w', np.ones_like(W))
mu = pm.Normal('mu', 0., 10., shape=W.size)
tau = pm.Gamma('tau', 1., 1., shape=W.size)
x_obs = pm.NormalMixture('x_obs', w, mu, tau=tau, observed=x)
with model:
trace = pm.sample(5000, n_init=10000, tune=1000, random_seed=SEED)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [tau, mu, w]
Sampling 2 chains: 100%|██████████| 12000/12000 [00:27<00:00, 432.06draws/s]
The gelman-rubin statistic is larger than 1.4 for some parameters. The sampler did not converge.
The estimated number of effective samples is smaller than 200 for some parameters.
###Markdown
We see in the following plot that the posterior distribution on the weights and the component means has captured the true value quite well.
###Code
pm.traceplot(trace, var_names=['w', 'mu']);
pm.plot_posterior(trace, var_names=['w', 'mu']);
###Output
_____no_output_____
###Markdown
We can also sample from the model's posterior predictive distribution, as follows.
###Code
with model:
ppc_trace = pm.sample_posterior_predictive(trace, 5000, random_seed=SEED)
###Output
100%|██████████| 5000/5000 [02:09<00:00, 38.54it/s]
###Markdown
We see that the posterior predictive samples have a distribution quite close to that of the observed data.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(ppc_trace['x_obs'], bins=30, density=True,
histtype='step', lw=2,
color=['.5'] * ppc_trace['x_obs'].shape[1],
alpha=.05,
label='Posterior predictive distribution');
ax.hist(x, bins=30, density=True,
histtype='step', lw=2,
label='Observed data');
ax.legend(loc=1);
%load_ext watermark
%watermark -n -u -v -iv -w
###Output
pymc3 3.8
arviz 0.8.3
numpy 1.17.5
last updated: Thu Jun 11 2020
CPython 3.8.2
IPython 7.11.0
watermark 2.0.2
###Markdown
Marginalized Gaussian Mixture ModelAuthor: [Austin Rochford](http://austinrochford.com)
###Code
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import pymc3 as pm
import seaborn as sns
SEED = 383561
np.random.seed(SEED) # from random.org, for reproducibility
###Output
_____no_output_____
###Markdown
Gaussian mixtures are a flexible class of models for data that exhibits subpopulation heterogeneity. A toy example of such a data set is shown below.
###Code
N = 1000
W = np.array([0.35, 0.4, 0.25])
MU = np.array([0., 2., 5.])
SIGMA = np.array([0.5, 0.5, 1.])
component = np.random.choice(MU.size, size=N, p=W)
x = np.random.normal(MU[component], SIGMA[component], size=N)
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(x, bins=30, normed=True, lw=0);
###Output
/home/junpenglao/anaconda3/lib/python3.6/site-packages/matplotlib/axes/_axes.py:6521: MatplotlibDeprecationWarning:
The 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead.
alternative="'density'", removal="3.1")
###Markdown
 A natural parameterization of the Gaussian mixture model is as the [latent variable model](https://en.wikipedia.org/wiki/Latent_variable_model)$$\begin{align*}\mu_1, \ldots, \mu_K & \sim N(0, \sigma^2) \\\tau_1, \ldots, \tau_K & \sim \textrm{Gamma}(a, b) \\\boldsymbol{w} & \sim \textrm{Dir}(\boldsymbol{\alpha}) \\z\ |\ \boldsymbol{w} & \sim \textrm{Cat}(\boldsymbol{w}) \\x\ |\ z & \sim N(\mu_z, \tau^{-1}_z).\end{align*}$$An implementation of this parameterization in PyMC3 is available [here](gaussian_mixture_model.ipynb). A drawback of this parameterization is that its posterior relies on sampling the discrete latent variable $z$. This reliance can cause slow mixing and ineffective exploration of the tails of the distribution.An alternative, equivalent parameterization that addresses these problems is to marginalize over $z$. The marginalized model is$$\begin{align*}\mu_1, \ldots, \mu_K & \sim N(0, \sigma^2) \\\tau_1, \ldots, \tau_K & \sim \textrm{Gamma}(a, b) \\\boldsymbol{w} & \sim \textrm{Dir}(\boldsymbol{\alpha}) \\f(x\ |\ \boldsymbol{w}) & = \sum_{i = 1}^K w_i\ N(x\ |\ \mu_i, \tau^{-1}_i),\end{align*}$$where$$N(x\ |\ \mu, \sigma^2) = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left(-\frac{1}{2 \sigma^2} (x - \mu)^2\right)$$is the probability density function of the normal distribution.Marginalizing $z$ out of the model generally leads to faster mixing and better exploration of the tails of the posterior distribution. Marginalization over discrete parameters is a common trick in the [Stan](http://mc-stan.org/) community, since Stan does not support sampling from discrete distributions. For further details on marginalization and several worked examples, see the [_Stan User's Guide and Reference Manual_](http://www.uvm.edu/~bbeckage/Teaching/DataAnalysis/Manuals/stan-reference-2.8.0.pdf).PyMC3 supports marginalized Gaussian mixture models through its `NormalMixture` class. (It also supports marginalized general mixture models through its `Mixture` class.) Below we specify and fit a marginalized Gaussian mixture model to this data in PyMC3.
###Code
with pm.Model() as model:
w = pm.Dirichlet('w', np.ones_like(W))
mu = pm.Normal('mu', 0., 10., shape=W.size)
tau = pm.Gamma('tau', 1., 1., shape=W.size)
x_obs = pm.NormalMixture('x_obs', w, mu, tau=tau, observed=x)
with model:
trace = pm.sample(5000, n_init=10000, tune=1000, random_seed=SEED)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [tau, mu, w]
Sampling 2 chains: 100%|██████████| 12000/12000 [00:27<00:00, 432.06draws/s]
The gelman-rubin statistic is larger than 1.4 for some parameters. The sampler did not converge.
The estimated number of effective samples is smaller than 200 for some parameters.
###Markdown
We see in the following plot that the posterior distribution on the weights and the component means has captured the true value quite well.
###Code
pm.traceplot(trace, var_names=['w', 'mu']);
pm.plot_posterior(trace, var_names=['w', 'mu']);
###Output
_____no_output_____
###Markdown
We can also sample from the model's posterior predictive distribution, as follows.
###Code
with model:
ppc_trace = pm.sample_posterior_predictive(trace, 5000, random_seed=SEED)
###Output
100%|██████████| 5000/5000 [02:09<00:00, 38.54it/s]
###Markdown
We see that the posterior predictive samples have a distribution quite close to that of the observed data.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(ppc_trace['x_obs'], bins=30, density=True,
histtype='step', lw=2,
color=['.5'] * ppc_trace['x_obs'].shape[1],
alpha=.05,
label='Posterior predictive distribution');
ax.hist(x, bins=30, density=True,
histtype='step', lw=2,
label='Observed data');
ax.legend(loc=1);
%load_ext watermark
%watermark -n -u -v -iv -w
###Output
pymc3 3.8
arviz 0.8.3
numpy 1.17.5
last updated: Thu Jun 11 2020
CPython 3.8.2
IPython 7.11.0
watermark 2.0.2
###Markdown
Marginalized Gaussian Mixture ModelAuthor: [Austin Rochford](http://austinrochford.com)
###Code
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import pymc3 as pm
import seaborn as sns
SEED = 383561
np.random.seed(SEED) # from random.org, for reproducibility
###Output
_____no_output_____
###Markdown
Gaussian mixtures are a flexible class of models for data that exhibits subpopulation heterogeneity. A toy example of such a data set is shown below.
###Code
N = 1000
W = np.array([0.35, 0.4, 0.25])
MU = np.array([0., 2., 5.])
SIGMA = np.array([0.5, 0.5, 1.])
component = np.random.choice(MU.size, size=N, p=W)
x = np.random.normal(MU[component], SIGMA[component], size=N)
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(x, bins=30, normed=True, lw=0);
###Output
/opt/conda/lib/python3.5/site-packages/matplotlib/font_manager.py:1297: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans
(prop.get_family(), self.defaultFamily[fontext]))
###Markdown
 A natural parameterization of the Gaussian mixture model is as the [latent variable model](https://en.wikipedia.org/wiki/Latent_variable_model)$$\begin{align*}\mu_1, \ldots, \mu_K & \sim N(0, \sigma^2) \\\tau_1, \ldots, \tau_K & \sim \textrm{Gamma}(a, b) \\\boldsymbol{w} & \sim \textrm{Dir}(\boldsymbol{\alpha}) \\z\ |\ \boldsymbol{w} & \sim \textrm{Cat}(\boldsymbol{w}) \\x\ |\ z & \sim N(\mu_z, \tau^{-1}_z).\end{align*}$$An implementation of this parameterization in PyMC3 is available [here](http://pymc-devs.github.io/pymc3/notebooks/gaussian_mixture_model.html). A drawback of this parameterization is that its posterior relies on sampling the discrete latent variable $z$. This reliance can cause slow mixing and ineffective exploration of the tails of the distribution.An alternative, equivalent parameterization that addresses these problems is to marginalize over $z$. The marginalized model is$$\begin{align*}\mu_1, \ldots, \mu_K & \sim N(0, \sigma^2) \\\tau_1, \ldots, \tau_K & \sim \textrm{Gamma}(a, b) \\\boldsymbol{w} & \sim \textrm{Dir}(\boldsymbol{\alpha}) \\f(x\ |\ \boldsymbol{w}) & = \sum_{i = 1}^K w_i\ N(x\ |\ \mu_i, \tau^{-1}_i),\end{align*}$$where$$N(x\ |\ \mu, \sigma^2) = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left(-\frac{1}{2 \sigma^2} (x - \mu)^2\right)$$is the probability density function of the normal distribution.Marginalizing $z$ out of the model generally leads to faster mixing and better exploration of the tails of the posterior distribution. Marginalization over discrete parameters is a common trick in the [Stan](http://mc-stan.org/) community, since Stan does not support sampling from discrete distributions. For further details on marginalization and several worked examples, see the [_Stan User's Guide and Reference Manual_](http://www.uvm.edu/~bbeckage/Teaching/DataAnalysis/Manuals/stan-reference-2.8.0.pdf).PyMC3 supports marginalized Gaussian mixture models through its `NormalMixture` class. (It also supports marginalized general mixture models through its `Mixture` class.) Below we specify and fit a marginalized Gaussian mixture model to this data in PyMC3.
###Code
with pm.Model() as model:
w = pm.Dirichlet('w', np.ones_like(W))
mu = pm.Normal('mu', 0., 10., shape=W.size)
tau = pm.Gamma('tau', 1., 1., shape=W.size)
x_obs = pm.NormalMixture('x_obs', w, mu, tau=tau, observed=x)
with model:
trace = pm.sample(5000, n_init=10000, tune=1000, random_seed=SEED)[1000:]
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using advi...
Average ELBO = -6,663.8: 100%|██████████| 10000/10000 [00:06<00:00, 1582.50it/s]
Finished [100%]: Average ELBO = -6,582.7
100%|██████████| 5000/5000 [-1:54:12<00:00, -0.07s/it]
###Markdown
We see in the following plot that the posterior distribution on the weights and the component means has captured the true value quite well.
###Code
pm.traceplot(trace, varnames=['w', 'mu']);
pm.plot_posterior(trace, varnames=['w', 'mu']);
###Output
/opt/conda/lib/python3.5/site-packages/matplotlib/font_manager.py:1297: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans
(prop.get_family(), self.defaultFamily[fontext]))
###Markdown
We can also sample from the model's posterior predictive distribution, as follows.
###Code
with model:
ppc_trace = pm.sample_ppc(trace, 5000, random_seed=SEED)
###Output
100%|██████████| 5000/5000 [03:28<00:00, 23.93it/s]
###Markdown
We see that the posterior predictive samples have a distribution quite close to that of the observed data.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(x, bins=30, normed=True,
histtype='step', lw=2,
label='Observed data');
ax.hist(ppc_trace['x_obs'], bins=30, normed=True,
histtype='step', lw=2,
label='Posterior predictive distribution');
ax.legend(loc=1);
###Output
/opt/conda/lib/python3.5/site-packages/matplotlib/font_manager.py:1297: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans
(prop.get_family(), self.defaultFamily[fontext]))
###Markdown
Marginalized Gaussian Mixture ModelAuthor: [Austin Rochford](http://austinrochford.com)
###Code
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import pymc3 as pm
import seaborn as sns
SEED = 383561
np.random.seed(SEED) # from random.org, for reproducibility
###Output
_____no_output_____
###Markdown
Gaussian mixtures are a flexible class of models for data that exhibits subpopulation heterogeneity. A toy example of such a data set is shown below.
###Code
N = 1000
W = np.array([0.35, 0.4, 0.25])
MU = np.array([0., 2., 5.])
SIGMA = np.array([0.5, 0.5, 1.])
component = np.random.choice(MU.size, size=N, p=W)
x = np.random.normal(MU[component], SIGMA[component], size=N)
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(x, bins=30, normed=True, lw=0);
###Output
_____no_output_____
###Markdown
 A natural parameterization of the Gaussian mixture model is as the [latent variable model](https://en.wikipedia.org/wiki/Latent_variable_model)$$\begin{align*}\mu_1, \ldots, \mu_K & \sim N(0, \sigma^2) \\\tau_1, \ldots, \tau_K & \sim \textrm{Gamma}(a, b) \\\boldsymbol{w} & \sim \textrm{Dir}(\boldsymbol{\alpha}) \\z\ |\ \boldsymbol{w} & \sim \textrm{Cat}(\boldsymbol{w}) \\x\ |\ z & \sim N(\mu_z, \tau^{-1}_z).\end{align*}$$An implementation of this parameterization in PyMC3 is available [here](http://pymc-devs.github.io/pymc3/notebooks/gaussian_mixture_model.html). A drawback of this parameterization is that its posterior relies on sampling the discrete latent variable $z$. This reliance can cause slow mixing and ineffective exploration of the tails of the distribution.An alternative, equivalent parameterization that addresses these problems is to marginalize over $z$. The marginalized model is$$\begin{align*}\mu_1, \ldots, \mu_K & \sim N(0, \sigma^2) \\\tau_1, \ldots, \tau_K & \sim \textrm{Gamma}(a, b) \\\boldsymbol{w} & \sim \textrm{Dir}(\boldsymbol{\alpha}) \\f(x\ |\ \boldsymbol{w}) & = \sum_{i = 1}^K w_i\ N(x\ |\ \mu_i, \tau^{-1}_i),\end{align*}$$where$$N(x\ |\ \mu, \sigma^2) = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left(-\frac{1}{2 \sigma^2} (x - \mu)^2\right)$$is the probability density function of the normal distribution.Marginalizing $z$ out of the model generally leads to faster mixing and better exploration of the tails of the posterior distribution. Marginalization over discrete parameters is a common trick in the [Stan](http://mc-stan.org/) community, since Stan does not support sampling from discrete distributions. For further details on marginalization and several worked examples, see the [_Stan User's Guide and Reference Manual_](http://www.uvm.edu/~bbeckage/Teaching/DataAnalysis/Manuals/stan-reference-2.8.0.pdf).PyMC3 supports marginalized Gaussian mixture models through its `NormalMixture` class. (It also supports marginalized general mixture models through its `Mixture` class.) Below we specify and fit a marginalized Gaussian mixture model to this data in PyMC3.
###Code
with pm.Model() as model:
w = pm.Dirichlet('w', np.ones_like(W))
mu = pm.Normal('mu', 0., 10., shape=W.size)
tau = pm.Gamma('tau', 1., 1., shape=W.size)
x_obs = pm.NormalMixture('x_obs', w, mu, tau=tau, observed=x)
with model:
step = pm.Metropolis()
trace_ = pm.sample(20000, step, random_seed=SEED)
trace = trace_[10000::10]
###Output
100%|██████████| 20000/20000 [00:23<00:00, 835.32it/s]
###Markdown
We see in the following plot that the posterior distribution on the weights and the component means has captured the true value quite well.
###Code
pm.traceplot(trace, varnames=['w', 'mu']);
###Output
_____no_output_____
###Markdown
We can also sample from the model's posterior predictive distribution, as follows.
###Code
with model:
ppc_trace = pm.sample_ppc(trace, 5000, random_seed=SEED)
###Output
100%|██████████| 5000/5000 [00:41<00:00, 120.43it/s]
###Markdown
We see that the posterior predictive samples have a distribution quite close to that of the observed data.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(x, bins=30, normed=True,
histtype='step', lw=2,
label='Observed data');
ax.hist(ppc_trace['x_obs'], bins=30, normed=True,
histtype='step', lw=2,
label='Posterior predictive distribution');
ax.legend(loc=1);
###Output
_____no_output_____ |
docs/32_tiled_image_processing/tiled_object_measurements.ipynb | ###Markdown
Measurements of objects in tiled images For some image analysis tasks it is possible to work around limitations such as those of tiled connected-component labeling. For example, when measuring the size of objects that are bounded in size, it is not necessary to combine intermediate image-processing results into one big image. We can measure object properties for all objects in each tile and then combine the results of the quantification (a small sketch of this idea follows the first image below).
###Code
import numpy as np
import dask
import dask.array as da
from skimage.data import cells3d
from skimage.io import imread
import pyclesperanto_prototype as cle
from pyclesperanto_prototype import imshow
###Output
_____no_output_____
###Markdown
Our starting point is again a binary image showing segmented objects.
###Code
image = imread("../../data/blobs.tif") > 128
imshow(image)
###Output
_____no_output_____
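###Markdown
As a side note, the idea mentioned in the introduction, measuring objects per tile and combining the resulting measurement tables instead of combining images, could look like the following purely illustrative sketch (assuming scikit-image and pandas are available). Note that objects touching tile borders are still cut here, which is exactly the issue we deal with below.
###Code
# Illustrative sketch: measure object areas tile by tile and combine the measurement tables.
import pandas as pd
from skimage.measure import label, regionprops_table
tile_size = 128
tables = []
for y in range(0, image.shape[0], tile_size):
    for x in range(0, image.shape[1], tile_size):
        tile = image[y:y + tile_size, x:x + tile_size]
        props = regionprops_table(label(tile), properties=('area',))
        tables.append(pd.DataFrame(props))
measurements = pd.concat(tables, ignore_index=True)
measurements.head()
###Output
_____no_output_____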
###Markdown
This time, we would like to measure the size of the objects and visualize it in a parametric image. For demonstration purposes, we execute that operation first on the whole example image.
###Code
def area_map(image):
"""
Label objects in a binary image and produce a pixel-count-map image.
"""
labels = cle.connected_components_labeling_box(image)
result = cle.pixel_count_map(labels)
return np.asarray(result)
reference = area_map(image)
cle.imshow(reference, colorbar=True)
###Output
_____no_output_____
###Markdown
If we process the same image in tiles, we will get slightly wrong results because of the tiled connected-component labeling issue demonstrated earlier.
###Code
# tile the image
tiles = da.from_array(image, chunks=(128, 128))
# setup the operation we want to apply
procedure = area_map
# setup the tiling
tile_map = da.map_blocks(procedure, tiles)
# compute result
result = tile_map.compute()
# visualize
imshow(result, colorbar=True)
###Output
_____no_output_____
###Markdown
Again, the errors are visible at the tile borders, and we can visualize them by direct comparison:
###Code
absolute_error = cle.absolute_difference(result, reference)
cle.imshow(absolute_error, colorbar=True)
###Output
_____no_output_____
###Markdown
To prevent this error, we need to think again about processing the image tiles with an overlap. In this particular example, we are not executing any operation that takes neighboring pixels into account; hence, we cannot estimate the necessary overlap from such parameters. We need to take the maximum size (diameter) of the objects into account. We could also determine the overlap empirically, as before. Therefore, let's first compute the mean squared error of the two example results above:
###Code
cle.mean_squared_error(result, reference)
###Output
_____no_output_____
###Markdown
Next, we can compute that error in a loop, varying the overlap (`depth`) of [dask.array.map_overlap](https://docs.dask.org/en/stable/array-overlap.html) while processing the image in tiles. Note that we're setting `boundary=0` here, because otherwise objects would be extended in the binary image and size measurements would be wrong.
###Code
for overlap_width in range(0, 30, 5):
print("Overlap width", overlap_width)
tile_map = da.map_overlap(procedure, tiles, depth=overlap_width, boundary=0)
result = tile_map.compute()
print("mean squared error", cle.mean_squared_error(result, reference))
print("-----------------------------------")
###Output
Overlap width 0
mean squared error 4338.783956692913
-----------------------------------
Overlap width 5
mean squared error 1702.8293553149606
-----------------------------------
Overlap width 10
mean squared error 460.85811392716533
-----------------------------------
Overlap width 15
mean squared error 70.78670952263779
-----------------------------------
Overlap width 20
mean squared error 1.2793891486220472
-----------------------------------
Overlap width 25
mean squared error 0.0
-----------------------------------
###Markdown
The empirically determined overlap where this error becomes 0 is an optimistic estimate. When using this method on your own data, make sure you apply an overlap that is larger than the determined value. **Note:** The `compute` and `imshow` functions may not work on big datasets, as the images may not fit into computer memory. We are using them here for demonstration purposes. For data that does not fit into memory, one alternative is to write the result directly to disk, as sketched in the next cell.
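###Markdown
A minimal sketch of such a disk-backed workflow is shown below; it assumes the zarr package is installed and writes the lazily computed tile map to a chunked store instead of materializing it in memory (the file name is arbitrary).
###Code
# Illustrative: write the tiled result to a chunked zarr store on disk instead of calling compute().
tile_map = da.map_overlap(procedure, tiles, depth=30, boundary=0)
tile_map.to_zarr("area_map.zarr", overwrite=True)
###Output
_____no_output_____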
###Code
overlap_width = 30
tile_map = da.map_overlap(procedure, tiles, depth=overlap_width, boundary=0)
result = tile_map.compute()
cle.imshow(tile_map, colorbar=True)
###Output
_____no_output_____ |
_notebooks/2020-10-01-StockMarketPortfolioAnaylsis_snp+.ipynb | ###Markdown
Stock Market and Portfolio Analysis of various S&P 500 metrics with pandas and quandl This post includes code adapted from the [python for finance and trading algorithms udemy course](https://udemy.com/python-for-finance-and-trading-algorithms/) and the [accompanying course notebooks](https://github.com/theoneandonlywoj/Python-for-Financial-Analysis-and-Algorithmic-Trading).
###Code
import pandas as pd
import quandl
start = pd.to_datetime('2010-01-01')
end = pd.to_datetime('today')
SP500 = quandl.get("MULTPL/SP500_REAL_PRICE_MONTH",start_date = start,end_date = end)
SP500 = quandl.get("MULTPL/SP500_DIV_YIELD_MONTH",start_date = start,end_date = end)
SP500
SP500 = quandl.get("MULTPL/SP500_PE_RATIO_MONTH",start_date = start,end_date = end)
SP500
SP500 = quandl.get("MULTPL/SP500_PE_RATIO_MONTH",start_date = start,end_date = end)
SP500
SP500 = quandl.get("MULTPL/SP500_EARNINGS_YIELD_MONTH",start_date = start,end_date = end)
SP500
SP500 = quandl.get("MULTPL/SP500_INFLADJ_MONTH",start_date = start,end_date = end)
SP500
SP500 = quandl.get("MULTPL/SP500_DIV_MONTH",start_date = start,end_date = end)
SP500
SP500 = quandl.get("MULTPL/SP500_DIV_YEAR",start_date = start,end_date = end)
SP500
SP500 = quandl.get("MULTPL/SP500_DIV_GROWTH_YEAR",start_date = start,end_date = end)
SP500
SP500 = quandl.get("MULTPL/SP500_DIV_GROWTH_QUARTER",start_date = start,end_date = end)
SP500
SP500.head()
SP500.tail()
import matplotlib.pyplot as plt
%matplotlib inline
# Note: at this point SP500 holds the last series loaded above (quarterly dividend growth).
SP500['Value'].plot(figsize = (12, 8))
plt.title('S&P 500 quarterly dividend growth, 2010 to 2020')
###Output
_____no_output_____ |
frequent_words_chinese/frequent_words_chinese.ipynb | ###Markdown
Chinese Word Analysis Introduction Learning a language is a laborious task, as there are too many words to learn. However, a small set of frequent words covers a high percentage of the text used. In this document, we want to analyze how much this percentage increases as we add words to our knowledge. For this analysis, the Chinese language has been selected. It is a particularly interesting language for this study because it allows us to analyze the language not only by words but also by Chinese characters. Words are formed by one or more characters, and each character can belong to one or more words. The data The data has been obtained from https://invokeit.wordpress.com/frequency-word-lists/. It contains a list of words and the number of occurrences, ordered from the most frequent to the least. Executing the following script, we download the data and clean it (removing non-Chinese words). The last command processes the data to obtain a list of occurrences of characters (not words). To do that, every word is split and the number of occurrences is recalculated.
###Code
options(repr.plot.width=15, repr.plot.height=10)
#unzip data
system('sh get_preprocess_data.sh')
###Output
_____no_output_____
###Markdown
We now have two data sets: * zh_cn_clean.txt: frequency list of Chinese words, in descending order. * zh_cn_characters.txt: frequency list of Chinese characters, in descending order. We load these datasets in order to perform our analysis.
###Code
#load data
data.words<-read.table('data/zh_cn_clean.txt')
data.chars<-read.table('data/zh_cn_characters.txt',quote="")
###Output
_____no_output_____
###Markdown
Analysis In this document, we analyze the number of words needed to cover the Chinese language. First, we calculate the probability (as a percentage) of each word/character appearing in Chinese text. Then we compute the cumulative sum and plot it.
###Code
frequencies.words<-data.words[,2]
total.words<-sum(frequencies.words)
percent.words<-(frequencies.words/total.words)*100
acumPercent.words<-cumsum(percent.words)
frequencies.chars<-data.chars[,2]
total.chars<-sum(frequencies.chars)
percent.chars<-(frequencies.chars/total.chars)*100
acumPercent.chars<-cumsum(percent.chars)
#plot acumulates
par(mfrow=c(2,1))
plot(acumPercent.words,ylim=c(0,100),type="l",xlab="Words known",ylab="Percentage covered")
plot(acumPercent.chars,ylim=c(0,100),col="orange",type="l",xlab="Characters known",ylab="Percentage covered")
###Output
_____no_output_____
###Markdown
The plots show the number of words (black line) and characters (orange line) needed to cover a given percentage of the Chinese language. As we can see, a fairly small set of the most frequent characters already covers close to 90% of the language, so learning a relatively small number of frequent words/characters is enough to understand most of a text. We may want to plot only the first 3000 words (black line) and the first 3000 characters (orange line).
###Code
plot(acumPercent.words[1:3000],ylim=c(0,100),type="l",xlab="Words (black)/Characters (orange)",ylab="Percentage covered")
lines(acumPercent.chars[1:3000],col="orange")
###Output
_____no_output_____
###Markdown
Now we have an idea of the number of words needed to understand a Chinese text. Summary In summary, we note the following:* About the words:
###Code
print( paste("100 words cover",toString(round(acumPercent.words[100],1)),"% of the language" ,sep=" ") )
print( paste("500 words cover",toString(round(acumPercent.words[500],1)),"% of the language" ,sep=" ") )
print( paste("1000 words cover",toString(round(acumPercent.words[1000],1)),"% of the language" ,sep=" ") )
print( paste("3000 words cover",toString(round(acumPercent.words[3000],1)),"% of the language" ,sep=" ") )
print( paste("5000 words cover",toString(round(acumPercent.words[5000],1)),"% of the language" ,sep=" ") )
###Output
[1] "100 words cover 48.1 % of the language"
[1] "500 words cover 69.8 % of the language"
[1] "1000 words cover 78.7 % of the language"
[1] "3000 words cover 89.7 % of the language"
[1] "5000 words cover 93.1 % of the language"
###Markdown
* About the characters:
###Code
print( paste("100 characters cover",toString(round(acumPercent.chars[100],1)),"% of the language" ,sep=" ") )
print( paste("500 characters cover",toString(round(acumPercent.chars[500],1)),"% of the language" ,sep=" ") )
print( paste("1000 characters cover",toString(round(acumPercent.chars[1000],1)),"% of the language" ,sep=" ") )
print( paste("3000 characters cover",toString(round(acumPercent.chars[3000],1)),"% of the language" ,sep=" ") )
print( paste("5000 characters cover",toString(round(acumPercent.chars[5000],1)),"% of the language" ,sep=" ") )
###Output
[1] "100 characters cover 55.5 % of the language"
[1] "500 characters cover 82.7 % of the language"
[1] "1000 characters cover 92.4 % of the language"
[1] "3000 characters cover 99.4 % of the language"
[1] "5000 characters cover 99.9 % of the language"
###Markdown
* Words/characters needed:
###Code
print( paste("To cover 50% of the language",toString(which.min(abs(acumPercent.words - 50))),"words are needed (or",which.min(abs(acumPercent.chars - 50)),"characters)." ,sep=" ") )
print( paste("To cover 60% of the language",toString(which.min(abs(acumPercent.words - 60))),"words are needed (or",which.min(abs(acumPercent.chars - 60)),"characters)." ,sep=" ") )
print( paste("To cover 80% of the language",toString(which.min(abs(acumPercent.words - 80))),"words are needed (or",which.min(abs(acumPercent.chars - 80)),"characters)." ,sep=" ") )
print( paste("To cover 90% of the language",toString(which.min(abs(acumPercent.words - 90))),"words are needed (or",which.min(abs(acumPercent.chars - 90)),"characters)." ,sep=" ") )
###Output
[1] "To cover 50% of the language 116 words are needed (or 72 characters)."
[1] "To cover 60% of the language 243 words are needed (or 131 characters)."
[1] "To cover 80% of the language 1121 words are needed (or 421 characters)."
[1] "To cover 90% of the language 3129 words are needed (or 825 characters)."
|
week-2-multiple-regression-assignment-2-blank.ipynb | ###Markdown
Regression Week 2: Multiple Regression (gradient descent) In the first notebook we explored multiple regression using graphlab create. Now we will use graphlab along with numpy to solve for the regression weights with gradient descent.In this notebook we will cover estimating multiple regression weights via gradient descent. You will:* Add a constant column of 1's to a graphlab SFrame to account for the intercept* Convert an SFrame into a Numpy array* Write a predict_output() function using Numpy* Write a numpy function to compute the derivative of the regression weights with respect to a single feature* Write gradient descent function to compute the regression weights given an initial weight vector, step size and tolerance.* Use the gradient descent function to estimate regression weights for multiple features Fire up graphlab create Make sure you have the latest version of graphlab (>= 1.7)
###Code
import graphlab
###Output
_____no_output_____
###Markdown
Load in house sales data Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
###Code
sales = graphlab.SFrame('kc_house_data.gl/')
###Output
This non-commercial license of GraphLab Create for academic use is assigned to [email protected] and will expire on March 08, 2019.
###Markdown
If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the other Week 2 notebook. For this notebook, however, we will work with the existing features. Convert to Numpy Array Although SFrames offer a number of benefits to users (especially when using Big Data and built-in graphlab functions) in order to understand the details of the implementation of algorithms it's important to work with a library that allows for direct (and optimized) matrix operations. Numpy is a Python solution to work with matrices (or any multi-dimensional "array").Recall that the predicted value given the weights and the features is just the dot product between the feature and weight vector. Similarly, if we put all of the features row-by-row in a matrix then the predicted value for *all* the observations can be computed by right multiplying the "feature matrix" by the "weight vector". First we need to take the SFrame of our data and convert it into a 2D numpy array (also called a matrix). To do this we use graphlab's built in .to_dataframe() which converts the SFrame into a Pandas (another python library) dataframe. We can then use Panda's .as_matrix() to convert the dataframe into a numpy matrix.
###Code
import numpy as np # note this allows us to refer to numpy as np instead
###Output
_____no_output_____
###Markdown
Now we will write a function that will accept an SFrame, a list of feature names (e.g. ['sqft_living', 'bedrooms']) and a target feature e.g. ('price') and will return two things:* A numpy matrix whose columns are the desired features plus a constant column (this is how we create an 'intercept')* A numpy array containing the values of the output. With this in mind, complete the following function (where there's an empty line you should write a line of code that does what the comment above indicates). **Please note you will need GraphLab Create version at least 1.7.1 in order for .to_numpy() to work!**
###Code
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = graphlab.SFrame(data_sframe[features])
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe[output]
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
###Output
_____no_output_____
###Markdown
For testing let's use the 'sqft_living' feature and a constant as our features and price as our output:
###Code
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price') # the [] around 'sqft_living' makes it a list
print example_features[0,:] # this accesses the first row of the data the ':' indicates 'all columns'
print example_output[0] # and the corresponding output
###Output
[1.00e+00 1.18e+03]
221900.0
###Markdown
Predicting output given regression weights Suppose we had the weights [1.0, 1.0] and the features [1.0, 1180.0] and we wanted to compute the predicted output 1.0\*1.0 + 1.0\*1180.0 = 1181.0 this is the dot product between these two arrays. If they're numpy arrayws we can use np.dot() to compute this:
###Code
my_weights = np.array([1., 1.]) # the example weights
my_features = example_features[0,] # we'll use the first data point
predicted_value = np.dot(my_features, my_weights)
print predicted_value
###Output
1181.0
###Markdown
np.dot() also works when dealing with a matrix and a vector. Recall that the predictions for all the observations are just the RIGHT (as in weights on the right) dot product between the features *matrix* and the weights *vector*. With this in mind, finish the following predict_output function to compute the predictions for an entire matrix of features given the matrix and the weights:
###Code
def predict_output(feature_matrix, weights):
# assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array
predictions = np.dot(feature_matrix,weights)
# create the predictions vector by using np.dot()
return(predictions)
###Output
_____no_output_____
###Markdown
If you want to test your code run the following cell:
###Code
test_predictions = predict_output(example_features, my_weights)
print test_predictions[0] # should be 1181.0
print test_predictions[1] # should be 2571.0
###Output
1181.0
2571.0
###Markdown
Computing the Derivative We are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output.Since the derivative of a sum is the sum of the derivatives we can compute the derivative for a single data point and then sum over data points. We can write the squared difference between the observed output and predicted output for a single point as follows:(w[0]\*[CONSTANT] + w[1]\*[feature_1] + ... + w[i] \*[feature_i] + ... + w[k]\*[feature_k] - output)^2Where we have k features and a constant. So the derivative with respect to weight w[i] by the chain rule is:2\*(w[0]\*[CONSTANT] + w[1]\*[feature_1] + ... + w[i] \*[feature_i] + ... + w[k]\*[feature_k] - output)\* [feature_i]The term inside the paranethesis is just the error (difference between prediction and output). So we can re-write this as:2\*error\*[feature_i]That is, the derivative for the weight for feature i is the sum (over data points) of 2 times the product of the error and the feature itself. In the case of the constant then this is just twice the sum of the errors!Recall that twice the sum of the product of two vectors is just twice the dot product of the two vectors. Therefore the derivative for the weight for feature_i is just two times the dot product between the values of feature_i and the current errors. With this in mind complete the following derivative function which computes the derivative of the weight given the value of the feature (over all data points) and the errors (over all data points).
###Code
def feature_derivative(errors, feature):
# Assume that errors and feature are both numpy arrays of the same length (number of data points)
# compute twice the dot product of these vectors as 'derivative' and return the value
derivative = 2 *np.dot(errors, feature)
return(derivative)
###Output
_____no_output_____
###Markdown
To test your feature derivative, run the following:
###Code
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price')
my_weights = np.array([0., 0.]) # this makes all the predictions 0
test_predictions = predict_output(example_features, my_weights)
# just like SFrames 2 numpy arrays can be elementwise subtracted with '-':
errors = test_predictions - example_output # prediction errors in this case is just the -example_output
feature = example_features[:,0] # let's compute the derivative with respect to 'constant', the ":" indicates "all rows"
derivative = feature_derivative(errors, feature)
print derivative
print -np.sum(example_output)*2 # should be the same as derivative
###Output
-23345850022.0
-23345850022.0
###Markdown
Gradient Descent Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of *increase* and therefore the negative gradient is the direction of *decrease* and we're trying to *minimize* a cost function. The amount by which we move in the negative gradient *direction* is called the 'step size'. We stop when we are 'sufficiently close' to the optimum. We define this by requiring that the magnitude (length) of the gradient vector to be smaller than a fixed 'tolerance'.With this in mind, complete the following gradient descent function below using your derivative function above. For each step in the gradient descent we update the weight for each feature befofe computing our stopping criteria
###Code
from math import sqrt # recall that the magnitude/length of a vector [g[0], g[1], g[2]] is sqrt(g[0]^2 + g[1]^2 + g[2]^2)
def regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance):
converged = False
weights = np.array(initial_weights) # make sure it's a numpy array
while not converged:
# compute the predictions based on feature_matrix and weights using your predict_output() function
predictions = predict_output(feature_matrix, weights)
# compute the errors as predictions - output
error = predictions - output
gradient_sum_squares = 0 # initialize the gradient sum of squares
# while we haven't reached the tolerance yet, update each feature's weight
for i in range(len(weights)): # loop over each weight
# Recall that feature_matrix[:, i] is the feature column associated with weights[i]
# compute the derivative for weight[i]:
derivative = feature_derivative(error, feature_matrix[:, i])
# add the squared value of the derivative to the gradient sum of squares (for assessing convergence)
gradient_sum_squares+= (derivative**2)
# subtract the step size times the derivative from the current weight
weights[i] = weights[i]- (step_size * derivative)
# compute the square-root of the gradient sum of squares to get the gradient magnitude:
gradient_magnitude = sqrt(gradient_sum_squares)
if gradient_magnitude < tolerance:
converged = True
return(weights)
###Output
_____no_output_____
###Markdown
A few things to note before we run the gradient descent. Since the gradient is a sum over all the data points and involves a product of an error and a feature the gradient itself will be very large since the features are large (squarefeet) and the output is large (prices). So while you might expect "tolerance" to be small, small is only relative to the size of the features. For similar reasons the step size will be much smaller than you might expect but this is because the gradient has such large values. Running the Gradient Descent as Simple Regression First let's split the data into training and test data.
###Code
train_data,test_data = sales.random_split(.8,seed=0)
###Output
_____no_output_____
###Markdown
Although gradient descent is designed for multiple regression, since the constant is now a feature we can use the gradient descent function to estimate the parameters of the simple regression on square feet. The following cell sets up the feature_matrix, output, initial weights and step size for the first model:
###Code
# let's test out the gradient descent
simple_features = ['sqft_living']
my_output = 'price'
(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)
initial_weights = np.array([-47000., 1.])
step_size = 7e-12
tolerance = 2.5e7
###Output
_____no_output_____
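###Markdown
As noted above, the gradient involves sums of products of errors and (large) feature values, so its magnitude at the starting point is enormous; the following purely illustrative check prints it.
###Code
# Illustrative check: magnitude of the gradient at the initial weights (large because square footage
# and prices are large numbers, which is why the step size above is so small).
initial_errors = predict_output(simple_feature_matrix, initial_weights) - output
initial_gradient_magnitude = sqrt(sum(feature_derivative(initial_errors, simple_feature_matrix[:, i])**2 for i in range(len(initial_weights))))
print initial_gradient_magnitude
###Output
_____no_output_____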
###Markdown
Next run your gradient descent with the above parameters.
###Code
simple_weights = regression_gradient_descent(simple_feature_matrix, output, initial_weights, step_size, tolerance)
print simple_weights
print round(simple_weights[1],1)
###Output
[-46999.88716555 281.91211912]
281.9
###Markdown
How do your weights compare to those achieved in week 1 (don't expect them to be exactly the same)? **Quiz Question: What is the value of the weight for sqft_living -- the second element of ‘simple_weights’ (rounded to 1 decimal place)?** Use your newly estimated weights and your predict_output() function to compute the predictions on all the TEST data (you will need to create a numpy array of the test feature_matrix and test output first:
###Code
(test_simple_feature_matrix, test_output) = get_numpy_data(test_data, simple_features, my_output)
###Output
_____no_output_____
###Markdown
Now compute your predictions using test_simple_feature_matrix and your weights from above.
###Code
test_predictions = predict_output(test_simple_feature_matrix, simple_weights)
print test_predictions
###Output
[356134.44317093 784640.86422788 435069.83652353 ... 663418.65300782
604217.10799338 240550.4743332 ]
###Markdown
**Quiz Question: What is the predicted price for the 1st house in the TEST data set for model 1 (round to nearest dollar)?**
###Code
print test_predictions[0]
###Output
356134.4431709297
###Markdown
Now that you have the predictions on test data, compute the RSS on the test data set. Save this value for comparison later. Recall that RSS is the sum of the squared errors (difference between prediction and output).
###Code
test_error = test_predictions-test_output
RSS = sum(test_error*test_error)
print RSS
###Output
275400047593155.7
###Markdown
Running a multiple regression Now we will use more than one actual feature. Use the following code to produce the weights for a second model with the following parameters:
###Code
model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors.
my_output = 'price'
(feature_matrix, output) = get_numpy_data(train_data, model_features, my_output)
initial_weights = np.array([-100000., 1., 1.])
step_size = 4e-12
tolerance = 1e9
###Output
_____no_output_____
###Markdown
Use the above parameters to estimate the model weights. Record these values for your quiz.
###Code
model_weights =regression_gradient_descent(feature_matrix, output, initial_weights,step_size, tolerance)
print model_weights
###Output
[-9.99999688e+04 2.45072603e+02 6.52795277e+01]
###Markdown
Use your newly estimated weights and the predict_output function to compute the predictions on the TEST data. Don't forget to create a numpy array for these features from the test set first!
###Code
(test_model_feature_matrix,test_output)=get_numpy_data(test_data,model_features,my_output)
mod_predictions = predict_output(test_model_feature_matrix, model_weights)
###Output
_____no_output_____
###Markdown
**Quiz Question: What is the predicted price for the 1st house in the TEST data set for model 2 (round to nearest dollar)?**
###Code
print mod_predictions[0]
###Output
366651.4120365591
###Markdown
What is the actual price for the 1st house in the test data set?
###Code
print test_data['price'][0]
###Output
310000.0
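###Markdown
As a quick illustrative check for the quiz question below, we can compare the absolute error of each model's prediction for this first house.
###Code
# Absolute prediction error for the first test house under each model (illustrative).
print abs(test_predictions[0] - test_data['price'][0])   # model 1: sqft_living only
print abs(mod_predictions[0] - test_data['price'][0])    # model 2: sqft_living + sqft_living15
###Output
_____no_output_____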
###Markdown
**Quiz Question: Which estimate was closer to the true price for the 1st house on the TEST data set, model 1 or model 2?** Now use your predictions and the output to compute the RSS for model 2 on TEST data.
###Code
mod_test_errors = mod_predictions-test_output
RSSmod = sum(mod_test_errors*mod_test_errors)
print RSSmod
###Output
270263446465244.03
###Markdown
**Quiz Question: Which model (1 or 2) has lowest RSS on all of the TEST data? **
###Code
RSSmod>RSS
###Output
_____no_output_____ |
Copia_de_inference_playground.ipynb | ###Markdown
SAM: Inference Playground
###Code
import os
os.chdir('/content')
CODE_DIR = 'SAM'
!git clone https://github.com/yuval-alaluf/SAM.git $CODE_DIR
!wget https://github.com/ninja-build/ninja/releases/download/v1.8.2/ninja-linux.zip
!sudo unzip ninja-linux.zip -d /usr/local/bin/
!sudo update-alternatives --install /usr/bin/ninja ninja /usr/local/bin/ninja 1 --force
os.chdir(f'./{CODE_DIR}')
from argparse import Namespace
import os
import sys
import pprint
import numpy as np
from PIL import Image
import torch
import torchvision.transforms as transforms
sys.path.append(".")
sys.path.append("..")
from datasets.augmentations import AgeTransformer
from utils.common import tensor2im
from models.psp import pSp
EXPERIMENT_TYPE = 'ffhq_aging'
###Output
_____no_output_____
###Markdown
Step 1: Download Pretrained Model As part of this repository, we provide our pretrained aging model. We'll download the model for the selected experiment and save it to the folder `../pretrained_models`.
###Code
def get_download_model_command(file_id, file_name):
""" Get wget download command for downloading the desired model and save to directory ../pretrained_models. """
current_directory = os.getcwd()
save_path = os.path.join(os.path.dirname(current_directory), "pretrained_models")
if not os.path.exists(save_path):
os.makedirs(save_path)
url = r"""wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id={FILE_ID}' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id={FILE_ID}" -O {SAVE_PATH}/{FILE_NAME} && rm -rf /tmp/cookies.txt""".format(FILE_ID=file_id, FILE_NAME=file_name, SAVE_PATH=save_path)
return url
MODEL_PATHS = {
"ffhq_aging": {"id": "1XyumF6_fdAxFmxpFcmPf-q84LU_22EMC", "name": "sam_ffhq_aging.pt"}
}
path = MODEL_PATHS[EXPERIMENT_TYPE]
download_command = get_download_model_command(file_id=path["id"], file_name=path["name"])
!{download_command}
###Output
_____no_output_____
###Markdown
Step 2: Define Inference Parameters Below we have a dictionary defining parameters such as the path to the pretrained model to use and the path to the image to perform inference on. While we provide default values to run this script, feel free to change as needed.
###Code
EXPERIMENT_DATA_ARGS = {
"ffhq_aging": {
"model_path": "../pretrained_models/sam_ffhq_aging.pt",
"image_path": "notebooks/images/asif.jpeg",
"transform": transforms.Compose([
transforms.Resize((256, 256)),
transforms.ToTensor(),
transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
}
}
EXPERIMENT_ARGS = EXPERIMENT_DATA_ARGS[EXPERIMENT_TYPE]
###Output
_____no_output_____
###Markdown
Step 3: Load Pretrained Model We assume that you have downloaded the pretrained aging model and placed it in the path defined above.
###Code
model_path = EXPERIMENT_ARGS['model_path']
ckpt = torch.load(model_path, map_location='cpu')
opts = ckpt['opts']
pprint.pprint(opts)
# update the training options
opts['checkpoint_path'] = model_path
opts = Namespace(**opts)
net = pSp(opts)
net.eval()
net.cuda()
print('Model successfully loaded!')
###Output
_____no_output_____
###Markdown
Step 4: Visualize Input
###Code
image_path = EXPERIMENT_DATA_ARGS[EXPERIMENT_TYPE]["image_path"]
original_image = Image.open(image_path).convert("RGB")
original_image.resize((256, 256))
###Output
_____no_output_____
###Markdown
Step 5: Perform Inference Align Image Before running inference, we'll run alignment on the input image.
###Code
!wget http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
!bzip2 -dk shape_predictor_68_face_landmarks.dat.bz2
def run_alignment(image_path):
import dlib
from scripts.align_all_parallel import align_face
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
aligned_image = align_face(filepath=image_path, predictor=predictor)
print("Aligned image has shape: {}".format(aligned_image.size))
return aligned_image
aligned_image = run_alignment(image_path)
aligned_image.resize((256, 256))
###Output
_____no_output_____
###Markdown
Run Inference
###Code
img_transforms = EXPERIMENT_ARGS['transform']
input_image = img_transforms(aligned_image)
# we'll run the image on multiple target ages
target_ages = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
age_transformers = [AgeTransformer(target_age=age) for age in target_ages]
def run_on_batch(inputs, net):
result_batch = net(inputs.to("cuda").float(), randomize_noise=False, resize=False)
return result_batch
# for each age transformed age, we'll concatenate the results to display them side-by-side
results = np.array(aligned_image.resize((1024, 1024)))
for age_transformer in age_transformers:
print(f"Running on target age: {age_transformer.target_age}")
with torch.no_grad():
input_image_age = [age_transformer(input_image.cpu()).to('cuda')]
input_image_age = torch.stack(input_image_age)
result_tensor = run_on_batch(input_image_age, net)[0]
result_image = tensor2im(result_tensor)
results = np.concatenate([results, result_image], axis=1)
###Output
_____no_output_____
###Markdown
Visualize Result
###Code
results = Image.fromarray(results)
results # this is a very large image (12*1024 x 1024: the aligned input plus 11 target ages) so it may take some time to display!
# save image at full resolution
results.save("notebooks/images/age_transformed_image.jpg")
###Output
_____no_output_____ |
nbs/dl1/Haider-ubt-lesson4-tabular-v1.ipynb | ###Markdown
Tabular models
###Code
from fastai import *
from fastai.tabular import *
torch.cuda.set_device(1)
###Output
_____no_output_____
###Markdown
Tabular data should be in a Pandas `DataFrame`.
###Code
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
dep_var = '>=50k'
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']
cont_names = ['age', 'fnlwgt', 'education-num']
procs = [FillMissing, Categorify, Normalize]
test = TabularList.from_df(df.iloc[800:1000].copy(), path=path, cat_names=cat_names, cont_names=cont_names)
data = (TabularList.from_df(df, path=path, cat_names=cat_names, cont_names=cont_names, procs=procs)
.split_by_idx(list(range(800,1000)))
.label_from_df(cols=dep_var)
.add_test(test, label=0)
.databunch())
data.show_batch(rows=10)
learn = tabular_learner(data, layers=[200,100], metrics=accuracy)
learn.fit(1, 1e-2)
###Output
_____no_output_____
###Markdown
Inference
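###Markdown
Beyond predicting a single row (shown in the next cell), the test set attached with `add_test` above can be scored in one call. A minimal sketch using fastai v1's `get_preds` is given below; it assumes `DatasetType` is available through the star import.
###Code
# Illustrative: class probabilities for the whole attached test set.
preds, _ = learn.get_preds(ds_type=DatasetType.Test)
preds[:5]
###Output
_____no_output_____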
###Code
row = df.iloc[0]
learn.predict(row)
###Output
_____no_output_____ |
tests/Javier-BayesOpt.ipynb | ###Markdown
Tutorial in Bayesian Optimization Javier Gonzalez ([email protected]) University of Sheffield. The basics Bayesian optimization (BO) is a strategy for global optimization of black-box functions. For instance, consider a Lipschitz continuous function $f(x)$ defined on a domain $\mathcal{X}$. BO aims to obtain$$x^* = \arg \max_{\mathcal{X}} f(x)$$There are two crucial ingredients in any Bayesian Optimization (BO) procedure.* Define a **prior probability measure** on $f$: this captures our prior beliefs about $f$. The prior will be updated to a 'posterior' using the available data.* Define an **acquisition function** $acqu(x)$: this is a criterion to decide where to sample next in order to gain the maximum information about the location of the global maximum of $f$.Given a prior over the function $f$ and an acquisition function, a BO procedure will converge to the optimum of $f$ under some conditions. Use of Bayesian Optimization in real applications BO has been applied to solve a wide range of problems such as: interactive animation, sensor networks, automatic algorithm configuration, automatic machine learning toolboxes, reinforcement learning, organization planning, deep learning, engineering and many more! 1D-Toy illustration We illustrate the idea behind BO using a one-dimensional example. We start by importing the required libraries for the analysis. Note that we use our library GPy for Gaussian Processes! The on-line documentation of GPy is available from the [SheffieldML github page](https://github.com/SheffieldML/GPy).
###Code
%matplotlib inline
import GPy
import pylab as pb
import numpy as np
import matplotlib.pyplot as plt # plots
import scipy.stats
from scipy.stats import norm
###Output
_____no_output_____
###Markdown
Let's start by considering the function $f(x) = -\cos(2\pi x) + \sin(4\pi x)$ defined on the interval $[0.08, 0.92]$. The maximum of this function is located at $x_{max}=0.6010$. Obviously, obtaining it in this example is trivial. But what if $f$ is not explicitly available and we only have access to a small number of noisy evaluations? We show how BO acts in this case for illustrative purposes but, of course, BO can be used in more complex scenarios. We first generate 3 noisy observations sampled from $f$ and proceed.
###Code
## Function f(x)
X_star = np.linspace(0,1,1000)[:,None]
Y_star = -np.cos(2*np.pi*X_star) + np.sin(4*np.pi*X_star)
X_eval = X_star
# Sampled values
np.random.seed([1])
n = 3
X = np.array([[0.09],[0.2],[0.8]])
Y = -np.cos(2*np.pi*X) + np.sin(4*np.pi*X) + np.random.randn(n,1)*0.1
# Plot of f(x) and the generated sample
plt.rcParams['figure.figsize'] = 8, 5
plt.figure();
plt.plot(X_star,Y_star,c='grey',lw=2,ls='--',mew=1.5)
plt.plot(X,Y,'kx',mew=1.5)
plt.xlabel('x')
plt.ylabel('f(x)')
plt.savefig('data.pdf')
###Output
_____no_output_____
###Markdown
3.1 Gaussian Process Prior Now we define a Gaussian Process (GP) prior on $f$. A GP is an extension of the multivariate Gaussian distribution to an infinite-dimensional stochastic process for which any finite combination of dimensions is a Gaussian distribution. Therefore a GP is a distribution over functions, which is fully specified by its mean function $m(x)$ and its covariance function $k(x,x')$:$$f(x) \sim \mathcal{GP}(m(x),k(x,x')) $$For convenience, the mean is often fixed as $m(x)=0$. We use as covariance function the exponentiated quadratic kernel$$ k(x,x') = \sigma^2 \exp\left(-\frac{\|x-x'\|^2}{2\ell^2}\right) $$where the variance $\sigma^2$ and the lengthscale $\ell$ are positive parameters. Next, we fit this model to our data. We start by creating a kernel object.
###Code
# Choose the kernel
k = GPy.kern.RBF(input_dim=1, variance=1, lengthscale=0.1)
###Output
_____no_output_____
###Markdown
Now we create a Gaussian Process model using the previous kernel as covariance function, and we optimize its parameters by maximizing the log-likelihood. In order to avoid local solutions, we restart the optimization from 10 different initial points.
###Code
# We create the GP model
m = GPy.models.GPRegression(X, Y, k)
m.optimize()
m.optimize_restarts(num_restarts = 10)
###Output
Optimization restart 1/10, f = 2.98700489767
Optimization restart 2/10, f = 3.28304125657
Optimization restart 3/10, f = 3.28304173249
Optimization restart 4/10, f = 2.98700487589
Optimization restart 5/10, f = 2.98700492174
Optimization restart 6/10, f = 2.98700488549
Optimization restart 7/10, f = 2.98700487742
Optimization restart 8/10, f = 2.98700487575
Optimization restart 9/10, f = 3.28304149527
Optimization restart 10/10, f = 3.28304124591
###Markdown
Now it is time to have a look at the fitted model. We show the parameters and plot the fitted function to see how it fits the data.
###Code
print m
#m.plot()
fest = m.predict(X_star)
plt.plot(X_star,fest[0],c='blue',lw=2,ls='-',mew=1.5)
plt.plot(X_star,fest[0]+1.96*np.sqrt(fest[1]),c='blue',lw=1,ls='-',mew=1)
plt.plot(X_star,fest[0]-1.96*np.sqrt(fest[1]),c='blue',lw=1,ls='-',mew=1)
plt.plot(X,Y,'kx',mew=2.5)
plt.plot(X_star,Y_star,c='grey',lw=2,ls='--',mew=1.5)
plt.title('GP model')
plt.xlabel('x')
plt.ylabel('f(x)')
plt.xlim(0,1)
plt.savefig('datamodel.pdf')
###Output
Name : GP regression
Objective : 2.98700487575
Number of Parameters : 3
Number of Optimization Parameters : 3
Updates : True
Parameters:
[1mGP_regression. [0;0m | value | constraints | priors
[1mrbf.variance [0;0m | 0.655167167518 | +ve |
[1mrbf.lengthscale [0;0m | 0.191682159649 | +ve |
[1mGaussian_noise.variance[0;0m | 6.74045548918e-09 | +ve |
###Markdown
Given this model, where do you think the maximum of the function should be? Around 0.3 the posterior mean is maximum but around 0.45 the variance is large. If you could collect a new data point, where would you do it? This is the job of the second main element of any Bayesian Optimization procedure: the acquisition function. 3.2 Aquisition functions Next lines of code define the three acquisition functions we are going to use in our example. They are the functions that represents our beliefs over the maximum of $f(x)$. Denote by $\theta$ the parameters of the GP model and by $\{x_i,y_i\}$ the available sample. Three of the most common acquisition functions are:* **Maximum probability of improvement (MPI)**:$$acqu_{MPI}(x;\{x_n,y_n\},\theta) = \Phi(\gamma(x)), \mbox{where}\ \gamma(x)=\frac{\mu(x;\{x_n,y_n\},\theta)-f(x_{best})-\psi}{\sigma(x;\{x_n,y_n\},\theta)}.$$* **Expected improvement (EI)**:$$acqu_{EI}(x;\{x_n,y_n\},\theta) = \sigma(x;\{x_n,y_n\},\theta) (\gamma(x) \Phi(\gamma(x))) + N(\gamma(x);0,1).$$* **Upper confidence bound (UCB)**:$$acqu_{UCB}(x;\{x_n,y_n\},\theta) = \mu(x;\{x_n,y_n\},\theta)+\eta\sigma(x;\{x_n,y_n\},\theta).$$Both, $\psi$ and $\eta$, are tunable parameters that help to make the acquisition functions more flexible. Also, in the case of the UBC, the parameter $\eta$ is useful to define the balance between the importance we give to the mean and the variance of the model. This is know as the **exploration/exploitation trade off**.
###Code
def MPI_max(x,model,par = 0.01):
    # Maximum probability of improvement; model.predict returns the predictive mean and variance.
    fest = model.predict(x)
    sd = np.sqrt(fest[1])
    acqu = norm.cdf((fest[0]-max(fest[0])-par) / sd)
    return acqu
def EI_max(x,model,par = 0.01):
    # Expected improvement, using the predictive standard deviation rather than the variance.
    fest = model.predict(x)
    sd = np.sqrt(fest[1])
    Z = (fest[0]-max(fest[0])-par) / sd
    acqu = (fest[0]-max(fest[0])-par)*norm.cdf(Z)+sd*norm.pdf(Z)
    return acqu
def UBC_max(x,model,z_mui=1):
    # Upper confidence bound: predictive mean plus a multiple of the predictive standard deviation.
    fest = model.predict(x)
    acqu = fest[0]+z_mui*np.sqrt(fest[1])
    return acqu
###Output
_____no_output_____
###Markdown
We evaluate the functions on our interval of interest. Here, the maximum is found using grid search for plotting purposes, but in higher-dimensional problems the maximum can be obtained with a gradient-based optimizer (for example conjugate gradients); a small illustrative sketch of this is shown after the plots below.
###Code
## Evaluate and get the maximum of the acquisition functions (grid search, for plotting purposes)
# MPI
acqu_MPI1 = MPI_max(X_eval,m,0.01)
acqu_MPI2 = MPI_max(X_eval,m,0.1)
acqu_MPI3 = MPI_max(X_eval,m,0.5)
max_MPI1 = X_eval[np.argmax(acqu_MPI1)]
max_MPI2 = X_eval[np.argmax(acqu_MPI2)]
max_MPI3 = X_eval[np.argmax(acqu_MPI3)]
# EI
acqu_EI1 = EI_max(X_eval,m,0.01)
acqu_EI2 = EI_max(X_eval,m,0.1)
acqu_EI3 = EI_max(X_eval,m,0.5)
max_EI1 = X_eval[np.argmax(acqu_EI1)]
max_EI2 = X_eval[np.argmax(acqu_EI2)]
max_EI3 = X_eval[np.argmax(acqu_EI3)]
res_max_EI3 = max_EI3
# UBC
acqu_UBC1 = UBC_max(X_eval,m,0.5)
acqu_UBC2 = UBC_max(X_eval,m,1)
acqu_UBC3 = UBC_max(X_eval,m,4)
max_UBC1 = X_eval[np.argmax(acqu_UBC1)]
max_UBC2 = X_eval[np.argmax(acqu_UBC2)]
max_UBC3 = X_eval[np.argmax(acqu_UBC3)]
res_max_UBC3 = max_UBC3
# Plot GP posterior, collected data and the acquisition function
m.plot()
plt.ylim(-2,3)
plt.plot(X_star,Y_star,c='grey',lw=2,ls='--',mew=1.5)
plt.title('GP model')
plt.savefig('datamodel.pdf')
plt.figure(figsize=(12,4))
plt.subplot(1, 3, 1)
plt.title('Acquisition functions for MPI')
plt.xlim(0.08,0.92)
p1, = plt.plot(X_eval, acqu_MPI1, 'r-',lw=2.5)
p2, = plt.plot(X_eval, acqu_MPI2, 'b-',lw=2.5)
p3, = plt.plot(X_eval, acqu_MPI3, 'g-',lw=2.5)
plt.title('Acquisition functions for MPI')
plt.xlim(0.08,0.92)
plt.xlabel('x')
plt.ylabel('Acquisition value')
plt.legend([p1, p2, p3], ["0.01", "0.1", "0.5"])
plt.axvline(x=max_MPI1,ls='-',c='red')
plt.axvline(x=max_MPI2,ls='-',c='blue')
plt.axvline(x=max_MPI3,ls='-',c='green')
plt.subplot(1, 3, 2)
plt.plot(X_eval, acqu_EI1, 'r-',lw=2.5)
plt.plot(X_eval, acqu_EI2, 'b-',lw=2.5)
plt.plot(X_eval, acqu_EI3, 'g-',lw=2.5)
plt.title('Acquisition functions for EI')
plt.xlim(0.08,0.92)
plt.xlabel('x')
plt.ylabel('Acquisition value')
plt.legend([p1, p2, p3], ["0.01", "0.1", "0.5"])
plt.axvline(x=max_EI1,ls='-',c='red')
plt.axvline(x=max_EI2,ls='-',c='blue')
plt.axvline(x=max_EI3,ls='-',c='green')
plt.subplot(1, 3, 3)
p1, = plt.plot(X_eval, acqu_UBC1, 'r-',lw=2.5)
p2, = plt.plot(X_eval, acqu_UBC2, 'b-',lw=2.5)
p3, = plt.plot(X_eval, acqu_UBC3, 'g-',lw=2.5)
plt.title('Acquisition functions for UBC')
plt.xlim(0.08,0.92)
plt.xlabel('x')
plt.ylabel('Acquisition value')
plt.legend([p1, p2, p3], ["0.5", "1", "4"])
plt.axvline(x=max_UBC1,ls='-',c='red')
plt.axvline(x=max_UBC2,ls='-',c='blue')
plt.axvline(x=max_UBC3,ls='-',c='green')
###Output
_____no_output_____
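###Markdown
As mentioned above, in higher-dimensional problems one would optimize the acquisition function numerically rather than on a grid. The following purely illustrative sketch uses scipy.optimize.minimize; note that, unlike EI_max above (which compares against the best predicted mean on the evaluation grid), it uses the best observed value Y.max() as the incumbent.
###Code
# Illustrative: gradient-based maximization of an expected-improvement criterion (sketch only).
from scipy.optimize import minimize
def neg_EI(x, model=m, par=0.01):
    # Negative EI at a single point, using the best observed value as the incumbent.
    mu, var = model.predict(np.atleast_2d(x))
    sd = np.sqrt(var)
    Z = (mu - Y.max() - par) / sd
    ei = (mu - Y.max() - par) * norm.cdf(Z) + sd * norm.pdf(Z)
    return -float(ei)
res = minimize(neg_EI, x0=np.array([0.5]), bounds=[(0.08, 0.92)], method='L-BFGS-B')
print(res.x)  # candidate location for the next evaluation
###Output
_____no_output_____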
###Markdown
Next, we show the how the three functions represents our beliefs about the maximum of $f(x)$. Note that all of them use the **mean** and the **variance** of the Gaussian process we have fitted to the data. In this example we simply select some values of the parameters. You can see how the thee acquisition functions represent their beliefs about the maximum of $f(x)$ in a different way. It is up to the user to select the most appropriate depending on the problem. Typically, if we can collect new data we will do it in the maximum of the acquisition function. 3.3 Iterative sampling/sequential design Next, to see how BO works iteratively, we use the Expected improvement with $\psi=0.5$. In each iteration we use the same generative model we considered for our first three data points in the point where $acqu_{EI}(x)$ is maximum. See what happens by running several times the cell below!!
###Code
# 1.- Collect a new sample where the acquisition function (EI) indicates and append it to the previous dataset
x_new = max_EI3
y_new = -np.cos(2*np.pi*x_new) + np.sin(4*np.pi*x_new) + np.random.randn(1,1)*0.1
X = np.vstack([X,x_new])
Y = np.vstack([Y,y_new])
# 2.- Run and optimize the new GP model
k = GPy.kern.RBF(input_dim=1, variance=.1, lengthscale=.1)
m_augmented = GPy.models.GPRegression(X, Y, k)
m_augmented.constrain_positive('')
m_augmented.likelihood.fix(0.01)
m_augmented.optimize_restarts(num_restarts = 10, messages=0)
# 3.- Optimize the acquisition function (EI)
acqu_EI3 = EI_max(X_eval,m_augmented,0.5)
max_EI3 = X_eval[np.argmax(acqu_EI3)]
res_max_EI3 = np.vstack([res_max_EI3,max_EI3])
x_res = np.linspace(1,res_max_EI3.shape[0],res_max_EI3.shape[0])
# GP plot
plt.rcParams['figure.figsize'] = 10, 3
# GP plot
fest = m_augmented.predict(X_star)
plt.plot(X_star,fest[0],c='blue',lw=2,ls='-',mew=4)
plt.plot(X_star,fest[0]+1.96*np.sqrt(fest[1]),c='blue',lw=1,ls='-',mew=1)
plt.plot(X_star,fest[0]-1.96*np.sqrt(fest[1]),c='blue',lw=1,ls='-',mew=1)
plt.plot(X,Y,'kx',mew=2.5)
plt.title('GP model')
plt.xlabel('x')
plt.plot(X_star,Y_star,c='grey',lw=2,ls='--',mew=1.5)
plt.ylabel('f(x)')
plt.xlim(0,1)
plt.savefig('datamodel7.pdf')
# EI plot
plt.rcParams['figure.figsize'] = 10, 3
plt.figure(figsize=(10,3))
p1, = plt.plot(X_eval,(acqu_EI3-min(acqu_EI3))/(max(acqu_EI3-min(acqu_EI3)) or 1.), 'g-',lw=2.5)
plt.title('Acquisition function')
plt.xlim(0,1.01)
plt.ylim(-0.1,1.1)
plt.xlabel('x')
plt.ylabel('Value')
plt.legend([p1], ["Expected improvement"])
plt.savefig('aq7.pdf')
#print m_augmented
# Convergence plot
#plt.subplot(1, 2, 2)
#plt.plot(x_res,res_max_EI3,'kx',mew=4.5)
#plt.title('Convergence to the maximum')
#plt.xlabel('iteration')
#plt.ylabel('Value')
#plt.ylim(-0.25,1.5)
#plt.plot(x_res,res_max_EI3,'g-',lw=2.5)
#axhline(y=0.6010,ls='--',c='red'#)
# 1.- Collect an new sample where the MPI indicates and attach to the previous dataset
x_new = max_EI3
y_new = -np.cos(2*np.pi*x_new) + np.sin(4*np.pi*x_new) + np.random.randn(1,1)*0.1
X = np.vstack([X,x_new])
Y = np.vstack([Y,y_new])
# 2.- Run and optimize the new GP model
k = GPy.kern.RBF(input_dim=1, variance=.1, lengthscale=.1)
m_augmented = GPy.models.GPRegression(X, Y, k)
m_augmented.constrain_positive('')
m_augmented.likelihood.fix(0.01)
m_augmented.optimize_restarts(num_restarts = 10, messages=0)
# 3.- Optimize aquisition function MPI
acqu_EI3 = EI_max(X_eval,m_augmented,0.5)
max_EI3 = X_eval[np.argmax(acqu_EI3)]
res_max_EI3 = np.vstack([res_max_EI3,max_EI3])
x_res = np.linspace(1,res_max_EI3.shape[0],res_max_EI3.shape[0])
# GP plot
plt.rcParams['figure.figsize'] = 10, 3
# GP plot
fest = m_augmented.predict(X_star)
plt.plot(X_star,fest[0],c='blue',lw=2,ls='-',mew=4)
plt.plot(X_star,fest[0]+1.96*np.sqrt(fest[1]),c='blue',lw=1,ls='-',mew=1)
plt.plot(X_star,fest[0]-1.96*np.sqrt(fest[1]),c='blue',lw=1,ls='-',mew=1)
plt.plot(X,Y,'kx',mew=2.5)
plt.title('GP model')
plt.xlabel('x')
plt.plot(X_star,Y_star,c='grey',lw=2,ls='--',mew=1.5)
plt.ylabel('f(x)')
plt.xlim(0,1)
plt.savefig('datamodel7.pdf')
# EI plot
plt.rcParams['figure.figsize'] = 10, 3
plt.figure(figsize=(10,3))
p1, = plt.plot(X_eval,(acqu_EI3-min(acqu_EI3))/(max(acqu_EI3-min(acqu_EI3)) or 1.), 'g-',lw=2.5)
plt.title('Acquisition function')
plt.xlim(0,1.01)
plt.ylim(-0.1,1.1)
plt.xlabel('x')
plt.ylabel('Value')
plt.legend([p1], ["Expected improvement"])
plt.savefig('aq7.pdf')
#print m_augmented
# Convergence plot
#plt.subplot(1, 2, 2)
#plt.plot(x_res,res_max_EI3,'kx',mew=4.5)
#plt.title('Convergence to the maximum')
#plt.xlabel('iteration')
#plt.ylabel('Value')
#plt.ylim(-0.25,1.5)
#plt.plot(x_res,res_max_EI3,'g-',lw=2.5)
#axhline(y=0.6010,ls='--',c='red'#)
# 1.- Collect an new sample where the MPI indicates and attach to the previous dataset
x_new = max_EI3
y_new = -np.cos(2*np.pi*x_new) + np.sin(4*np.pi*x_new) + np.random.randn(1,1)*0.1
X = np.vstack([X,x_new])
Y = np.vstack([Y,y_new])
# 2.- Run and optimize the new GP model
k = GPy.kern.RBF(input_dim=1, variance=.1, lengthscale=.1)
m_augmented = GPy.models.GPRegression(X, Y, k)
m_augmented.constrain_positive('')
m_augmented.likelihood.fix(0.01)
m_augmented.optimize_restarts(num_restarts = 10, messages=0)
# 3.- Optimize aquisition function MPI
acqu_EI3 = EI_max(X_eval,m_augmented,0.5)
max_EI3 = X_eval[np.argmax(acqu_EI3)]
res_max_EI3 = np.vstack([res_max_EI3,max_EI3])
x_res = np.linspace(1,res_max_EI3.shape[0],res_max_EI3.shape[0])
# GP plot
plt.rcParams['figure.figsize'] = 10, 3
# GP plot
fest = m_augmented.predict(X_star)
plt.plot(X_star,fest[0],c='blue',lw=2,ls='-',mew=4)
plt.plot(X_star,fest[0]+1.96*np.sqrt(fest[1]),c='blue',lw=1,ls='-',mew=1)
plt.plot(X_star,fest[0]-1.96*np.sqrt(fest[1]),c='blue',lw=1,ls='-',mew=1)
plt.plot(X,Y,'kx',mew=2.5)
plt.title('GP model')
plt.xlabel('x')
plt.plot(X_star,Y_star,c='grey',lw=2,ls='--',mew=1.5)
plt.ylabel('f(x)')
plt.xlim(0,1)
plt.savefig('datamodel7.pdf')
# EI plot
plt.rcParams['figure.figsize'] = 10, 3
plt.figure(figsize=(10,3))
p1, = plt.plot(X_eval,(acqu_EI3-min(acqu_EI3))/(max(acqu_EI3-min(acqu_EI3)) or 1.), 'g-',lw=2.5)
plt.title('Acquisition function')
plt.xlim(0,1.01)
plt.ylim(-0.1,1.1)
plt.xlabel('x')
plt.ylabel('Value')
plt.legend([p1], ["Expected improvement"])
plt.savefig('aq7.pdf')
#print m_augmented
# Convergence plot
#plt.subplot(1, 2, 2)
#plt.plot(x_res,res_max_EI3,'kx',mew=4.5)
#plt.title('Convergence to the maximum')
#plt.xlabel('iteration')
#plt.ylabel('Value')
#plt.ylim(-0.25,1.5)
#plt.plot(x_res,res_max_EI3,'g-',lw=2.5)
#axhline(y=0.6010,ls='--',c='red'#)
# 1.- Collect an new sample where the MPI indicates and attach to the previous dataset
x_new = max_EI3
y_new = -np.cos(2*np.pi*x_new) + np.sin(4*np.pi*x_new) + np.random.randn(1,1)*0.1
X = np.vstack([X,x_new])
Y = np.vstack([Y,y_new])
# 2.- Run and optimize the new GP model
k = GPy.kern.RBF(input_dim=1, variance=.1, lengthscale=.1)
m_augmented = GPy.models.GPRegression(X, Y, k)
m_augmented.constrain_positive('')
m_augmented.likelihood.fix(0.01)
m_augmented.optimize_restarts(num_restarts = 10, messages=0)
# 3.- Optimize aquisition function MPI
acqu_EI3 = EI_max(X_eval,m_augmented,0.5)
max_EI3 = X_eval[np.argmax(acqu_EI3)]
res_max_EI3 = np.vstack([res_max_EI3,max_EI3])
x_res = np.linspace(1,res_max_EI3.shape[0],res_max_EI3.shape[0])
# GP plot
plt.rcParams['figure.figsize'] = 10, 3
# GP plot
fest = m_augmented.predict(X_star)
plt.plot(X_star,fest[0],c='blue',lw=2,ls='-',mew=4)
plt.plot(X_star,fest[0]+1.96*np.sqrt(fest[1]),c='blue',lw=1,ls='-',mew=1)
plt.plot(X_star,fest[0]-1.96*np.sqrt(fest[1]),c='blue',lw=1,ls='-',mew=1)
plt.plot(X,Y,'kx',mew=2.5)
plt.title('GP model')
plt.xlabel('x')
plt.plot(X_star,Y_star,c='grey',lw=2,ls='--',mew=1.5)
plt.ylabel('f(x)')
plt.xlim(0,1)
plt.savefig('datamodel7.pdf')
# EI plot
plt.rcParams['figure.figsize'] = 10, 3
plt.figure(figsize=(10,3))
p1, = plt.plot(X_eval,(acqu_EI3-min(acqu_EI3))/(max(acqu_EI3-min(acqu_EI3)) or 1.), 'g-',lw=2.5)
plt.title('Acquisition function')
plt.xlim(0,1.01)
plt.ylim(-0.1,1.1)
plt.xlabel('x')
plt.ylabel('Value')
plt.legend([p1], ["Expected improvement"])
plt.savefig('aq7.pdf')
#print m_augmented
# Convergence plot
#plt.subplot(1, 2, 2)
#plt.plot(x_res,res_max_EI3,'kx',mew=4.5)
#plt.title('Convergence to the maximum')
#plt.xlabel('iteration')
#plt.ylabel('Value')
#plt.ylim(-0.25,1.5)
#plt.plot(x_res,res_max_EI3,'g-',lw=2.5)
#axhline(y=0.6010,ls='--',c='red'#)
# 1.- Collect an new sample where the MPI indicates and attach to the previous dataset
x_new = max_EI3
y_new = -np.cos(2*np.pi*x_new) + np.sin(4*np.pi*x_new) + np.random.randn(1,1)*0.1
X = np.vstack([X,x_new])
Y = np.vstack([Y,y_new])
# 2.- Run and optimize the new GP model
k = GPy.kern.RBF(input_dim=1, variance=.1, lengthscale=.1)
m_augmented = GPy.models.GPRegression(X, Y, k)
m_augmented.constrain_positive('')
m_augmented.likelihood.fix(0.01)
m_augmented.optimize_restarts(num_restarts = 10, messages=0)
# 3.- Optimize aquisition function MPI
acqu_EI3 = EI_max(X_eval,m_augmented,0.5)
max_EI3 = X_eval[np.argmax(acqu_EI3)]
res_max_EI3 = np.vstack([res_max_EI3,max_EI3])
x_res = np.linspace(1,res_max_EI3.shape[0],res_max_EI3.shape[0])
# GP plot
plt.rcParams['figure.figsize'] = 10, 3
# GP plot
fest = m_augmented.predict(X_star)
plt.plot(X_star,fest[0],c='blue',lw=2,ls='-',mew=4)
plt.plot(X_star,fest[0]+1.96*np.sqrt(fest[1]),c='blue',lw=1,ls='-',mew=1)
plt.plot(X_star,fest[0]-1.96*np.sqrt(fest[1]),c='blue',lw=1,ls='-',mew=1)
plt.plot(X,Y,'kx',mew=2.5)
plt.title('GP model')
plt.xlabel('x')
plt.plot(X_star,Y_star,c='grey',lw=2,ls='--',mew=1.5)
plt.ylabel('f(x)')
plt.xlim(0,1)
plt.savefig('datamodel7.pdf')
# EI plot
plt.rcParams['figure.figsize'] = 10, 3
plt.figure(figsize=(10,3))
p1, = plt.plot(X_eval,(acqu_EI3-min(acqu_EI3))/(max(acqu_EI3-min(acqu_EI3)) or 1.), 'g-',lw=2.5)
plt.title('Acquisition function')
plt.xlim(0,1.01)
plt.ylim(-0.1,1.1)
plt.xlabel('x')
plt.ylabel('Value')
plt.legend([p1], ["Expected improvement"])
plt.savefig('aq7.pdf')
#print m_augmented
# Convergence plot
#plt.subplot(1, 2, 2)
#plt.plot(x_res,res_max_EI3,'kx',mew=4.5)
#plt.title('Convergence to the maximum')
#plt.xlabel('iteration')
#plt.ylabel('Value')
#plt.ylim(-0.25,1.5)
#plt.plot(x_res,res_max_EI3,'g-',lw=2.5)
#axhline(y=0.6010,ls='--',c='red'#)
# 1.- Collect an new sample where the MPI indicates and attach to the previous dataset
x_new = max_EI3
y_new = -np.cos(2*np.pi*x_new) + np.sin(4*np.pi*x_new) + np.random.randn(1,1)*0.1
X = np.vstack([X,x_new])
Y = np.vstack([Y,y_new])
# 2.- Run and optimize the new GP model
k = GPy.kern.RBF(input_dim=1, variance=.1, lengthscale=.1)
m_augmented = GPy.models.GPRegression(X, Y, k)
m_augmented.constrain_positive('')
m_augmented.likelihood.fix(0.01)
m_augmented.optimize_restarts(num_restarts = 10, messages=0)
# 3.- Optimize the acquisition function (EI)
acqu_EI3 = EI_max(X_eval,m_augmented,0.5)
max_EI3 = X_eval[np.argmax(acqu_EI3)]
res_max_EI3 = np.vstack([res_max_EI3,max_EI3])
x_res = np.linspace(1,res_max_EI3.shape[0],res_max_EI3.shape[0])
# GP plot
plt.rcParams['figure.figsize'] = 10, 3
# GP plot
fest = m_augmented.predict(X_star)
plt.plot(X_star,fest[0],c='blue',lw=2,ls='-',mew=4)
plt.plot(X_star,fest[0]+1.96*np.sqrt(fest[1]),c='blue',lw=1,ls='-',mew=1)
plt.plot(X_star,fest[0]-1.96*np.sqrt(fest[1]),c='blue',lw=1,ls='-',mew=1)
plt.plot(X,Y,'kx',mew=2.5)
plt.title('GP model')
plt.xlabel('x')
plt.plot(X_star,Y_star,c='grey',lw=2,ls='--',mew=1.5)
plt.ylabel('f(x)')
plt.xlim(0,1)
plt.savefig('datamodel7.pdf')
# EI plot
plt.rcParams['figure.figsize'] = 10, 3
plt.figure(figsize=(10,3))
p1, = plt.plot(X_eval,(acqu_EI3-min(acqu_EI3))/(max(acqu_EI3-min(acqu_EI3)) or 1.), 'g-',lw=2.5)
plt.title('Acquisition function')
plt.xlim(0,1.01)
plt.ylim(-0.1,1.1)
plt.xlabel('x')
plt.ylabel('Value')
plt.legend([p1], ["Expected improvement"])
plt.savefig('aq7.pdf')
#print m_augmented
# Convergence plot
#plt.subplot(1, 2, 2)
#plt.plot(x_res,res_max_EI3,'kx',mew=4.5)
#plt.title('Convergence to the maximum')
#plt.xlabel('iteration')
#plt.ylabel('Value')
#plt.ylim(-0.25,1.5)
#plt.plot(x_res,res_max_EI3,'g-',lw=2.5)
#plt.axhline(y=0.6010,ls='--',c='red')
# 1.- Collect a new sample where the acquisition function (EI) indicates and attach it to the previous dataset
x_new = max_EI3
y_new = -np.cos(2*np.pi*x_new) + np.sin(4*np.pi*x_new) + np.random.randn(1,1)*0.1
X = np.vstack([X,x_new])
Y = np.vstack([Y,y_new])
# 2.- Run and optimize the new GP model
k = GPy.kern.RBF(input_dim=1, variance=.1, lengthscale=.1)
m_augmented = GPy.models.GPRegression(X, Y, k)
m_augmented.constrain_positive('')
m_augmented.likelihood.fix(0.01)
m_augmented.optimize_restarts(num_restarts = 10, messages=0)
# 3.- Optimize the acquisition function (EI)
acqu_EI3 = EI_max(X_eval,m_augmented,0.5)
max_EI3 = X_eval[np.argmax(acqu_EI3)]
res_max_EI3 = np.vstack([res_max_EI3,max_EI3])
x_res = np.linspace(1,res_max_EI3.shape[0],res_max_EI3.shape[0])
# GP plot
plt.rcParams['figure.figsize'] = 10, 3
# GP plot
fest = m_augmented.predict(X_star)
plt.plot(X_star,fest[0],c='blue',lw=2,ls='-',mew=4)
plt.plot(X_star,fest[0]+1.96*np.sqrt(fest[1]),c='blue',lw=1,ls='-',mew=1)
plt.plot(X_star,fest[0]-1.96*np.sqrt(fest[1]),c='blue',lw=1,ls='-',mew=1)
plt.plot(X,Y,'kx',mew=2.5)
plt.title('GP model')
plt.xlabel('x')
plt.plot(X_star,Y_star,c='grey',lw=2,ls='--',mew=1.5)
plt.ylabel('f(x)')
plt.xlim(0,1)
plt.savefig('datamodel7.pdf')
# EI plot
plt.rcParams['figure.figsize'] = 10, 3
plt.figure(figsize=(10,3))
p1, = plt.plot(X_eval,(acqu_EI3-min(acqu_EI3))/(max(acqu_EI3-min(acqu_EI3)) or 1.), 'g-',lw=2.5)
plt.title('Acquisition function')
plt.xlim(0,1.01)
plt.ylim(-0.1,1.1)
plt.xlabel('x')
plt.ylabel('Value')
plt.legend([p1], ["Expected improvement"])
plt.savefig('aq7.pdf')
#print m_augmented
# Convergence plot
#plt.subplot(1, 2, 2)
#plt.plot(x_res,res_max_EI3,'kx',mew=4.5)
#plt.title('Convergence to the maximum')
#plt.xlabel('iteration')
#plt.ylabel('Value')
#plt.ylim(-0.25,1.5)
#plt.plot(x_res,res_max_EI3,'g-',lw=2.5)
#plt.axhline(y=0.6010,ls='--',c='red')
# 1.- Collect a new sample where the acquisition function (EI) indicates and attach it to the previous dataset
x_new = max_EI3
y_new = -np.cos(2*np.pi*x_new) + np.sin(4*np.pi*x_new) + np.random.randn(1,1)*0.1
X = np.vstack([X,x_new])
Y = np.vstack([Y,y_new])
# 2.- Run and optimize the new GP model
k = GPy.kern.RBF(input_dim=1, variance=.1, lengthscale=.1)
m_augmented = GPy.models.GPRegression(X, Y, k)
m_augmented.constrain_positive('')
m_augmented.likelihood.fix(0.01)
m_augmented.optimize_restarts(num_restarts = 10, messages=0)
# 3.- Optimize the acquisition function (EI)
acqu_EI3 = EI_max(X_eval,m_augmented,0.5)
max_EI3 = X_eval[np.argmax(acqu_EI3)]
res_max_EI3 = np.vstack([res_max_EI3,max_EI3])
x_res = np.linspace(1,res_max_EI3.shape[0],res_max_EI3.shape[0])
# GP plot
plt.rcParams['figure.figsize'] = 10, 3
# GP plot
fest = m_augmented.predict(X_star)
plt.plot(X_star,fest[0],c='blue',lw=2,ls='-',mew=4)
plt.plot(X_star,fest[0]+1.96*np.sqrt(fest[1]),c='blue',lw=1,ls='-',mew=1)
plt.plot(X_star,fest[0]-1.96*np.sqrt(fest[1]),c='blue',lw=1,ls='-',mew=1)
plt.plot(X,Y,'kx',mew=2.5)
plt.title('GP model')
plt.xlabel('x')
plt.plot(X_star,Y_star,c='grey',lw=2,ls='--',mew=1.5)
plt.ylabel('f(x)')
plt.xlim(0,1)
plt.savefig('datamodel7.pdf')
# EI plot
plt.rcParams['figure.figsize'] = 10, 3
plt.figure(figsize=(10,3))
p1, = plt.plot(X_eval,(acqu_EI3-min(acqu_EI3))/(max(acqu_EI3-min(acqu_EI3)) or 1.), 'g-',lw=2.5)
plt.title('Acquisition function')
plt.xlim(0,1.01)
plt.ylim(-0.1,1.1)
plt.xlabel('x')
plt.ylabel('Value')
plt.legend([p1], ["Expected improvement"])
plt.savefig('aq7.pdf')
#print m_augmented
# Convergence plot
#plt.subplot(1, 2, 2)
#plt.plot(x_res,res_max_EI3,'kx',mew=4.5)
#plt.title('Convergence to the maximum')
#plt.xlabel('iteration')
#plt.ylabel('Value')
#plt.ylim(-0.25,1.5)
#plt.plot(x_res,res_max_EI3,'g-',lw=2.5)
#plt.axhline(y=0.6010,ls='--',c='red')
# 1.- Collect a new sample where the acquisition function (EI) indicates and attach it to the previous dataset
x_new = max_EI3
y_new = -np.cos(2*np.pi*x_new) + np.sin(4*np.pi*x_new) + np.random.randn(1,1)*0.1
X = np.vstack([X,x_new])
Y = np.vstack([Y,y_new])
# 2.- Run and optimize the new GP model
k = GPy.kern.RBF(input_dim=1, variance=.1, lengthscale=.1)
m_augmented = GPy.models.GPRegression(X, Y, k)
m_augmented.constrain_positive('')
m_augmented.likelihood.fix(0.01)
m_augmented.optimize_restarts(num_restarts = 10, messages=0)
# 3.- Optimize the acquisition function (EI)
acqu_EI3 = EI_max(X_eval,m_augmented,0.5)
max_EI3 = X_eval[np.argmax(acqu_EI3)]
res_max_EI3 = np.vstack([res_max_EI3,max_EI3])
x_res = np.linspace(1,res_max_EI3.shape[0],res_max_EI3.shape[0])
# GP plot
plt.rcParams['figure.figsize'] = 10, 3
# GP plot
fest = m_augmented.predict(X_star)
plt.plot(X_star,fest[0],c='blue',lw=2,ls='-',mew=4)
plt.plot(X_star,fest[0]+1.96*np.sqrt(fest[1]),c='blue',lw=1,ls='-',mew=1)
plt.plot(X_star,fest[0]-1.96*np.sqrt(fest[1]),c='blue',lw=1,ls='-',mew=1)
plt.plot(X,Y,'kx',mew=2.5)
plt.title('GP model')
plt.xlabel('x')
plt.plot(X_star,Y_star,c='grey',lw=2,ls='--',mew=1.5)
plt.ylabel('f(x)')
plt.xlim(0,1)
plt.savefig('datamodel7.pdf')
# EI plot
plt.rcParams['figure.figsize'] = 10, 3
plt.figure(figsize=(10,3))
p1, = plt.plot(X_eval,(acqu_EI3-min(acqu_EI3))/(max(acqu_EI3-min(acqu_EI3)) or 1.), 'g-',lw=2.5)
plt.title('Acquisition function')
plt.xlim(0,1.01)
plt.ylim(-0.1,1.1)
plt.xlabel('x')
plt.ylabel('Value')
plt.legend([p1], ["Expected improvement"])
plt.savefig('aq7.pdf')
#print m_augmented
# Convergence plot
#plt.subplot(1, 2, 2)
#plt.plot(x_res,res_max_EI3,'kx',mew=4.5)
#plt.title('Convergence to the maximum')
#plt.xlabel('iteration')
#plt.ylabel('Value')
#plt.ylim(-0.25,1.5)
#plt.plot(x_res,res_max_EI3,'g-',lw=2.5)
#plt.axhline(y=0.6010,ls='--',c='red')
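# --- Optional reference (added note): a minimal sketch of an expected-improvement
# acquisition for maximisation. This is NOT the EI_max used above (whose exact
# definition lives earlier in the notebook); it only illustrates the idea, assuming
# the third argument plays the role of an exploration offset xi.
from scipy.stats import norm

def EI_max_sketch(x, model, xi=0.5):
    mu, var = model.predict(x)        # GP posterior mean and variance at the query points
    sigma = np.sqrt(var)
    best = Y.max()                    # best observation collected so far (maximisation)
    z = (mu - best - xi) / sigma
    return (mu - best - xi) * norm.cdf(z) + sigma * norm.pdf(z)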
###Output
Optimization restart 1/10, f = 11.5400414078
Optimization restart 2/10, f = 11.5400414078
Optimization restart 3/10, f = 11.5400414078
Optimization restart 4/10, f = 11.5400414078
Optimization restart 5/10, f = 11.5400414078
Optimization restart 6/10, f = 11.5400414078
Optimization restart 7/10, f = 11.5400414079
Optimization restart 8/10, f = 11.5400414078
Optimization restart 9/10, f = 11.5400414078
Optimization restart 10/10, f = 11.5400414078
|
data-manipulation-exercises/Manipulacao_de_Dados_Ex_01.ipynb | ###Markdown
 Data Manipulation Exercises - Part 1 In this Jupyter notebook you will solve exercises using the Pandas library.\All the datasets used in the exercises are saved in the *datasets* folder.\All of your code should be run in this Jupyter Notebook. Finally, if you wish, review your answers with your mentor.
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
 Step 1. Importing the data. Load the data saved in the file ***datasets/users_dataset.csv***.\This file contains a dataset of workers, with 5 columns separated by the "|" (pipe) symbol and 943 rows. Tip: don't forget the *sep* argument when importing the data.
###Code
df = pd.read_csv('users_dataset.csv', sep='|')
###Output
_____no_output_____
###Markdown
 Step 2. Show the first 15 rows of the dataset. *Tip: use the DataFrame's head function*
###Code
df.head(15)
###Output
_____no_output_____
###Markdown
 Step 3. Show the last 10 rows of the dataset
###Code
df.tail(10)
###Output
_____no_output_____
###Markdown
 Step 4. How many rows and columns does the DataFrame have?
###Code
df.shape
###Output
_____no_output_____
###Markdown
 Step 5. Show the names of all the columns.
###Code
df.columns
###Output
_____no_output_____
###Markdown
 Step 6. What is the data type of each column?
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
 Step 7. Show the data in the *occupation* column.
###Code
df['occupation']
###Output
_____no_output_____
###Markdown
 Step 8. How many different occupations are there in this dataset?
###Code
df['occupation'].nunique()
###Output
_____no_output_____
###Markdown
 Step 9. What is the most frequent occupation?
###Code
df['occupation'].value_counts().idxmax()
###Output
_____no_output_____
###Markdown
 Step 10. What is the average age of the users?
###Code
round(df['age'].mean(), 2)
###Output
_____no_output_____
###Markdown
 Step 11. Use the describe() method to get a variety of information about the DataFrame.
###Code
df.describe(include = 'all')
###Output
_____no_output_____ |
notebooks/data_viz_to_coder/raw/ex6.ipynb | ###Markdown
In this exercise, you'll explore different chart styles, to see which color combinations and fonts you like best! SetupRun the next cell to import and configure the Python libraries that you need to complete the exercise.
###Code
import pandas as pd
pd.plotting.register_matplotlib_converters()
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
print("Setup Complete")
###Output
_____no_output_____
###Markdown
The questions below will give you feedback on your work. Run the following cell to set up our feedback system.
###Code
# Set up code checking
import os
if not os.path.exists("../input/spotify.csv"):
os.symlink("../input/data-for-datavis/spotify.csv", "../input/spotify.csv")
from learntools.core import binder
binder.bind(globals())
from learntools.data_viz_to_coder.ex6 import *
print("Setup Complete")
###Output
_____no_output_____
###Markdown
You'll work with a chart from the previous tutorial. Run the next cell to load the data.
###Code
# Path of the file to read
spotify_filepath = "../input/spotify.csv"
# Read the file into a variable spotify_data
spotify_data = pd.read_csv(spotify_filepath, index_col="Date", parse_dates=True)
###Output
_____no_output_____
###Markdown
Try out seaborn stylesRun the command below to try out the `"dark"` theme.
###Code
# Change the style of the figure
sns.set_style("dark")
# Line chart
plt.figure(figsize=(12,6))
sns.lineplot(data=spotify_data)
# Mark the exercise complete after the code cell is run
step_1.check()
###Output
_____no_output_____
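###Markdown
 The `"dark"` theme used above is only one of seaborn's built-in styles. As an optional extra (not part of the graded exercise), the sketch below redraws the same line chart once for each of the five built-in styles so you can compare them.
###Code
# Optional: cycle through all five built-in seaborn styles on the same data
for style in ["darkgrid", "whitegrid", "dark", "white", "ticks"]:
    sns.set_style(style)
    plt.figure(figsize=(12,6))
    plt.title(style)
    sns.lineplot(data=spotify_data)
    plt.show()
###Output
_____no_output_____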
###Markdown
In this exercise, you'll explore different chart styles, to see which color combinations and fonts you like best! SetupRun the next cell to import and configure the Python libraries that you need to complete the exercise.
###Code
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
print("Setup Complete")
###Output
_____no_output_____
###Markdown
The questions below will give you feedback on your work. Run the following cell to set up our feedback system.
###Code
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.data_viz_to_coder.ex6 import *
print("Setup Complete")
###Output
_____no_output_____
###Markdown
You'll work with a chart from the previous tutorial. Run the next cell to load the data.
###Code
# Path of the file to read
spotify_filepath = "../input/spotify.csv"
# Read the file into a variable spotify_data
spotify_data = pd.read_csv(spotify_filepath, index_col="Date", parse_dates=True)
###Output
_____no_output_____
###Markdown
Try out seaborn stylesRun the command below to try out the `"dark"` theme.
###Code
# Change the style of the figure
sns.set_style("dark")
# Line chart
plt.figure(figsize=(12,6))
sns.lineplot(data=spotify_data)
# Mark the exercise complete after the code cell is run
step_1.check()
###Output
_____no_output_____ |
Harris Corner Detection.ipynb | ###Markdown
Harris Corner Detection, Chessboard Import resources and display image
###Code
import matplotlib.pyplot as plt
import numpy as np
import cv2
%matplotlib inline
# Read in the image
image = cv2.imread('images/skewed_chessboard.png')
# Make a copy of the image
image_copy = np.copy(image)
# Change color to RGB (from BGR)
image_copy = cv2.cvtColor(image_copy, cv2.COLOR_BGR2RGB)
plt.imshow(image_copy)
###Output
_____no_output_____
###Markdown
Detect corners
###Code
# Convert to grayscale
gray = cv2.cvtColor(image_copy, cv2.COLOR_RGB2GRAY)
gray = np.float32(gray)
# Detect corners
dst = cv2.cornerHarris(gray, 2, 3, 0.04)
# Dilate corner image to enhance corner points
dst = cv2.dilate(dst,None)
plt.imshow(dst, cmap='gray')
###Output
_____no_output_____
###Markdown
Extract and display strong corners
###Code
# Define a threshold for extracting strong corners
# This value may vary depending on the image
thresh = 0.1*dst.max()
# Create an image copy to draw corners on
corner_image = np.copy(image_copy)
# Iterate through all the corners and draw them on the image (if they pass the threshold)
for j in range(0, dst.shape[0]):
for i in range(0, dst.shape[1]):
if(dst[j,i] > thresh):
# image, center pt, radius, color, thickness
cv2.circle( corner_image, (i, j), 2, (0,255,0), 1)
plt.imshow(corner_image)
###Output
_____no_output_____
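###Markdown
 Harris is not the only corner detector available in OpenCV. As an optional aside (not part of the original notebook), the sketch below runs the Shi-Tomasi detector, `cv2.goodFeaturesToTrack`, on the same chessboard image; the parameter values (100 corners, quality level 0.1, minimum distance 10) are illustrative choices, not tuned values.
###Code
import matplotlib.pyplot as plt
import numpy as np
import cv2

# Shi-Tomasi corners on the same chessboard image
shi_image = np.copy(image_copy)
shi_gray = cv2.cvtColor(shi_image, cv2.COLOR_RGB2GRAY)

# maxCorners=100, qualityLevel=0.1, minDistance=10 (illustrative values)
corners = cv2.goodFeaturesToTrack(shi_gray, 100, 0.1, 10)
corners = corners.astype(int)

for c in corners:
    x, y = c.ravel()
    cv2.circle(shi_image, (int(x), int(y)), 2, (0, 255, 0), 1)

plt.imshow(shi_image)
###Output
_____no_output_____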
###Markdown
Harris Corner Detection Import resources and display image
###Code
import matplotlib.pyplot as plt
import numpy as np
import cv2
%matplotlib inline
# Read in the image
image = cv2.imread('images/waffle.jpg')
# Make a copy of the image
image_copy = np.copy(image)
# Change color to RGB (from BGR)
image_copy = cv2.cvtColor(image_copy, cv2.COLOR_BGR2RGB)
plt.imshow(image_copy)
###Output
_____no_output_____
###Markdown
Detect corners
###Code
# Convert to grayscale
gray = cv2.cvtColor(image_copy, cv2.COLOR_RGB2GRAY)
gray = np.float32(gray)
# Detect corners
dst = cv2.cornerHarris(gray, 2, 3, 0.04)
# Dilate corner image to enhance corner points
dst = cv2.dilate(dst,None)
plt.imshow(dst, cmap='gray')
###Output
_____no_output_____
###Markdown
Extract and display strong corners
###Code
## TODO: Define a threshold for extracting strong corners
# This value may vary depending on the image and how many corners you want to detect
# Try changing this free parameter, 0.1, to be larger or smaller and see what happens
thresh = 0.1*dst.max()
# Create an image copy to draw corners on
corner_image = np.copy(image_copy)
# Iterate through all the corners and draw them on the image (if they pass the threshold)
for j in range(0, dst.shape[0]):
for i in range(0, dst.shape[1]):
if(dst[j,i] > thresh):
# image, center pt, radius, color, thickness
cv2.circle( corner_image, (i, j), 1, (0,255,0), 1)
plt.imshow(corner_image)
###Output
_____no_output_____
###Markdown
Harris Corner Detection Import resources and display image
###Code
import matplotlib.pyplot as plt
import numpy as np
import cv2
%matplotlib inline
# Read in the image
image = cv2.imread('images/waffle.jpg')
# Make a copy of the image
image_copy = np.copy(image)
# Change color to RGB (from BGR)
image_copy = cv2.cvtColor(image_copy, cv2.COLOR_BGR2RGB)
plt.imshow(image_copy)
###Output
_____no_output_____
###Markdown
Detect corners
###Code
# Convert to grayscale
gray = cv2.cvtColor(image_copy, cv2.COLOR_RGB2GRAY)
gray = np.float32(gray)
# Detect corners
dst = cv2.cornerHarris(gray, 2, 3, 0.04)
# Dilate corner image to enhance corner points
dst = cv2.dilate(dst,None)
plt.imshow(dst, cmap='gray')
###Output
_____no_output_____
###Markdown
Extract and display strong corners
###Code
## TODO: Define a threshold for extracting strong corners
# This value may vary depending on the image and how many corners you want to detect
# Try changing this free parameter, 0.1, to be larger or smaller and see what happens
thresh = 0.1*dst.max()
# Create an image copy to draw corners on
corner_image = np.copy(image_copy)
# Iterate through all the corners and draw them on the image (if they pass the threshold)
for j in range(0, dst.shape[0]):
for i in range(0, dst.shape[1]):
if(dst[j,i] > thresh):
# image, center pt, radius, color, thickness
cv2.circle( corner_image, (i, j), 1, (0,255,0), 1)
plt.imshow(corner_image)
###Output
_____no_output_____ |
tutorials/Image/07_morphological_operations.ipynb | ###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Morphological OperationsEarth Engine implements morphological operations as focal operations, specifically `focal_max()`, `focal_min()`, `focal_median()`, and `focal_mode()` instance methods in the `Image` class. (These are shortcuts for the more general `reduceNeighborhood()`, which can input the pixels in a kernel to any reducer with a numeric output. See [this page](https://developers.google.com/earth-engine/reducers_reduce_neighborhood) for more information on reducing neighborhoods). The morphological operators are useful for performing operations such as erosion, dilation, opening and closing. For example, to perform an [opening operation](http://en.wikipedia.org/wiki/Opening_(morphology)), use `focal_min()` followed by `focal_max()`: Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.foliumap`](https://github.com/giswqs/geemap/blob/master/geemap/foliumap.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.foliumap as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
 Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40, -100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
image = ee.Image('LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140318').select(4).gt(0.2)
Map.setCenter(-122.1899, 37.5010, 13)
Map.addLayer(image, {}, 'NIR threshold')
# Define a kernel.
kernel = ee.Kernel.circle(**{'radius': 1})
# Perform an erosion followed by a dilation, display.
opened = image.focal_min(**{'kernel': kernel, 'iterations': 2}).focal_max(
**{'kernel': kernel, 'iterations': 2}
)
Map.addLayer(opened, {}, 'opened')
Map.addLayerControl()
Map
###Output
_____no_output_____
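###Markdown
 The complementary [closing operation](http://en.wikipedia.org/wiki/Closing_(morphology)) simply reverses the order of the two focal operations: a dilation (`focal_max()`) followed by an erosion (`focal_min()`). The cell below is a small optional sketch (not part of the original tutorial) that applies a closing with the same kernel and adds the result to the map.
###Code
# Perform a dilation followed by an erosion (morphological closing), display.
closed = image.focal_max(**{'kernel': kernel, 'iterations': 2}).focal_min(
    **{'kernel': kernel, 'iterations': 2}
)
Map.addLayer(closed, {}, 'closed')
Map
###Output
_____no_output_____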
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Morphological OperationsEarth Engine implements morphological operations as focal operations, specifically `focal_max()`, `focal_min()`, `focal_median()`, and `focal_mode()` instance methods in the `Image` class. (These are shortcuts for the more general `reduceNeighborhood()`, which can input the pixels in a kernel to any reducer with a numeric output. See [this page](https://developers.google.com/earth-engine/reducers_reduce_neighborhood) for more information on reducing neighborhoods). The morphological operators are useful for performing operations such as erosion, dilation, opening and closing. For example, to perform an [opening operation](http://en.wikipedia.org/wiki/Opening_(morphology)), use `focal_min()` followed by `focal_max()`: Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
 Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
image = ee.Image('LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140318') \
.select(4).gt(0.2)
Map.setCenter(-122.1899, 37.5010, 13)
Map.addLayer(image, {}, 'NIR threshold')
# Define a kernel.
kernel = ee.Kernel.circle(**{'radius': 1})
# Perform an erosion followed by a dilation, display.
opened = image \
.focal_min(**{'kernel': kernel, 'iterations': 2}) \
.focal_max(**{'kernel': kernel, 'iterations': 2})
Map.addLayer(opened, {}, 'opened')
Map.addLayerControl()
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Morphological OperationsEarth Engine implements morphological operations as focal operations, specifically `focal_max()`, `focal_min()`, `focal_median()`, and `focal_mode()` instance methods in the `Image` class. (These are shortcuts for the more general `reduceNeighborhood()`, which can input the pixels in a kernel to any reducer with a numeric output. See [this page](https://developers.google.com/earth-engine/reducers_reduce_neighborhood) for more information on reducing neighborhoods). The morphological operators are useful for performing operations such as erosion, dilation, opening and closing. For example, to perform an [opening operation](http://en.wikipedia.org/wiki/Opening_(morphology)), use `focal_min()` followed by `focal_max()`: Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
 Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
image = ee.Image('LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140318') \
.select(4).gt(0.2)
Map.setCenter(-122.1899, 37.5010, 13)
Map.addLayer(image, {}, 'NIR threshold')
# Define a kernel.
kernel = ee.Kernel.circle(**{'radius': 1})
# Perform an erosion followed by a dilation, display.
opened = image \
.focal_min(**{'kernel': kernel, 'iterations': 2}) \
.focal_max(**{'kernel': kernel, 'iterations': 2})
Map.addLayer(opened, {}, 'opened')
Map.addLayerControl()
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Morphological OperationsEarth Engine implements morphological operations as focal operations, specifically `focal_max()`, `focal_min()`, `focal_median()`, and `focal_mode()` instance methods in the `Image` class. (These are shortcuts for the more general `reduceNeighborhood()`, which can input the pixels in a kernel to any reducer with a numeric output. See [this page](https://developers.google.com/earth-engine/reducers_reduce_neighborhood) for more information on reducing neighborhoods). The morphological operators are useful for performing operations such as erosion, dilation, opening and closing. For example, to perform an [opening operation](http://en.wikipedia.org/wiki/Opening_(morphology)), use `focal_min()` followed by `focal_max()`: Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.foliumap`](https://github.com/giswqs/geemap/blob/master/geemap/foliumap.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.foliumap as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
 Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
image = ee.Image('LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140318') \
.select(4).gt(0.2)
Map.setCenter(-122.1899, 37.5010, 13)
Map.addLayer(image, {}, 'NIR threshold')
# Define a kernel.
kernel = ee.Kernel.circle(**{'radius': 1})
# Perform an erosion followed by a dilation, display.
opened = image \
.focal_min(**{'kernel': kernel, 'iterations': 2}) \
.focal_max(**{'kernel': kernel, 'iterations': 2})
Map.addLayer(opened, {}, 'opened')
Map.addLayerControl()
Map
###Output
_____no_output_____ |
notebooks/tpu_colab_tutorial.ipynb | ###Markdown
 TPUs in Colab**Authors*** Gerardo Durán-Martín* Mahmoud Soliman Before starting this tutorial, make sure to configure your session correctly. 1. First we authenticate GCP to our current session
###Code
from google.colab import auth
auth.authenticate_user()
###Output
_____no_output_____
###Markdown
2. Next, we install GCloud SDK
###Code
!curl -S https://sdk.cloud.google.com | bash
###Output
_____no_output_____
###Markdown
3. Finally, we initialise all the variables we will be using throughout this tutorial.We will create a `.sh` file that must be called at every cell that begins with `%%bash` as follows:```bash%%bashsource /content/commands.sh ... rest of the commands```
###Code
%%writefile commands.sh
gcloud="/root/google-cloud-sdk/bin/gcloud"
gtpu="gcloud alpha compute tpus tpu-vm"
instance_name="probml-01-gerdm" # Modify for your instance name
tpu_zone="us-east1-d"
jax_install="pip install 'jax[tpu]>=0.2.16' -f https://storage.googleapis.com/jax-releases/libtpu_releases.html"
###Output
Overwriting commands.sh
###Markdown
 gcloudThis first section introduces the gcloud command line. We can work in the cloud in one of two ways:1. Using the command line (this tutorial)2. Using the google cloud console ([console.cloud.google.com](https://console.cloud.google.com/)) SetupOur first step is to install `gcloud alpha`.- Installing `gcloud alpha` We begin by installing the `gcloud alpha` command line. This will allow us to work with TPUs at Google cloud. Run the following command
###Code
%%bash
source /content/commands.sh
$gcloud components install alpha
###Output
All components are up to date.
###Markdown
Next, we set the project to `probml`
###Code
%%bash
source /content/commands.sh
$gcloud config set project probml
###Output
_____no_output_____
###Markdown
- Verify installationFinally, we verify that you've successfully installed `gcloud alpha` by running the following command. Make sure to have version `alpha 2021.06.25` or later.
###Code
%%bash
source /content/commands.sh
$gcloud -v
###Output
Google Cloud SDK 351.0.0
alpha 2021.07.30
bq 2.0.70
core 2021.07.30
gsutil 4.66
###Markdown
TPUS The basics Creating an instanceEach GSoC member obtains 8 v3-32 cores (or a Slice) when following the instructions outlined below.To create our first TPU instance, we run the following command. Note that `instance_name` should be unique (it was defined at the top of this tutorial)
###Code
%%bash
source /content/commands.sh
$gtpu create $instance_name \
--accelerator-type v3-32 \
--version v2-alpha \
--zone $tpu_zone
###Output
Create request issued for: [probml-01-gerdm]
Waiting for operation [projects/probml/locations/us-east1-d/operations/operation-1628065808121-5c8b79c2a006b-a528f872-851a3d0d] to complete...
.......................................................................................................................................................................................................................................................................................................................................................................................done.
Created tpu [probml-01-gerdm].
###Markdown
You can verify whether your instance has been created by running the following cell
###Code
%%bash
source /content/commands.sh
$gcloud alpha compute tpus list --zone $tpu_zone
###Output
NAME ZONE ACCELERATOR_TYPE NETWORK RANGE STATUS API_VERSION
probml-01-gerdm us-east1-d v3-32 default 10.142.0.0/20 READY V2_ALPHA1
murphyk-tpu us-east1-d v3-32 default 10.142.0.0/20 READY V2_ALPHA1
probml-05-srikar us-east1-d v3-32 default 10.142.0.0/20 READY V2_ALPHA1
probml-00-mjsml us-east1-d v3-32 default 10.142.0.0/20 READY V2_ALPHA1
###Markdown
Deleting an instanceTo avoid extra costs, it is important to delete the instance after use (training, testing experimenting, etc.).To delete an instance, we create and run a cell with the following content```bash%%bashsource /content/commands.sh$gtpu delete --quiet $instance_name --zone=$tpu_zone```**Make sure to delete your instance once you finish!!** Jax Installing JaxWhen connecting to an instance directly via ssh, it is important to note that running any Jax command will wait for the other hosts to be active. To void this, we have to run the desired code simultaneously on all the hosts.> To run JAX code on a TPU Pod slice, you must run the code **on each host in the TPU Pod slice.**In the next cell, we install Jax on each host of our slice.
###Code
%%bash
source /content/commands.sh
$gtpu ssh $instance_name \
--zone $tpu_zone \
--command "$jax_install" \
--worker all # or machine instance 1..3
###Output
_____no_output_____
###Markdown
Example 1: Hello, TPUs!In this example, we create a `hello_tpu.sh` that asserts whether we can connect to all of the hosts. First, we create the `.sh` file that will be run **in each of the workers**.
###Code
%%writefile hello_tpu.sh
#!/bin/bash
# file: hello_tpu.sh
export gist_url="https://gist.github.com/1e8d226e7a744d22d010ca4980456c3a.git"
git clone $gist_url hello_gsoc
python3 hello_gsoc/hello_tpu.py
###Output
Writing hello_tpu.sh
###Markdown
The content of `$gist_url` is the followingYou do not need to store the following file. Our script `hello_tpu.sh` will download the file to each of the hosts and run it.```python Taken from https://cloud.google.com/tpu/docs/jax-pods To be used by the Pyprobml GSoC 2021 team The following code snippet will be run on all TPU hostsimport jax The total number of TPU cores in the poddevice_count = jax.device_count() The number of TPU cores attached to this hostlocal_device_count = jax.local_device_count() The psum is performed over all mapped devices across the podxs = jax.numpy.ones(jax.local_device_count())r = jax.pmap(lambda x: jax.lax.psum(x, 'i'), axis_name='i')(xs) Print from a single host to avoid duplicated outputif jax.process_index() == 0: print('global device count:', jax.device_count()) print('local device count:', jax.local_device_count()) print('pmap result:', r)%```Next, we run the code across all workers
###Code
%%bash
source /content/commands.sh
$gtpu ssh $instance_name \
--zone $tpu_zone \
--command "$(<./hello_tpu.sh)" \
--worker all
###Output
global device count: 32
local device count: 8
pmap result: [32. 32. 32. 32. 32. 32. 32. 32.]
###Markdown
 Example 2: 🚧K-nearest neighbours🚧In this example we classify the MNIST dataset with a K-nearest-neighbours (KNN) algorithm parallelised via `pmap`. Our program clones a GitHub gist onto each of the hosts. We use the multi-device availability of our slice to delegate part of the work to each of the workers.First, we create the script that will be run on each of the workers
###Code
%%writefile knn_tpu.sh
#!/bin/bash
# file: knn_tpu.sh
export gist_url="https://gist.github.com/716a7bfd4c5c0c0e1949072f7b2e03a6.git"
pip3 install -q tensorflow_datasets
git clone $gist_url demo
python3 demo/knn_tpu.py
###Output
Writing knn_tpu.sh
###Markdown
Next, we run the script
###Code
%%bash
source /content/commands.sh
$gtpu ssh $instance_name \
--zone $tpu_zone \
--command "$(<./knn_tpu.sh)" \
--worker all
###Output
(8, 10, 20)
class_rate=0.9125
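###Markdown
 The gist itself is not reproduced in this notebook. For orientation only, the snippet below is a rough, self-contained sketch of how a nearest-neighbour classifier can be parallelised with `pmap`; the shapes, names and fake data are illustrative assumptions and do not correspond to the gist's actual code.
###Code
import jax
import jax.numpy as jnp

def nn_predict(x_test_shard, x_train, y_train):
    # Euclidean distance from each test point in this shard to every training point
    dists = jnp.linalg.norm(x_test_shard[:, None, :] - x_train[None, :, :], axis=-1)
    nearest = jnp.argmin(dists, axis=1)   # index of the closest training point
    return y_train[nearest]               # predicted label for each test point

# Fake data: 800 training points and 64 test points of dimension 784
key = jax.random.PRNGKey(0)
x_train = jax.random.normal(key, (800, 784))
y_train = jax.random.randint(key, (800,), 0, 10)
x_test = jax.random.normal(key, (64, 784))

# One shard of test points per local device
n_dev = jax.local_device_count()
x_test_shards = x_test.reshape(n_dev, -1, 784)

preds = jax.pmap(nn_predict, in_axes=(0, None, None))(x_test_shards, x_train, y_train)
print(preds.shape)   # (n_dev, 64 // n_dev)
###Output
_____no_output_____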
###Markdown
Running JAX on Cloud TPU VMs from Colab**Authors*** Gerardo Durán-Martín* Mahmoud Soliman* Kevin Murphy Define some global variables We create a `commands.sh` file that defines some macros.**Edit the values in this file to match your credentials**.This file must be called in every cell below that begins with `%%bash`
###Code
%%writefile commands.sh
gcloud="/root/google-cloud-sdk/bin/gcloud"
gtpu="gcloud alpha compute tpus tpu-vm"
jax_install="pip install 'jax[tpu]>=0.2.16' -f https://storage.googleapis.com/jax-releases/libtpu_releases.html"
# edit lines below
#instance_name="murphyk-v3-8"
#tpu_zone="us-central1-a"
#accelerator_type="v3-8"
instance_name="murphyk-tpu"
tpu_zone="us-east1-d"
accelerator_type="v3-32"
###Output
_____no_output_____
###Markdown
Setup GCP First we authenticate GCP to our current session
###Code
from google.colab import auth
auth.authenticate_user()
###Output
_____no_output_____
###Markdown
Next, we install GCloud SDK
###Code
%%capture
!curl -S https://sdk.cloud.google.com | bash
###Output
_____no_output_____
###Markdown
Now we install the gcloud command line interface This will allow us to work with TPUs at Google cloud. Run the following command
###Code
%%bash
source /content/commands.sh
$gcloud components install alpha
###Output
All components are up to date.
###Markdown
Next, we set the project to `probml`
###Code
%%bash
source /content/commands.sh
$gcloud config set project probml
###Output
Updated property [core/project].
###Markdown
- Verify installationFinally, we verify that you've successfully installed `gcloud alpha` by running the following command. Make sure to have version `alpha 2021.06.25` or later.
###Code
%%bash
source /content/commands.sh
$gcloud -v
###Output
Google Cloud SDK 358.0.0
alpha 2021.09.17
bq 2.0.71
core 2021.09.17
gsutil 4.68
###Markdown
Setup TPUs Creating an instanceEach GSoC member obtains 8 v3-32 cores (or a Slice) when following the instructions outlined below.To create our first TPU instance, we run the following command. Note that `instance_name` should be unique (it was defined at the top of this tutorial)
###Code
%%bash
source /content/commands.sh
$gtpu create $instance_name \
--accelerator-type $accelerator_type \
--version v2-alpha \
--zone $tpu_zone
###Output
ERROR: (gcloud.alpha.compute.tpus.tpu-vm.create) INVALID_ARGUMENT: Cloud TPU received a bad request. the accelerator v3-8 was not found in zone us-east1-d [EID: 0x72c898a0fe1c2eef]
###Markdown
You can verify whether your instance has been created by running the following cell
###Code
%%bash
source /content/commands.sh
$gcloud alpha compute tpus list --zone $tpu_zone
###Output
NAME ZONE ACCELERATOR_TYPE NETWORK RANGE STATUS API_VERSION
murphyk-tpu us-east1-d v3-32 default 10.142.0.0/20 READY V2_ALPHA1
mjsml-tpu us-east1-d v3-32 default 10.142.0.0/20 READY V2_ALPHA1
mjsml-tpu2 us-east1-d v3-128 default 10.142.0.0/20 READY V2_ALPHA1
###Markdown
Deleting an instanceTo avoid extra costs, it is important to delete the instance after use (training, testing experimenting, etc.).To delete an instance, we create and run a cell with the following content```bash%%bashsource /content/commands.sh$gtpu delete --quiet $instance_name --zone=$tpu_zone```**Make sure to delete your instance once you finish!!** Setup JAX When connecting to an instance directly via ssh, it is important to note that running any Jax command will wait for the other hosts to be active. To avoid this, we have to run the desired code simultaneously on all the hosts.Thus To run JAX code on a TPU Pod slice, you must run the code **on each host in the TPU Pod slice.**In the next cell, we install Jax on each host of our slice.
###Code
%%bash
source /content/commands.sh
$gtpu ssh $instance_name \
--zone $tpu_zone \
--command "$jax_install" \
--worker all # or machine instance 1..3
###Output
_____no_output_____
###Markdown
JAX examples Example 0https://cloud.google.com/tpu/docs/jax-pods Example 1: Hello, TPUs!In this example, we create a `hello_tpu.sh` that asserts whether we can connect to all of the hosts. First, we create the `.sh` file that will be run **in each of the workers**.
###Code
%%writefile hello_tpu.sh
#!/bin/bash
# file: hello_tpu.sh
export gist_url="https://gist.github.com/1e8d226e7a744d22d010ca4980456c3a.git"
git clone $gist_url hello_gsoc
python3 hello_gsoc/hello_tpu.py
###Output
Writing hello_tpu.sh
###Markdown
The content of `$gist_url` is the followingYou do not need to store the following file. Our script `hello_tpu.sh` will download the file to each of the hosts and run it.```python Taken from https://cloud.google.com/tpu/docs/jax-pods To be used by the Pyprobml GSoC 2021 team The following code snippet will be run on all TPU hostsimport jax The total number of TPU cores in the poddevice_count = jax.device_count() The number of TPU cores attached to this hostlocal_device_count = jax.local_device_count() The psum is performed over all mapped devices across the podxs = jax.numpy.ones(jax.local_device_count())r = jax.pmap(lambda x: jax.lax.psum(x, 'i'), axis_name='i')(xs) Print from a single host to avoid duplicated outputif jax.process_index() == 0: print('global device count:', jax.device_count()) print('local device count:', jax.local_device_count()) print('pmap result:', r)%```Next, we run the code across all workers
###Code
%%bash
source /content/commands.sh
$gtpu ssh $instance_name \
--zone $tpu_zone \
--command "$(<./hello_tpu.sh)" \
--worker all
###Output
global device count: 32
local device count: 8
pmap result: [32. 32. 32. 32. 32. 32. 32. 32.]
###Markdown
 Example 2: 🚧K-nearest neighbours🚧In this example we classify the MNIST dataset with a K-nearest-neighbours (KNN) algorithm parallelised via `pmap`. Our program clones a GitHub gist onto each of the hosts. We use the multi-device availability of our slice to delegate part of the work to each of the workers.First, we create the script that will be run on each of the workers
###Code
%%writefile knn_tpu.sh
#!/bin/bash
# file: knn_tpu.sh
export gist_url="https://gist.github.com/716a7bfd4c5c0c0e1949072f7b2e03a6.git"
pip3 install -q tensorflow_datasets
git clone $gist_url demo
python3 demo/knn_tpu.py
###Output
Writing knn_tpu.sh
###Markdown
Next, we run the script
###Code
%%bash
source /content/commands.sh
$gtpu ssh $instance_name \
--zone $tpu_zone \
--command "$(<./knn_tpu.sh)" \
--worker all
###Output
(8, 10, 20)
class_rate=0.9125
|
Practice/PyTorch/tutorial/DATASETS & DATALOADERS.ipynb | ###Markdown
https://pytorch.org/tutorials/beginner/basics/data_tutorial.html
###Code
import torch
from torch.utils.data import Dataset
from torchvision import datasets
from torchvision.transforms import ToTensor
import matplotlib.pyplot as plt
training_data = datasets.FashionMNIST(
root="test_data",
train=True,
download=True,
transform=ToTensor()
)
test_data = datasets.FashionMNIST(
root="test_data",
train=False,
download=True,
transform=ToTensor()
)
labels_map = {
0: "T-Shirt",
1: "Trouser",
2: "Pullover",
3: "Dress",
4: "Coat",
5: "Sandal",
6: "Shirt",
7: "Sneaker",
8: "Bag",
9: "Ankle Boot",
}
figure = plt.figure(figsize=(8, 8))
cols, rows = 3, 3
for i in range(1, cols * rows + 1):
sample_idx = torch.randint(len(training_data), size=(1,)).item()
img, label = training_data[sample_idx]
figure.add_subplot(rows, cols, i)
plt.title(labels_map[label])
plt.axis("off")
plt.imshow(img.squeeze(), cmap="gray")
plt.show()
import os
import pandas as pd
from torchvision.io import read_image
class CustomImageDataset(Dataset):
def __init__(self, annotations_file, img_dir, transform=None, target_transform=None):
self.img_labels = pd.read_csv(annotations_file)
self.img_dir = img_dir
self.transform = transform
self.target_transform = target_transform
def __len__(self):
return len(self.img_labels)
def __getitem__(self, idx):
img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0])
image = read_image(img_path)
label = self.img_labels.iloc[idx, 1]
if self.transform:
image = self.transform(image)
if self.target_transform:
label = self.target_transform(label)
return image, label
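# A minimal usage sketch for the custom dataset above (added note). The paths
# "labels.csv" and "data/images" are hypothetical -- substitute your own files:
#
# custom_data = CustomImageDataset(
#     annotations_file="labels.csv",   # CSV rows like: filename,label
#     img_dir="data/images",
# )
# custom_loader = DataLoader(custom_data, batch_size=64, shuffle=True)
# imgs, labels = next(iter(custom_loader))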
from torch.utils.data import DataLoader
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True)
# Display image and label.
train_features, train_labels = next(iter(train_dataloader))
print(f"Feature batch shape: {train_features.size()}")
print(f"Labels batch shape: {train_labels.size()}")
img = train_features[0].squeeze()
label = train_labels[0]
plt.imshow(img, cmap="gray")
plt.show()
print(f"Label: {label}")
###Output
Feature batch shape: torch.Size([64, 1, 28, 28])
Labels batch shape: torch.Size([64])
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.