path | concatenated_notebook
---|---
python/cython/cython.ipynb | ###Markdown
High Performance Python: Cython. Simplicity of Python and efficiency of C. We can write it in notebooks by loading the cython magic.
###Code
%%cython
def hello_snippet():
"""
after loading the cython magic, we can
run the cython code (this code isn't
different from normal python code)
"""
print('hello cython')
hello_snippet()
###Output
hello cython
###Markdown
Or write it as a .pyx file and use `setup.py` to compile it:

helloworld.pyx

```python
# cython hello world
def hello():
    print('Hello, World!')
```

setup.py

```python
# compiling the .pyx module
from distutils.core import setup
from Cython.Build import cythonize

# key-value pairs that tell distutils the name of the application and
# which extensions it needs to build; for the cython modules, we use
# glob patterns, e.g. *.pyx, or simply pass in the filename.pyx
setup(
    name = 'Hello',
    ext_modules = cythonize('*.pyx'),
)
```

After that, run `python setup.py build_ext --inplace` in the command line, and we can import it like normal python modules.
###Code
from helloworld import hello
hello()
###Output
Hello, World!
###Markdown
Static Typing. Cython extends the Python language with static type declarations. This increases speed by not needing to do type checks when running the program. The way we do this in Cython is by adding the `cdef` keyword. We'll write a simple program that increments j by 1 a thousand times and compare the speed difference when adding the type declaration.
###Code
%%cython
def example():
"""simply increment j by 1 for 1000 times"""
# declare the integer type before using it
cdef int i, j = 0
for i in range(1000):
j += 1
return j
def example_py():
j = 0
for i in range(1000):
j += 1
return j
%timeit example()
%timeit example_py()
###Output
10000 loops, best of 3: 50.2 µs per loop
###Markdown
Notice the runtime difference (look at the units). Functions. To declare functions, we use the `cpdef` keyword.
###Code
%%cython
cpdef int compute_sum(int a, int b):
return a + b
compute_sum(5, 3)
###Output
_____no_output_____
###Markdown
Notice that apart from declaring the function using the `cpdef` keyword, we also specify the return type to be an integer and the two input arguments to be integers. There's still an overhead to calling functions, so if the function is small and sits inside a computationally expensive for loop, we can add the `inline` keyword to the function declaration. By doing this, the compiler replaces the function call with the function body itself, reducing the cost of calling the function multiple times.
###Code
%%cython
cpdef inline int compute_sum(int a, int b):
return a + b
###Output
_____no_output_____
###Markdown
Numpy. Typed memoryviews allow even more efficient numpy manipulation since, again, they do not incur the Python overhead.
###Code
%%cython
import numpy as np
# declare memoryviews by using : in the []
cdef double[:, :] b = np.zeros((3, 3), dtype = 'float64')
b[1] = 1
# it now prints memoryview instead of original numpy array
print(b[0])
###Output
<MemoryView of 'ndarray' object>
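As a small aside (an added note, not part of the original notebook), a typed memoryview can be converted back into a regular numpy array with `np.asarray`, which is handy when returning results to plain Python code:

```python
%%cython
# convert a typed memoryview back to a numpy array with np.asarray
import numpy as np
cdef double[:, :] c = np.zeros((3, 3), dtype = 'float64')
c[1, :] = 1
print(np.asarray(c)[1])  # prints a plain numpy row: [ 1.  1.  1.]
```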
###Markdown
Pairwise Distance Example. We'll start with a simple version of the function that will give us a good benchmark for comparison with the Cython alternatives below.
###Code
import numpy as np
def euclidean_distance(x1, x2):
dist = np.sqrt( np.sum( (x1 - x2) ** 2 ) )
return dist
def pairwise_python(X, metric = 'euclidean'):
if metric == 'euclidean':
dist_func = euclidean_distance
else:
raise ValueError("unrecognized metric")
n_samples = X.shape[0]
D = np.zeros((n_samples, n_samples))
# We could exploit symmetry to reduce the number of computations required,
# i.e. distance D[i, j] = D[j, i]
# by only looping over its upper triangle
# but we'll skip that step for now:
for i in range(n_samples):
for j in range(i + 1, n_samples):
dist = dist_func(X[i], X[j])
D[i, j] = dist
D[j, i] = dist
return D
X = np.random.random((1000, 3))
%timeit pairwise_python(X)
###Output
1 loop, best of 3: 2.96 s per loop
###Markdown
We'll try re-writing this into Cython using typed memoryviews. The key thing with Cython is to avoid using Python objects and function calls as much as possible, including vectorized operations on numpy arrays. This usually means writing out all of the loops by hand and operating on single array elements at a time. All the commented `.pyx` code can be found in the [github folder](https://github.com/ethen8181/machine-learning/tree/master/python/cython).
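For reference, below is a minimal sketch of what such a typed-memoryview implementation might look like. This is a hypothetical illustration written for this explanation (the function name `pairwise_memview` is made up), not the actual `pairwise1.pyx` or `pairwise2.pyx` from the repository, which may differ in details.

```python
%%cython
# hypothetical sketch of a typed-memoryview pairwise distance function;
# the real pairwise1.pyx / pairwise2.pyx in the repo may differ
import numpy as np
from libc.math cimport sqrt

def pairwise_memview(double[:, :] X):
    cdef int i, j, k
    cdef int M = X.shape[0]
    cdef int N = X.shape[1]
    cdef double tmp, d
    cdef double[:, :] D = np.zeros((M, M), dtype = np.float64)
    for i in range(M):
        for j in range(i + 1, M):
            d = 0.0
            for k in range(N):
                tmp = X[i, k] - X[j, k]
                d += tmp * tmp
            D[i, j] = sqrt(d)
            D[j, i] = D[i, j]
    return np.asarray(D)
```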
###Code
# pairwise1.pyx
from pairwise1 import pairwise1
# test optimized code on a larger matrix
X = np.random.random((5000, 3))
%timeit pairwise1(X)
###Output
1 loop, best of 3: 717 ms per loop
###Markdown
We can see the huge speedup over the pure python version! It turns out, though, that we can do even better. If we look at the code, the slicing operations X[i] and X[j] must generate a new numpy array each time. So this time, we will index into the X array element by element without creating a new array each time.
###Code
from pairwise2 import pairwise2
%timeit pairwise2(X)
###Output
1 loop, best of 3: 256 ms per loop
###Markdown
We now try to utilize Cython's parallel functionality. (For some reason, we can't compile the parallel version when following [Cython's documentation](http://cython.readthedocs.io/en/latest/src/userguide/parallelism.html#compiling) on compiling a parallel version that utilizes OpenMP; we will come back to this in the future.) Instead, we had to take a different route by installing it as if it were a package.
###Code
from pairwise3 import pairwise3
%timeit pairwise3(X)
###Output
10 loops, best of 3: 160 ms per loop
###Markdown
To get the full advantage from Cython, it's good to know some C/C++ programming (things like the void type, pointers, and the standard library). Numba. Numba is an LLVM compiler for python code, which allows code written in Python to be converted to highly efficient compiled code in real time. To use it, we simply add a `@jit` (just-in-time compilation) decorator to our function. We can add arguments to the decorator to specify the input types, but it is recommended not to, and to simply let Numba decide when and how to optimize.
###Code
@jit
def pairwise_numba1(X):
M = X.shape[0]
N = X.shape[1]
D = np.zeros((M, M), dtype = np.float64)
for i in range(M):
for j in range(i + 1, M):
d = 0.0
for k in range(N):
tmp = X[i, k] - X[j, k]
d += tmp * tmp
dist = np.sqrt(d)
D[i, j] = dist
D[j, i] = dist
return D
# a nice speedup from the raw python
# code given the little amount of
# effort that we had to put in (just
# adding the jit decorator)
%timeit pairwise_numba1(X)
###Output
1 loop, best of 3: 245 ms per loop
###Markdown
High Performance Python: Cython. [Cython](http://docs.cython.org/en/latest/) is a superset of the Python programming language, designed to give C-like performance with code that is mostly written in Python. In short, it aims to give the simplicity of Python and the efficiency of C. If you'd like some additional motivation to try it out, consider listening to a 20-minute-ish talk from PyCon: [Youtube: Cython as a Game Changer for Efficiency](https://www.youtube.com/watch?v=_1MSX7V28Po). We can write it in notebooks by loading the cython magic.
###Code
%%cython
def hello_snippet():
"""
after loading the cython magic, we can
run the cython code (this code isn't
different from normal python code)
"""
print('hello cython')
hello_snippet()
###Output
hello cython
###Markdown
Or write it as a .pyx file and use `setup.py` to compile it:

helloworld.pyx

```python
# cython hello world
def hello():
    print('Hello, World!')
```

setup.py

```python
# compiling the .pyx module
from distutils.core import setup
from Cython.Build import cythonize

# key-value pairs that tell distutils the name of the application and
# which extensions it needs to build; for the cython modules, we use
# glob patterns, e.g. *.pyx, or simply pass in the filename.pyx
setup(
    name = 'Hello',
    ext_modules = cythonize('*.pyx'),
)
```

After that, run `python setup.py build_ext --inplace` in the command line, and we can import it like normal python modules.
###Code
from helloworld import hello
hello()
###Output
Hello, World!
###Markdown
Static Typing. Cython extends the Python language with static type declarations. This increases speed by not needing to do type checks when running the program. The way we do this in Cython is by adding the `cdef` keyword. We'll write a simple program that increments j by 1 a thousand times and compare the speed difference when adding the type declaration.
###Code
%%cython
def example():
"""simply increment j by 1 for 1000 times"""
# declare the integer type before using it
cdef int i, j = 0
for i in range(1000):
j += 1
return j
def example_py():
j = 0
for i in range(1000):
j += 1
return j
%timeit example()
%timeit example_py()
###Output
56.1 µs ± 706 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
###Markdown
Notice the runtime difference (look at the units). Functions. To declare functions, we use the `cpdef` keyword.
###Code
%%cython
cpdef int compute_sum(int a, int b):
return a + b
compute_sum(5, 3)
###Output
_____no_output_____
###Markdown
Notice that apart from declaring the function using the `cpdef` keyword, we also specify the return type to be an integer and the two input arguments to be integers. There's still an overhead to calling functions, so if the function is small and sits inside a computationally expensive for loop, we can add the `inline` keyword to the function declaration. By doing this, the compiler replaces the function call with the function body itself, reducing the cost of calling the function multiple times.
###Code
%%cython
cpdef inline int compute_sum(int a, int b):
return a + b
###Output
_____no_output_____
###Markdown
Numpy. Typed memoryviews allow even more efficient numpy manipulation since, again, they do not incur the Python overhead.
###Code
%%cython
import numpy as np
# declare memoryviews by using : in the []
cdef double[:, :] b = np.zeros((3, 3), dtype = 'float64')
b[1] = 1
# it now prints memoryview instead of original numpy array
print(b[0])
###Output
<MemoryView of 'ndarray' object>
###Markdown
Pairwise Distance Example. We'll start with a simple version of the function that will give us a good benchmark for comparison with the Cython alternatives below.
###Code
import numpy as np
def euclidean_distance(x1, x2):
dist = np.sqrt(np.sum((x1 - x2) ** 2))
return dist
def pairwise_python(X, metric = 'euclidean'):
if metric == 'euclidean':
dist_func = euclidean_distance
else:
raise ValueError("unrecognized metric")
n_samples = X.shape[0]
D = np.zeros((n_samples, n_samples))
# We could exploit symmetry to reduce the number of computations required,
# i.e. distance D[i, j] = D[j, i]
# by only looping over its upper triangle
# but we'll skip that step for now:
for i in range(n_samples):
for j in range(i + 1, n_samples):
dist = dist_func(X[i], X[j])
D[i, j] = dist
D[j, i] = dist
return D
X = np.random.random((1000, 3))
%timeit pairwise_python(X)
###Output
4.17 s ± 48.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
We'll try re-writing this into Cython using typed memoryviews. The key thing with Cython is to avoid using Python objects and function calls as much as possible, including vectorized operations on numpy arrays. This usually means writing out all of the loops by hand and operating on single array elements at a time. All the commented `.pyx` code can be found in the [github folder](https://github.com/ethen8181/machine-learning/tree/master/python/cython). You can simply run `python setup.py install` to install `pairwise1.pyx` and `pairwise2.pyx`.
###Code
# pairwise1.pyx
from pairwise1 import pairwise1
# test optimized code on a larger matrix
X = np.random.random((5000, 3))
%timeit pairwise1(X)
###Output
725 ms ± 15 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
We can see the huge speedup over the pure python version! It turns out, though, that we can do even better. If we look at the code, the slicing operations X[i] and X[j] must generate a new numpy array each time. So this time, we will index into the X array element by element without creating a new array each time.
###Code
# pairwise2.pyx
from pairwise2 import pairwise2
%timeit pairwise2(X)
###Output
267 ms ± 14.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
We now try to utilize Cython's parallel functionality. For some reason, we can't compile the parallel version when following [Cython's documentation](http://cython.readthedocs.io/en/latest/src/userguide/parallelism.html#compiling) on compiling a parallel version that utilizes OpenMP (a multithreading API); we will come back to this in the future. Instead, we had to take a different route by installing it as if it were a package. You can simply run `python setup_parallel.py install` to install `pairwise3.pyx`.
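As a rough sketch of that workaround (an added, hypothetical example; the real `setup_parallel.py` in the repository may differ, and the `-fopenmp` flags assume a gcc/clang toolchain, whereas MSVC uses `/openmp`):

```python
# hypothetical setup_parallel.py: builds an OpenMP-enabled pairwise3.pyx as an
# installable extension, run with `python setup_parallel.py install`
import numpy as np
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize

ext = Extension(
    'pairwise3',
    sources = ['pairwise3.pyx'],
    extra_compile_args = ['-fopenmp'],  # assumes gcc/clang; use /openmp on MSVC
    extra_link_args = ['-fopenmp'],
    include_dirs = [np.get_include()])

setup(name = 'pairwise3', ext_modules = cythonize([ext]))
```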
###Code
# pairwise3.pyx
from pairwise3 import pairwise3
%timeit pairwise3(X)
###Output
154 ms ± 2.71 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
To get the full advantage from Cython, it's good to know some C/C++ programming (things like the void type, pointers, and the standard library). Numba. [Numba](https://numba.pydata.org/) is an LLVM compiler for python code, which allows code written in Python to be converted to highly efficient compiled code in real time. To use it, we simply add a `@jit` (just-in-time compilation) decorator to our function. We can add arguments to the decorator to specify the input types, but it is recommended not to, and to simply let Numba decide when and how to optimize.
###Code
@jit
def pairwise_numba1(X):
M = X.shape[0]
N = X.shape[1]
D = np.zeros((M, M), dtype = np.float64)
for i in range(M):
for j in range(i + 1, M):
d = 0.0
for k in range(N):
tmp = X[i, k] - X[j, k]
d += tmp * tmp
dist = np.sqrt(d)
D[i, j] = dist
D[j, i] = dist
return D
# a nice speedup from the raw python code given the
# little amount of effort that we had to put in
# (just adding the jit decorator)
%timeit pairwise_numba1(X)
###Output
248 ms ± 5.91 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
The `@jit` decorator tells Numba to compile the function. The argument types will be inferred by Numba when the function is called. If Numba can't infer the types, it will fall back to python objects; when this happens, we probably won't see any significant speed up. The numba documentation lists out what [python](http://numba.pydata.org/numba-doc/dev/reference/pysupported.html) and [numpy](http://numba.pydata.org/numba-doc/dev/reference/numpysupported.html) features are supported. A number of keyword-only arguments can be passed to the `@jit` decorator, e.g. `nopython`. Numba has two compilation modes: nopython mode and object mode. The former produces much faster code, but has limitations that can force Numba to fall back to the latter. To prevent Numba from falling back, and instead raise an error, pass `nopython = True` to the decorator, so it becomes @jit(nopython = True). Or we can be even lazier and simply use the `@njit` decorator. The latest version (released around mid July 2017), `0.34.0`, also allows us to write parallel code by specifying the `parallel = True` argument to the decorator and changing `range` to `prange` to perform explicit parallel loops. Note that we must ensure the loop does not have cross-iteration dependencies.
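As a small added illustration (not from the original notebook), the two decorators below are equivalent ways of requesting nopython mode:

```python
from numba import jit, njit

# both versions request nopython mode: compilation fails loudly
# instead of silently falling back to object mode
@jit(nopython = True)
def add_jit(a, b):
    return a + b

@njit
def add_njit(a, b):
    return a + b

add_jit(1, 2), add_njit(1, 2)
```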
###Code
@njit(parallel = True)
def pairwise_numba2(X):
M = X.shape[0]
N = X.shape[1]
D = np.zeros((M, M), dtype = np.float64)
for i in prange(M):
for j in range(i + 1, M):
d = 0.0
for k in range(N):
tmp = X[i, k] - X[j, k]
d += tmp * tmp
dist = np.sqrt(d)
D[i, j] = dist
D[j, i] = dist
return D
%timeit pairwise_numba2(X)
###Output
106 ms ± 1.92 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Note that when we add the `@njit` decorator, we are marking a function for optimization by Numba's JIT (just-in-time) compiler, meaning the python code is compiled on the fly into optimized machine code the first time we invoke the function. In other words, we can see some additional speed boost the next time we call the function, since we won't have the initial compilation overhead.
###Code
%timeit pairwise_numba2(X)
###Output
108 ms ± 2.24 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
Table of Contents: 1 High Performance Python; 1.1 Cython; 1.1.1 Static Typing; 1.1.2 Functions; 1.1.3 Numpy; 1.2 Pairwise Distance Example; 1.3 Numba; 2 Reference
###Code
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(plot_style = False)
os.chdir(path)
# 1. magic to print version
# 2. magic so that the notebook will reload external python modules
# 3. to use cython
%load_ext watermark
%load_ext autoreload
%autoreload 2
%load_ext cython
import numpy as np
from numba import jit, njit, prange
%watermark -a 'Ethen' -d -t -v -p numpy,cython,numba
###Output
Ethen 2017-08-25 10:26:10
CPython 3.5.2
IPython 5.4.1
numpy 1.13.1
cython 0.25.2
numba 0.34.0
###Markdown
High Performance Python: Cython. [Cython](http://docs.cython.org/en/latest/) is a superset of the Python programming language, designed to give C-like performance with code that is mostly written in Python. In short, it aims to give the simplicity of Python and the efficiency of C. If you'd like some additional motivation to try it out, consider listening to a 20-minute-ish talk from PyCon: [Youtube: Cython as a Game Changer for Efficiency](https://www.youtube.com/watch?v=_1MSX7V28Po). We can write it in notebooks by loading the cython magic.
###Code
%%cython
def hello_snippet():
"""
after loading the cython magic, we can
run the cython code (this code isn't
different from normal python code)
"""
print('hello cython')
hello_snippet()
###Output
hello cython
###Markdown
Or write it as a .pyx file and use `setup.py` to compile it:

helloworld.pyx

```python
# cython hello world
def hello():
    print('Hello, World!')
```

setup.py

```python
# compiling the .pyx module
from distutils.core import setup
from Cython.Build import cythonize

# key-value pairs that tell distutils the name of the application and
# which extensions it needs to build; for the cython modules, we use
# glob patterns, e.g. *.pyx, or simply pass in the filename.pyx
setup(
    name = 'Hello',
    ext_modules = cythonize('*.pyx'),
)
```

After that, run `python setup.py build_ext --inplace` in the command line, and we can import it like normal python modules.
###Code
from helloworld import hello
hello()
###Output
Hello, World!
###Markdown
Static Typing. Cython extends the Python language with static type declarations. This increases speed by not needing to do type checks when running the program. The way we do this in Cython is by adding the `cdef` keyword. We'll write a simple program that increments j by 1 a thousand times and compare the speed difference when adding the type declaration.
###Code
%%cython
def example():
"""simply increment j by 1 for 1000 times"""
# declare the integer type before using it
cdef int i, j = 0
for i in range(1000):
j += 1
return j
def example_py():
j = 0
for i in range(1000):
j += 1
return j
%timeit example()
%timeit example_py()
###Output
10000 loops, best of 3: 49.4 µs per loop
###Markdown
Notice the runtime difference (look at the units). Functions. To declare functions, we use the `cpdef` keyword.
###Code
%%cython
cpdef int compute_sum(int a, int b):
return a + b
compute_sum(5, 3)
###Output
_____no_output_____
###Markdown
Notice that apart from declaring the function using the `cpdef` keyword, we also specify the return type to be an integer and the two input arguments to be integers. There's still an overhead to calling functions, so if the function is small and sits inside a computationally expensive for loop, we can add the `inline` keyword to the function declaration. By doing this, the compiler replaces the function call with the function body itself, reducing the cost of calling the function multiple times.
###Code
%%cython
cpdef inline int compute_sum(int a, int b):
return a + b
###Output
_____no_output_____
###Markdown
Numpy. Typed memoryviews allow even more efficient numpy manipulation since, again, they do not incur the Python overhead.
###Code
%%cython
import numpy as np
# declare memoryviews by using : in the []
cdef double[:, :] b = np.zeros((3, 3), dtype = 'float64')
b[1] = 1
# it now prints memoryview instead of original numpy array
print(b[0])
###Output
<MemoryView of 'ndarray' object>
###Markdown
Pairwise Distance Example We'll start by implementing a pure python version of the function that will give us a good benchmark for comparison with Cython alternatives below.
###Code
import numpy as np
def euclidean_distance(x1, x2):
dist = np.sqrt(np.sum((x1 - x2) ** 2))
return dist
def pairwise_python(X, metric = 'euclidean'):
if metric == 'euclidean':
dist_func = euclidean_distance
else:
raise ValueError("unrecognized metric")
n_samples = X.shape[0]
D = np.zeros((n_samples, n_samples))
# We could exploit symmetry to reduce the number of computations required,
# i.e. distance D[i, j] = D[j, i]
# by only looping over its upper triangle
for i in range(n_samples):
for j in range(i + 1, n_samples):
dist = dist_func(X[i], X[j])
D[i, j] = dist
D[j, i] = dist
return D
X = np.random.random((1000, 3))
%timeit pairwise_python(X)
###Output
1 loop, best of 3: 3.53 s per loop
###Markdown
We'll try re-writing this into Cython using typed memoryviews. The key thing with Cython is to avoid using Python objects and function calls as much as possible, including vectorized operations on numpy arrays. This usually means writing out all of the loops by hand and operating on single array elements at a time. All the commented `.pyx` code can be found in the [github folder](https://github.com/ethen8181/machine-learning/tree/master/python/cython). You can simply run `python setup.py install` to install `pairwise1.pyx` and `pairwise2.pyx`.
###Code
# pairwise1.pyx
from pairwise1 import pairwise1
# test optimized code on a larger matrix
X = np.random.random((5000, 3))
%timeit pairwise1(X)
###Output
1 loop, best of 3: 607 ms per loop
###Markdown
We can see the huge speedup over the pure python version! It turns out, though, that we can do even better. If we look at the code, the slicing operations X[i] and X[j] must generate a new numpy array each time. So this time, we will index into the X array element by element without creating a new array each time.
###Code
# pairwise2.pyx
from pairwise2 import pairwise2
%timeit pairwise2(X)
###Output
1 loop, best of 3: 231 ms per loop
###Markdown
We now try to utilize Cython's parallel functionality. For some reason, we can't compile the parallel version when following [Cython's documentation](http://cython.readthedocs.io/en/latest/src/userguide/parallelism.html#compiling) on compiling a parallel version that utilizes OpenMP (a multithreading API); we will come back to this in the future. Instead, we had to take a different route by installing it as if it were a package. You can simply run `python setup_parallel.py install` to install `pairwise3.pyx`.
###Code
# pairwise3.pyx
from pairwise3 import pairwise3
%timeit pairwise3(X)
###Output
10 loops, best of 3: 88.9 ms per loop
###Markdown
We've touched upon an example of utilizing Cython to speed up our CPU-intensive numerical operations. Though, to get the full advantage out of Cython, it's still good to know some C/C++ programming (things like the void type, pointers, and the standard library). Numba. [Numba](https://numba.pydata.org/) is an LLVM compiler for python code, which allows code written in Python to be converted to highly efficient compiled code in real time. To use it, we simply add a `@jit` (just-in-time compilation) decorator to our function. We can add arguments to the decorator to specify the input types, but it is recommended not to, and to simply let Numba decide when and how to optimize.
###Code
@jit
def pairwise_numba1(X):
M = X.shape[0]
N = X.shape[1]
D = np.zeros((M, M), dtype = np.float64)
for i in range(M):
for j in range(i + 1, M):
d = 0.0
for k in range(N):
tmp = X[i, k] - X[j, k]
d += tmp * tmp
dist = np.sqrt(d)
D[i, j] = dist
D[j, i] = dist
return D
# a nice speedup from the raw python code given the
# little amount of effort that we had to put in
# (just adding the jit decorator)
%timeit pairwise_numba1(X)
###Output
1 loop, best of 3: 236 ms per loop
###Markdown
The `@jit` decorator tells Numba to compile the function. The argument types will be inferred by Numba when the function is called. If Numba can't infer the types, it will fall back to python objects; when this happens, we probably won't see any significant speed up. The numba documentation lists out what [python](http://numba.pydata.org/numba-doc/dev/reference/pysupported.html) and [numpy](http://numba.pydata.org/numba-doc/dev/reference/numpysupported.html) features are supported. A number of keyword-only arguments can be passed to the `@jit` decorator, e.g. `nopython`. Numba has two compilation modes: nopython mode and object mode. The former produces much faster code, but has limitations that can force Numba to fall back to the latter. To prevent Numba from falling back, and instead raise an error, pass `nopython = True` to the decorator, so it becomes @jit(nopython = True). Or we can be even lazier and simply use the `@njit` decorator. The latest version (released around mid July 2017), `0.34.0`, also allows us to write parallel code by specifying the `parallel = True` argument to the decorator and changing `range` to `prange` to perform explicit parallel loops. Note that we must ensure the loop does not have cross-iteration dependencies.
###Code
@njit(parallel = True)
def pairwise_numba2(X):
M = X.shape[0]
N = X.shape[1]
D = np.zeros((M, M), dtype = np.float64)
for i in prange(M):
for j in range(i + 1, M):
d = 0.0
for k in range(N):
tmp = X[i, k] - X[j, k]
d += tmp * tmp
dist = np.sqrt(d)
D[i, j] = dist
D[j, i] = dist
return D
%timeit pairwise_numba2(X)
###Output
The slowest run took 5.27 times longer than the fastest. This could mean that an intermediate result is being cached.
1 loop, best of 3: 104 ms per loop
###Markdown
Note that when we add the `@njit` decorator, we are marking a function for optimization by Numba's JIT (just-in-time) compiler, meaning the python code is compiled on the fly into optimized machine code the first time we invoke the function. In other words, we can see some additional speed boost the next time we call the function, since we won't have the initial compilation overhead.
###Code
%timeit pairwise_numba2(X)
###Output
10 loops, best of 3: 105 ms per loop
###Markdown
High Performance Python: Cython. [Cython](http://docs.cython.org/en/latest/) is a superset of the Python programming language, designed to give C-like performance with code that is mostly written in Python. In short, it aims to give the simplicity of Python and the efficiency of C. If you'd like some additional motivation to try it out, consider listening to a 20-minute-ish talk from PyCon: [Youtube: Cython as a Game Changer for Efficiency](https://www.youtube.com/watch?v=_1MSX7V28Po). We can write it in notebooks by loading the cython magic.
###Code
%%cython
def hello_snippet():
"""
after loading the cython magic, we can
run the cython code (this code isn't
different from normal python code)
"""
print('hello cython')
hello_snippet()
###Output
hello cython
###Markdown
Or write it as a .pyx file and use `setup.py` to compile it:

helloworld.pyx

```python
# cython hello world
def hello():
    print('Hello, World!')
```

setup.py

```python
# compiling the .pyx module
from distutils.core import setup
from Cython.Build import cythonize

# key-value pairs that tell distutils the name of the application and
# which extensions it needs to build; for the cython modules, we use
# glob patterns, e.g. *.pyx, or simply pass in the filename.pyx
setup(
    name = 'Hello',
    ext_modules = cythonize('*.pyx'),
)
```

After that, run `python setup.py build_ext --inplace` in the command line, and we can import it like normal python modules.
###Code
from helloworld import hello
hello()
###Output
Hello, World!
###Markdown
Static Typing. Cython extends the Python language with static type declarations. This increases speed by not needing to do type checks when running the program. The way we do this in Cython is by adding the `cdef` keyword. We'll write a simple program that increments j by 1 a thousand times and compare the speed difference when adding the type declaration.
###Code
%%cython
def example():
"""simply increment j by 1 for 1000 times"""
# declare the integer type before using it
cdef int i, j = 0
for i in range(1000):
j += 1
return j
def example_py():
j = 0
for i in range(1000):
j += 1
return j
%timeit example()
%timeit example_py()
###Output
10000 loops, best of 3: 49.4 µs per loop
###Markdown
Notice the runtime difference (look at the units). Functions. To declare functions, we use the `cpdef` keyword.
###Code
%%cython
cpdef int compute_sum(int a, int b):
return a + b
compute_sum(5, 3)
###Output
_____no_output_____
###Markdown
Notice that apart from declaring the function using the `cpdef` keyword, we also specify the return type to be an integer and the two input arguments to be integers. There's still an overhead to calling functions, so if the function is small and sits inside a computationally expensive for loop, we can add the `inline` keyword to the function declaration. By doing this, the compiler replaces the function call with the function body itself, reducing the cost of calling the function multiple times.
###Code
%%cython
cpdef inline int compute_sum(int a, int b):
return a + b
###Output
_____no_output_____
###Markdown
Numpy. Typed memoryviews allow even more efficient numpy manipulation since, again, they do not incur the Python overhead.
###Code
%%cython
import numpy as np
# declare memoryviews by using : in the []
cdef double[:, :] b = np.zeros((3, 3), dtype = 'float64')
b[1] = 1
# it now prints memoryview instead of original numpy array
print(b[0])
###Output
<MemoryView of 'ndarray' object>
###Markdown
Pairwise Distance Example. We'll start with a simple version of the function that will give us a good benchmark for comparison with the Cython alternatives below.
###Code
import numpy as np
def euclidean_distance(x1, x2):
dist = np.sqrt(np.sum((x1 - x2) ** 2))
return dist
def pairwise_python(X, metric = 'euclidean'):
if metric == 'euclidean':
dist_func = euclidean_distance
else:
raise ValueError("unrecognized metric")
n_samples = X.shape[0]
D = np.zeros((n_samples, n_samples))
# We could exploit symmetry to reduce the number of computations required,
# i.e. distance D[i, j] = D[j, i]
# by only looping over its upper triangle
for i in range(n_samples):
for j in range(i + 1, n_samples):
dist = dist_func(X[i], X[j])
D[i, j] = dist
D[j, i] = dist
return D
X = np.random.random((1000, 3))
%timeit pairwise_python(X)
###Output
1 loop, best of 3: 3.53 s per loop
###Markdown
We'll try re-writing this into Cython using typed memoryviews. The key thing with Cython is to avoid using Python objects and function calls as much as possible, including vectorized operations on numpy arrays. This usually means writing out all of the loops by hand and operating on single array elements at a time. All the commented `.pyx` code can be found in the [github folder](https://github.com/ethen8181/machine-learning/tree/master/python/cython). You can simply run `python setup.py install` to install `pairwise1.pyx` and `pairwise2.pyx`.
###Code
# pairwise1.pyx
from pairwise1 import pairwise1
# test optimized code on a larger matrix
X = np.random.random((5000, 3))
%timeit pairwise1(X)
###Output
1 loop, best of 3: 607 ms per loop
###Markdown
We can see the huge speedup over the pure python version! It turns out, though, that we can do even better. If we look at the code, the slicing operations X[i] and X[j] must generate a new numpy array each time. So this time, we will index into the X array element by element without creating a new array each time.
###Code
# pairwise2.pyx
from pairwise2 import pairwise2
%timeit pairwise2(X)
###Output
1 loop, best of 3: 231 ms per loop
###Markdown
We now try to utilize Cython's parallel functionality. For some reason, we can't compile the parallel version when following [Cython's documentation](http://cython.readthedocs.io/en/latest/src/userguide/parallelism.html#compiling) on compiling a parallel version that utilizes OpenMP (a multithreading API); we will come back to this in the future. Instead, we had to take a different route by installing it as if it were a package. You can simply run `python setup_parallel.py install` to install `pairwise3.pyx`.
###Code
# pairwise3.pyx
from pairwise3 import pairwise3
%timeit pairwise3(X)
###Output
10 loops, best of 3: 88.9 ms per loop
###Markdown
We've touched upon an example of utilizing Cython to speed up our CPU-intensive numerical operations. Though, to get the full advantage out of Cython, it's still good to know some C/C++ programming (things like the void type, pointers, and the standard library). Numba. [Numba](https://numba.pydata.org/) is an LLVM compiler for python code, which allows code written in Python to be converted to highly efficient compiled code in real time. To use it, we simply add a `@jit` (just-in-time compilation) decorator to our function. We can add arguments to the decorator to specify the input types, but it is recommended not to, and to simply let Numba decide when and how to optimize.
###Code
@jit
def pairwise_numba1(X):
M = X.shape[0]
N = X.shape[1]
D = np.zeros((M, M), dtype = np.float64)
for i in range(M):
for j in range(i + 1, M):
d = 0.0
for k in range(N):
tmp = X[i, k] - X[j, k]
d += tmp * tmp
dist = np.sqrt(d)
D[i, j] = dist
D[j, i] = dist
return D
# a nice speedup from the raw python code given the
# little amount of effort that we had to put in
# (just adding the jit decorator)
%timeit pairwise_numba1(X)
###Output
1 loop, best of 3: 236 ms per loop
###Markdown
The `@jit` decorator tells Numba to compile the function. The argument types will be inferred by Numba when the function is called. If Numba can't infer the types, it will fall back to python objects; when this happens, we probably won't see any significant speed up. The numba documentation lists out what [python](http://numba.pydata.org/numba-doc/dev/reference/pysupported.html) and [numpy](http://numba.pydata.org/numba-doc/dev/reference/numpysupported.html) features are supported. A number of keyword-only arguments can be passed to the `@jit` decorator, e.g. `nopython`. Numba has two compilation modes: nopython mode and object mode. The former produces much faster code, but has limitations that can force Numba to fall back to the latter. To prevent Numba from falling back, and instead raise an error, pass `nopython = True` to the decorator, so it becomes @jit(nopython = True). Or we can be even lazier and simply use the `@njit` decorator. The latest version (released around mid July 2017), `0.34.0`, also allows us to write parallel code by specifying the `parallel = True` argument to the decorator and changing `range` to `prange` to perform explicit parallel loops. Note that we must ensure the loop does not have cross-iteration dependencies.
###Code
@njit(parallel = True)
def pairwise_numba2(X):
M = X.shape[0]
N = X.shape[1]
D = np.zeros((M, M), dtype = np.float64)
for i in prange(M):
for j in range(i + 1, M):
d = 0.0
for k in range(N):
tmp = X[i, k] - X[j, k]
d += tmp * tmp
dist = np.sqrt(d)
D[i, j] = dist
D[j, i] = dist
return D
%timeit pairwise_numba2(X)
###Output
The slowest run took 5.27 times longer than the fastest. This could mean that an intermediate result is being cached.
1 loop, best of 3: 104 ms per loop
###Markdown
Note that when we add the `@njit` decorator, we are marking a function for optimization by Numba's JIT (just-in-time) compiler, meaning the python code is compiled on the fly into optimized machine code the first time we invoke the function. In other words, we can see some additional speed boost the next time we call the function, since we won't have the initial compilation overhead.
###Code
%timeit pairwise_numba2(X)
###Output
10 loops, best of 3: 105 ms per loop
|
Pre-training_Post-training_and_Joint_Optimization/CNN2/.ipynb_checkpoints/CNN_2_Mnist_V1-checkpoint.ipynb | ###Markdown
Integer Only
###Code
def Quant_integer(model_name, filename):
try:
model = tf.keras.models.load_model(model_name)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
mnist_train, _ = tf.keras.datasets.mnist.load_data()
images = tf.cast(mnist_train[0], tf.float32) / 255.0
mnist_ds = tf.data.Dataset.from_tensor_slices((images)).batch(1)
def representative_data_gen():
for input_value in mnist_ds.take(100):
yield [input_value]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8 # or tf.uint8
converter.inference_output_type = tf.int8 # or tf.uint8
tflite_int_quant_model = converter.convert()
filename = filename+'.tflite'
tflite_models_dir = pathlib.Path("tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
tflite_model_integeronly_file = tflite_models_dir/filename
tflite_model_integeronly_file.write_bytes(tflite_int_quant_model)
return f'Converted - path {tflite_model_integeronly_file}'
except Exception as e:
return str(e)
Quant_integer('./1_mnist_model.h5', '13_mnist_integeronly_model')
Quant_integer('./2_mnist_model_pruning.h5', '14_mnist_Integeronly_pruning_model')
Quant_integer('3_mnist_model_qaware.h5','15_mnist_qaware_integer_model')
converter = tf.lite.TFLiteConverter.from_keras_model(q_aware_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
mnist_train, _ = tf.keras.datasets.mnist.load_data()
images = tf.cast(mnist_train[0], tf.float32) / 255.0
mnist_ds = tf.data.Dataset.from_tensor_slices((images)).batch(1)
def representative_data_gen():
for input_value in mnist_ds.take(100):
yield [input_value]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8 # or tf.uint8
converter.inference_output_type = tf.int8 # or tf.uint8
tflite_int_quant_model = converter.convert()
# `filename` was undefined in this standalone cell; give the q-aware integer-only model an explicit name (assumed)
filename = '15_mnist_qaware_integeronly_model.tflite'
tflite_models_dir = pathlib.Path("tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
tflite_model_integeronly_file = tflite_models_dir/filename
tflite_model_integeronly_file.write_bytes(tflite_int_quant_model)
###Output
INFO:tensorflow:Assets written to: /tmp/tmpy49q8vs1/assets
###Markdown
Evaluate Model
###Code
import time
###Output
_____no_output_____
###Markdown
Keras model Evaluation
###Code
def evaluate_keras_model_single_unit(model_path):
start_time_infer = time.time()
model = tf.keras.models.load_model(model_path, compile = True)
model.compile(optimizer='adam',
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
data = X_test[0]
data = data.reshape((1, 28, 28))
data_y = y_test[0:1]
score = model.evaluate(data, data_y, verbose=0)
result = {'Time to single unit infer': (time.time() - start_time_infer),
'Score' : score[1]}
return result
evaluate_keras_model_single_unit('./1_mnist_model.h5')
evaluate_keras_model_single_unit('./2_mnist_model_pruning.h5')
def evaluate_keras_model_test_set(model_path):
start_time_infer = time.time()
model = tf.keras.models.load_model(model_path, compile = True)
model.compile(optimizer='adam',
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
    score = model.evaluate(X_test, y_test, verbose=0)
    result = {'Time to test set infer': (time.time() - start_time_infer),
'Score' : score[1]}
return result
evaluate_keras_model_test_set('./1_mnist_model.h5')
evaluate_keras_model_test_set('./2_mnist_model_pruning.h5')
###Output
WARNING:tensorflow:No training configuration found in the save file, so the model was *not* compiled. Compile it manually.
###Markdown
TF Lite Model Evaluation
###Code
# Evaluate the model on the full test set
def evaluate_tflite_model_test_set(interpreter):
start_time = time.time()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Run predictions on every image in the "test" dataset.
prediction_digits = []
for test_image in X_test:
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
# Compare prediction results with ground truth labels to calculate accuracy.
accurate_count = 0
for index in range(len(prediction_digits)):
if prediction_digits[index] == y_test[index]:
accurate_count += 1
accuracy = accurate_count * 1.0 / len(prediction_digits)
results = {'time': (time.time() - start_time),
'accuracy': accuracy}
return results
###Output
_____no_output_____
###Markdown
TF Lite Models
###Code
# TF Lite
tflite_model_file = 'tflite_models/4_mnist_model.tflite'
interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter.allocate_tensors()
evaluate_tflite_model_test_set(interpreter)
# Pruning TF Lite
tflite_pruning_model_file = 'tflite_models/5_mnist_pruning_model.tflite'
interpreter_pruning = tf.lite.Interpreter(model_path=str(tflite_pruning_model_file))
interpreter_pruning.allocate_tensors()
evaluate_tflite_model_test_set(interpreter_pruning)
# Qaware Model
tflite_model_file = '6_mnist_model_qaware.tflite'
interpreter_qaware = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter_qaware.allocate_tensors()
evaluate_tflite_model_test_set(interpreter_qaware)
###Output
_____no_output_____
###Markdown
Integer Float TF Lite models
###Code
# TF Lite
tflite_model_file = 'tflite_models/7_mnist_Integer_float_model.tflite'
interpreter_int_float = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter_int_float.allocate_tensors()
evaluate_tflite_model_test_set(interpreter_int_float)
# Pruning TF Lite
tflite_pruning_model_file = 'tflite_models/8_mnist_pruning_Integer_float_model.tflite'
interpreter_int_float_pruning = tf.lite.Interpreter(model_path=str(tflite_pruning_model_file))
interpreter_int_float_pruning.allocate_tensors()
evaluate_tflite_model_test_set(interpreter_int_float_pruning)
# Q-aware TF Lite
tflite_qaware_model_file = 'tflite_models/9_mnist_Qaware_Integer_float_model.tflite'
interpreter_tflite_qaware = tf.lite.Interpreter(model_path=str(tflite_qaware_model_file))
interpreter_tflite_qaware.allocate_tensors()
evaluate_tflite_model_test_set(interpreter_tflite_qaware)
###Output
_____no_output_____
###Markdown
Float Tflite
###Code
# TF Lite
tflite_model_file = 'tflite_models/10_mnist_float16_model.tflite'
interpreter_float = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter_float.allocate_tensors()
evaluate_tflite_model_test_set(interpreter_float)
# Pruning TF Lite
tflite_pruning_model_file = 'tflite_models/11_mnist_float_pruning_model.tflite'
interpreter_float_pruning = tf.lite.Interpreter(model_path=str(tflite_pruning_model_file))
interpreter_float_pruning.allocate_tensors()
evaluate_tflite_model_test_set(interpreter_float_pruning)
tflite_qaware_model_file = 'tflite_models/12_mnist_Qaware_float16_model.tflite'
interpreter_tflite_qaware = tf.lite.Interpreter(model_path=str(tflite_qaware_model_file))
interpreter_tflite_qaware.allocate_tensors()
evaluate_tflite_model_test_set(interpreter_tflite_qaware)
###Output
_____no_output_____
###Markdown
Integer Only TFlite
###Code
# TF Lite
tflite_model_file = 'tflite_models/13_mnist_integeronly_model.tflite'
interpreter_int = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter_int.allocate_tensors()
evaluate_tflite_model_test_set(interpreter_int)
# Pruning TF Lite
tflite_pruning_model_file = 'tflite_models/14_mnist_Integeronly_pruning_model.tflite'
interpreter_int_pruning = tf.lite.Interpreter(model_path=str(tflite_pruning_model_file))
interpreter_int_pruning.allocate_tensors()
evaluate_tflite_model_test_set(interpreter_int_pruning)
###Output
_____no_output_____
###Markdown
Single unit Evaluate
###Code
# Evaluate the model on a single input
def evaluate_tflite_model_single_unit(interpreter):
start_time = time.time()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
test_image = np.expand_dims(X_test[0], axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
    # Grab a handle to the output tensor; we only measure latency here,
    # so we skip the argmax post-processing step.
    output = interpreter.tensor(output_index)
results = {'time': (time.time() - start_time)}
return results
# TF Lite
evaluate_tflite_model_single_unit(interpreter)
evaluate_tflite_model_single_unit(interpreter_pruning)
evaluate_tflite_model_single_unit(interpreter_int_float)
evaluate_tflite_model_single_unit(interpreter_qaware)
evaluate_tflite_model_single_unit(interpreter_int_float_pruning)
evaluate_tflite_model_single_unit(interpreter_float)
evaluate_tflite_model_single_unit(interpreter_float_pruning)
evaluate_tflite_model_single_unit(interpreter_int)
evaluate_tflite_model_single_unit(interpreter_float_pruning)
###Output
_____no_output_____
###Markdown
Import Package
###Code
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
import traceback
import contextlib
import pathlib
###Output
_____no_output_____
###Markdown
Load Dataset
###Code
mnist = tf.keras.datasets.mnist
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print("Train Image shape:", X_train.shape, "Test Image shape:", X_test.shape)
# Normalize the images
X_train = X_train / 255.0
X_test = X_test / 255.0
###Output
_____no_output_____
###Markdown
Conv2D Base Model
###Code
model = keras.Sequential([
keras.layers.InputLayer(input_shape=(28, 28)),
keras.layers.Reshape(target_shape=(28, 28, 1)),
keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation='relu'),
keras.layers.MaxPooling2D(pool_size=(2, 2)),
keras.layers.Flatten(),
keras.layers.Dense(10)
])
# Model summary
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
reshape (Reshape) (None, 28, 28, 1) 0
_________________________________________________________________
conv2d (Conv2D) (None, 26, 26, 12) 120
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 13, 13, 12) 0
_________________________________________________________________
flatten (Flatten) (None, 2028) 0
_________________________________________________________________
dense (Dense) (None, 10) 20290
=================================================================
Total params: 20,410
Trainable params: 20,410
Non-trainable params: 0
_________________________________________________________________
###Markdown
Train Conv2D Base Model
###Code
model.compile(optimizer='adam',
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(X_train,
y_train,
batch_size=64,
epochs=10,
validation_data=(X_test, y_test))
# Saving Model
model.save('1_mnist_model.h5')
# Evaluate the model on test set
score = model.evaluate(X_test, y_test, verbose=0)
# Print test accuracy
print('\n', 'Test accuracy:', score[1])
###Output
Test accuracy: 0.9787999987602234
###Markdown
Train model with pruning
###Code
! pip install -q tensorflow-model-optimization
import tensorflow_model_optimization as tfmot
prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude
# Compute end step to finish pruning after 2 epochs.
batch_size = 128
epochs = 40
validation_split = 0.1 # 10% of training set will be used for validation set.
num_images = X_train.shape[0] * (1 - validation_split)
end_step = np.ceil(num_images / batch_size).astype(np.int32) * epochs
# Define model for pruning.
pruning_params = {
'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(initial_sparsity=0.50,
final_sparsity=0.80,
begin_step=0,
end_step=end_step)
}
model_for_pruning = prune_low_magnitude(model, **pruning_params)
# `prune_low_magnitude` requires a recompile.
model_for_pruning.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model_for_pruning.summary()
X_train.shape
y_train.shape
mnist = tf.keras.datasets.mnist
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print("Train Image shape:", X_train.shape, "Test Image shape:", X_test.shape)
# Normalize the images
X_train = X_train / 255.0
X_test = X_test / 255.0
callbacks = [
tfmot.sparsity.keras.UpdatePruningStep(),
tfmot.sparsity.keras.PruningSummaries(log_dir='log'),
]
model_for_pruning.fit(X_train, y_train,
batch_size=batch_size, epochs=epochs, validation_split=validation_split,
callbacks=callbacks)
_, model_for_pruning_accuracy = model_for_pruning.evaluate(
X_train, y_train, verbose=0)
print('Pruned test accuracy:', model_for_pruning_accuracy)
model_for_export = tfmot.sparsity.keras.strip_pruning(model_for_pruning)
tf.keras.models.save_model(model_for_export, '2_mnist_model_pruning.h5', include_optimizer=False)
###Output
_____no_output_____
###Markdown
Q-aware Training
###Code
import tensorflow_model_optimization as tfmot
quantize_model = tfmot.quantization.keras.quantize_model
# q_aware stands for quantization aware.
q_aware_model = quantize_model(model)
# `quantize_model` requires a recompile.
q_aware_model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
q_aware_model.summary()
# Train and evaluate the model against baseline
train_images_subset = X_train[0:1000] # out of 60000
train_labels_subset = y_train[0:1000]
q_aware_model.fit(train_images_subset, train_labels_subset,
batch_size=10, epochs=50, validation_split=0.1)
# Evaluate the model on test set
score = q_aware_model.evaluate(X_test, y_test, verbose=0)
# Print test accuracy
print('\n', 'Test accuracy:', score[1])
q_aware_model.save('3_mnist_model_qaware.h5')
###Output
_____no_output_____
###Markdown
Convert Model to TFLite
###Code
def ConvertTFLite(model_path, filename):
try:
# Loading Model
model = tf.keras.models.load_model(model_path)
# Converter
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
#Specify path
tflite_models_dir = pathlib.Path("tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
filename = filename+".tflite"
tflite_model_file = tflite_models_dir/filename
# Save Model
tflite_model_file.write_bytes(tflite_model)
return f'Converted to TFLite, path {tflite_model_file}'
except Exception as e:
return str(e)
ConvertTFLite('./1_mnist_model.h5','4_mnist_model')
ConvertTFLite('./2_mnist_model_pruning.h5','5_mnist_pruning_model')
converter = tf.lite.TFLiteConverter.from_keras_model(q_aware_model)
quantized_tflite_model = converter.convert()
quantized_aware_tflite_file = '6_mnist_model_qaware.tflite'
with open(quantized_aware_tflite_file, 'wb') as f:
f.write(quantized_tflite_model)
print('Saved quantization-aware TFLite model to:', quantized_aware_tflite_file)
###Output
INFO:tensorflow:Assets written to: /tmp/tmpdel1wugi/assets
###Markdown
Integer with Float fallback quantization
###Code
def Quant_int_with_float(model_name, filename):
try:
model = tf.keras.models.load_model(model_name)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
mnist_train, _ = tf.keras.datasets.mnist.load_data()
images = tf.cast(mnist_train[0], tf.float32) / 255.0
mnist_ds = tf.data.Dataset.from_tensor_slices((images)).batch(1)
def representative_data_gen():
for input_value in mnist_ds.take(100):
yield [input_value]
converter.representative_dataset = representative_data_gen
tflite_model_quant = converter.convert()
filename = filename+'.tflite'
tflite_models_dir = pathlib.Path("tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
tflite_model_quant_file = tflite_models_dir/filename
tflite_model_quant_file.write_bytes(tflite_model_quant)
return f'Converted - path {tflite_model_quant_file}'
except Exception as e:
return str(e)
Quant_int_with_float('./1_mnist_model.h5', '7_mnist_Integer_float_model')
Quant_int_with_float('./2_mnist_model_pruning.h5','8_mnist_pruning_Integer_float_model')
converter = tf.lite.TFLiteConverter.from_keras_model(q_aware_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
mnist_train, _ = tf.keras.datasets.mnist.load_data()
images = tf.cast(mnist_train[0], tf.float32) / 255.0
mnist_ds = tf.data.Dataset.from_tensor_slices((images)).batch(1)
def representative_data_gen():
for input_value in mnist_ds.take(100):
yield [input_value]
converter.representative_dataset = representative_data_gen
quantized_tflite_model = converter.convert()
tflite_models_dir = pathlib.Path("tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
tflite_model_quant_file = tflite_models_dir/"9_mnist_Qaware_Integer_float_model.tflite"
tflite_model_quant_file.write_bytes(quantized_tflite_model)
###Output
INFO:tensorflow:Assets written to: /tmp/tmp6y9831wf/assets
###Markdown
Float 16 Quantization
###Code
def Quant_float(model_name, filename):
try:
model = tf.keras.models.load_model(model_name)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_fp16_model = converter.convert()
filename = filename+'.tflite'
tflite_models_fp16_dir = pathlib.Path("tflite_models/")
tflite_models_fp16_dir.mkdir(exist_ok=True, parents=True)
tflite_model_fp16_file = tflite_models_fp16_dir/filename
tflite_model_fp16_file.write_bytes(tflite_fp16_model)
return f'Converted - path {tflite_model_fp16_file}'
except Exception as e:
return str(e)
Quant_float('./1_mnist_model.h5', '10_mnist_float16_model')
Quant_float('./2_mnist_model_pruning.h5', '11_mnist_float_pruning_model')
Quant_float('./mnist_model_sperable.h5','mnist_sperable_float_model')
converter = tf.lite.TFLiteConverter.from_keras_model(q_aware_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_fp16_model = converter.convert()
tflite_model_fp16_file = tflite_models_dir/"12_mnist_Qaware_float16_model.tflite"
tflite_model_fp16_file.write_bytes(tflite_fp16_model)
###Output
INFO:tensorflow:Assets written to: /tmp/tmpwr5239n4/assets
|
week10_ML_competition_pca_kmeans/day3_unsupervised_PCA_kmeans/theory/PCA.ipynb | ###Markdown
PCA - Principal Component Analysis. **Problem**: you have a multidimensional set of data (such as a set of hidden unit activations) and you want to see which points are closest to others. PCA lets you order the dimensions from the dimension of greatest variance down to the dimension of least variance; PCA1 has the greatest variance. Example: let's look at a dataset that has nothing to do with networks: measurements of flowers, specifically Irises.
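As a quick added illustration before the Iris example (not part of the original notebook), "the dimension of greatest variance" is simply the eigenvector of the data's covariance matrix with the largest eigenvalue:

```python
import numpy as np

# tiny synthetic example: 200 correlated 2-D points
rng = np.random.RandomState(0)
data = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])

# eigendecomposition of the covariance matrix
cov = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

print("variance along each eigen-direction:", eigvals)
print("first principal component:", eigvecs[:, np.argmax(eigvals)])
```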
###Code
# Load the data
from sklearn import datasets
iris = datasets.load_iris()
iris.get("feature_names")
import pandas as pd
# Create the dataframe
df = pd.DataFrame(data=iris.data, columns=iris.get("feature_names"))
df
df_ = pd.DataFrame(data=iris.data, columns=iris.get("feature_names"))
df_.iloc[90, :]
iris.target
from sklearn.decomposition import PCA
# Two possibilities for n_components:
# A value in the range (0, 1) -- tries to find a lower dimension that keeps that fraction of explained variance r^2; it gives the minimum number of components needed.
# An integer value (1, 2, ...) that represents the target dimension we want to reach.
pca = PCA(n_components=2)
pca.fit(iris.data)
# The first value is the explained variance ratio of the best projection of our PCA
pca.explained_variance_ratio_
X = pca.transform(iris.data)
X
df = pd.DataFrame(data=X)
df
X = pca.fit_transform(iris.data)
X
import matplotlib.pyplot as plt
plt.scatter(X[:, 0], X[:, 1], c=iris.target)
plt.scatter(X[:, 0], X[:, 1], c=iris.target)
plt.scatter([X[90][0]], [X[90][1]], s=600, c=["r"], alpha=0.5)
# We now have a reduced X that keeps an r^2 of 0.92
# 1. Split the data
# 2. Choose the model and find the best features.
# 3. Train the model (with all the data, cross validation with all the data, or cross validation little by little). We can look at the validation score and draw conclusions.
# 4. Get the test score and see whether the choice was good. If not, go back to step 2.
# 5. If we like our results, train the model on all the data.
# 6. Save the model and hand it over to whoever needs it.
# 7. Keep trying other options.
###Output
_____no_output_____ |
Key Logger/Key Logger.ipynb | ###Markdown
Key Logger A key logger records every keystroke in a file, including passwords, even those typed in the browser.
###Code
#importing Library
from pynput.keyboard import Listener
#Writing the KeyStrokes in file
def writeoffile(key):
key_data =str(key)
key_data =key_data.replace("'","")
#Cleaning the final file
if key_data == 'Key.space':
key_data = ' '
if key_data == 'Key.shift_r':
key_data = ''
if key_data == "Key.ctrl_l":
key_data = ""
if key_data == "Key.enter":
key_data = "\n"
if key_data == "Key.backspace":
pass
with open('log.txt','a') as f:
f.write(key_data)
#Listening To Keystrokes
with Listener(on_press = writeoffile) as l:
l.join()
###Output
_____no_output_____ |
Invisiblilty.ipynb | ###Markdown
Invisibility through OpenCV This code is written for red-colored cloth; for other colors, modify the HSV color ranges below.
###Code
import cv2 as cv
import numpy as np
capture_video = cv.VideoCapture(0)
return_val , background = capture_video.read()
background = np.flip(background, axis=1)
while True :
_,img = capture_video.read()
img = np.flip(img, axis = 1)
hsv = cv.cvtColor(img, cv.COLOR_BGR2HSV)
# first red range: red hues sit near 0 on OpenCV's HSV scale, so use 0-10 here (hue 100 would pick up blue)
lower_red = np.array([0, 40, 40])
upper_red = np.array([10, 255, 255])
mask1 = cv.inRange(hsv, lower_red, upper_red)
lower_red = np.array([155, 40, 40])
upper_red = np.array([180, 255, 255])
mask2 = cv.inRange(hsv, lower_red, upper_red)
mask1 = mask1+mask2
mask1 = cv.morphologyEx(mask1, cv.MORPH_OPEN, np.ones((3, 3), np.uint8), iterations = 2)
mask1 = cv.dilate(mask1, np.ones((3, 3), np.uint8), iterations = 1)
mask2 = cv.bitwise_not(mask1)
res1 = cv.bitwise_and(background,background, mask = mask1)
res2 = cv.bitwise_and(img, img, mask = mask2)
final_output = cv.addWeighted(res1, 1, res2, 1, 0)
cv.imshow("INVISIBLE MAN", final_output)
if cv.waitKey(1)==ord('c') :
break
capture_video.release()
cv.destroyAllWindows()
###Output
_____no_output_____ |
spark_setup_anaconda.ipynb | ###Markdown
This is a small code block that can be added at the top of any Jupyter notebook to obtain a local Apache Spark 2.4.4 installation inside the Jupyter kernel. This code is still experimental, but it seems to run just fine.
###Code
!pip install pyspark
from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext, SparkSession
from pyspark.sql.types import StructType, StructField, DoubleType, IntegerType, StringType
sc = SparkContext.getOrCreate(SparkConf().setMaster("local[*]"))
from pyspark.sql import SparkSession
spark = SparkSession \
.builder \
.getOrCreate()
###Output
_____no_output_____ |
7-neural-networks-with-minibatch-stochastic-gradient-descent-and-ADAM/Notebook.ipynb | ###Markdown
Assignment7 CS-5891-01 Special Topics Deep Learning Ronald Picard. In this notebook we will walk through the design, training, and testing of neural networks with multiple hidden layers using minibatch stochastic gradient descent with adaptive moments (ADAM). These neural networks will be used for logistic regression, which here simply means binary classification. The binary classification will be performed on images of handwritten numerical digits; more specifically, on the last numerical digit of my student ID, which happens to be 9. Therefore, the goal of our neural networks will be to output the value 1 when the input image is a handwritten 9, and 0 in all other cases. The data set we will be using is the MNIST data set, which is very popular among the machine learning community. It contains 60,000 images, each containing a handwritten numerical digit. Each image comes with a truth label corresponding to the digit in the image, drawn from the set {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}. For our case, we only care about whether the image is a 9, so we will re-label the truth data: all labels with the value 9 become 1, and all other labels become 0. To start we need to import some needed classes.
###Code
import os
import numpy as np
import struct
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as pyplot
import csv
import time
###Output
_____no_output_____
###Markdown
First, we must change our path string to the directory that contains the data files. (Please note that you must change this string to point to the directory with the data files on your machine.) Second, we must change the string names of the data files to the names of the MNIST data files. (Please note that you may NOT need to change these; only change them if your MNIST data files are named differently.)
###Code
## path
path = 'C:/Users/computer/OneDrive - Vanderbilt/Vanderbilt_Spring_2019/CS_5891_01_SpecialTopicsDeepLearning/Assignment7/'
#Train data
fname_train_images = os.path.join(path, 'train-images.idx3-ubyte') # the training set image file path
fname_train_labels = os.path.join(path, 'train-labels.idx1-ubyte') # the training set label file path
###Output
_____no_output_____
###Markdown
Next, we retrieve the data from the data files as follows. This imports the data into a feature tensor (3-D matrix) in which each index is a feature matrix corresponding to an image. The label data comes in the form of a vector where each index corresponds to the index of the feature matrix (image) of the feature tensor.
###Code
# open the label file and load it to the "train_labels"
with open(fname_train_labels, 'rb') as flbl:
magic, num = struct.unpack(">II", flbl.read(8))
labels = np.fromfile(flbl, dtype=np.uint8)
# open the image file and load it to the "train_images"
with open(fname_train_images, 'rb') as fimg:
magic, num, rows, cols = struct.unpack(">IIII", fimg.read(16))
images = np.fromfile(fimg, dtype=np.uint8).reshape(len(labels), rows, cols)
print('The training set contains', len(images), 'images') # print the how many images contained in the training set
print('The shape of the image is', images[0].shape) # print the shape of the image
###Output
The training set contains 60000 images
The shape of the image is (28, 28)
###Markdown
Next, we need to perform two steps: feature scaling and feature normalization. Feature scaling consists of converting the 28 X 28 image matrices into 784 X 1 feature vectors. In essence we flatten the images out into vectors so that we can feed each image as a vector into our network. Feature normalization is the process of normalizing the pixel data to 0 <= x <= 1 (for logistic regression). Each pixel comes on a scale of 0 <= x <= 255. Since 255 is the maximum for every pixel, we divide each pixel by that number (elementwise) in order to normalize each pixel to between 0 and 1 (inclusive). One additional item we need to take care of is relabeling our label (truth) data so that we have a binary classification in which all 9s are converted to 1s and all other labels are converted to 0s.
###Code
# feature scaling
matrix_side_length = len(images[0])
vector_size = matrix_side_length*matrix_side_length
scaled_images_feature_matrix = []
for image in images:
reshaped_image = np.array(image).reshape((vector_size))
scaled_images_feature_matrix.append(reshaped_image)
# convert to numpy array
scaled_images_feature_matrix = np.transpose(np.array(scaled_images_feature_matrix))
print(scaled_images_feature_matrix.shape) # scaled_images_feature_matrix is a matrix of 60000 X 784
#print(scaled_images_feature_matrix[0].shape)
# feature normilization
normilization_factor = 1/255
normalized_scaled_images_feature_matrix = np.multiply(normilization_factor, scaled_images_feature_matrix)
print(normalized_scaled_images_feature_matrix.shape)
#print(normalized_scaled_images_feature_matrix[0])
# re-label for binary classification
value_for_1 = 9
binary_labels = []
for label in labels:
if(label == value_for_1):
binary_labels.append(1)
else:
binary_labels.append(0)
# convert to numpy array
binary_labels = np.array(binary_labels)
print(len(binary_labels)) # binary_labels is a row vector of 1 X 60000
#print(binary_labels[0])
###Output
(784, 60000)
(784, 60000)
60000
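###Markdown
As a side note, the same scaling and normalization can be done in a single vectorized step. The following is a small sketch, equivalent to the loops above, using the arrays already defined in this notebook.
###Code
# Flatten each 28 x 28 image into a 784-vector, transpose to 784 x 60000, and normalize to [0, 1]
vectorized = np.transpose(images.reshape(len(images), -1)) / 255.0
print(vectorized.shape)
print(np.allclose(vectorized, normalized_scaled_images_feature_matrix))
###Output
_____no_output_____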
###Markdown
In order to test the efficacy of our neural networks, we need to split our data into two data sets: a larger one and a smaller one. The larger set will be the training data used to train our neural networks. The smaller set will be the testing data used to test their accuracy. The MNIST data set contains 60,000 images; therefore, we will use 50,000 images for our training data set and 10,000 images for our testing data set. It is common practice to use a smaller subset of the total data set to debug (ensure everything works) and tune hyper-parameters before using the entire time-consuming data set. This smaller subset is known as a validation set. Therefore, we will first use a validation data set of 600 images: 500 of these images will be used as our training data set, and the other 100 as our test data set. Thus, we will begin by sifting out a validation set from our total data set.
###Code
# create a data set
size = vector_size
number_of_testing_images = 100
number_of_training_images = 500
number_of_validation_images = number_of_testing_images + number_of_training_images
training_images = []
training_labels = []
testing_images = []
testing_labels = []
factor = 0
for index in range(0, number_of_validation_images):
if(index <= number_of_training_images - 1):
training_images.append(normalized_scaled_images_feature_matrix[:, index + factor])
training_labels.append(binary_labels[index + factor])
else:
testing_images.append(normalized_scaled_images_feature_matrix[:, index + factor])
testing_labels.append(binary_labels[index + factor])
# convert to numpy array
training_images = np.transpose(np.array(training_images))
training_labels = np.array(training_labels)
testing_images = np.transpose(np.array(testing_images))
testing_labels = np.array(testing_labels)
# logger
print(training_images.shape) # training_images is a matrix of 784 X 500
print(training_labels.shape) # training_labels is a vector of length 500
print(testing_images.shape) # testing_images is a matrix of 784 X 100
print(testing_labels.shape) # testing_labels is a vector of length 100
###Output
(784, 500)
(500,)
(784, 100)
(100,)
###Markdown
Now we move on to the training of our neural network with multiple hidden layers.

Part 1 - Feed Forward: For these neural networks we will use multiple hidden layers with between 5-20 units (neurons) per layer. The first layer takes as input a matrix (784 X # of images) of vectorized images and outputs a matrix (# of units X # of images). That matrix is fed into the next hidden layer, which outputs another matrix (# of units X # of images). The output layer takes the output matrix of the last hidden layer and produces a row vector of probabilities, which we convert into binary classifications of 0 or 1 (if P(x) >= 0.5 we convert it to 1, otherwise to 0). The model for the units of the hidden layers is the vectorized linear model Z^[l] = W^[l] * A^[l-1] + B^[l], where W^[l] is a matrix of parameter weights of size (# of units in layer l) X (# of inputs to layer l), A^[l-1] is the input matrix of vectorized images (784 X # of images) or the output of the previous layer (# of units X # of images), and B^[l] is a column vector of biases (one per unit) applied in a broadcasting manner across the images. (For the output layer, b is a scalar applied in a broadcasting manner to save memory.) The output Z^[l] of this model is a matrix (# of units X # of images). Z^[l] is then passed through an activation function, which for this assignment will be relu (note: we will test tanh once for comparison). Hidden layer activation function: relu is A^[l] = relu(Z^[l]) = max(Z^[l], 0). The output layer is the vectorized linear model Z^[n] = W^[n] * A^[n-1] + b^[n] with a single unit, followed by a sigmoid activation function. The resultant row vector is then used to compute the loss values elementwise. The loss function for this binary classification is L(A^[n], Y_Label) = -Y_Label * log(A^[n]) - (1 - Y_Label) * log(1 - A^[n]), where Y_Label is the true label and A^[n] is the probability predicted by the network. The resultant loss row vector is summed and divided by the number of elements to obtain the average cost.

Part 2 - Back Propagation: The back propagation technique we use for training the neural network is gradient descent. This involves using the gradient of the cost function to update the model parameters in our layers. To calculate the gradient we use the chain rule. The goal of back propagation is to adjust the parameter weights and biases of our model so that it accurately performs binary classification. In general, the chain rule gives us the gradient of the cost function (vectorized rates of change) with respect to the model parameters. The following is the chain that we will utilize.
Generalized chain rule for N layers: for any layer l, dL(A^[n], Y)/dW^[l] = dL(A^[n], Y)/dA^[n] * dA^[n]/dZ^[n] * dZ^[n]/dA^[n-1] * dA^[n-1]/dZ^[n-1] * ... * dA^[l]/dZ^[l] * dZ^[l]/dW^[l]; likewise dL(A^[n], Y)/dB^[l] ends with the factor dZ^[l]/dB^[l].

Output layer - back propagation: The partial derivative of the loss with respect to the output layer's sigmoid activation is dL(A^[n], Y)/dA^[n] = -Y/A^[n] + (1-Y)/(1-A^[n]). By the chain rule, the derivative of the loss with respect to the linear model output is dL(A^[n], Y)/dZ^[n] = dL(A^[n], Y)/dA^[n] * dA^[n]/dZ^[n]. The derivative of the sigmoid activation is dA^[n]/dZ^[n] = sigma(Z^[n]) * (1 - sigma(Z^[n])). Therefore dL(A^[n], Y)/dZ^[n] = (-Y/A^[n] + (1-Y)/(1-A^[n])) * sigma(Z^[n]) * (1 - sigma(Z^[n])) = A^[n] - Y. (For convenience we write dZ^[n] = A^[n] - Y.) We can now extend the chain rule to the parameters of the output layer's linear model: dL(A^[n], Y)/dW^[n] = dZ^[n] * dZ^[n]/dW^[n] = dZ^[n] * A^[n-1]^T (for convenience, dW^[n] = dZ^[n] * A^[n-1]^T), and dL(A^[n], Y)/dB^[n] = dZ^[n] * dZ^[n]/dB^[n] = dZ^[n] (for convenience, dB^[n] = dZ^[n]).

Hidden layers - back propagation: dL(A^[n], Y)/dZ^[l] = dZ^[l+1] * dZ^[l+1]/dA^[l] * dA^[l]/dZ^[l] = W^[l+1]^T * dZ^[l+1] * (element-wise) dA^[l]/dZ^[l]. The element-wise product appears because the activation function is applied element-wise. (We rename this dZ^[l] = W^[l+1]^T * dZ^[l+1] * (element-wise) dA^[l]/dZ^[l] for convenience.) dA^[l]/dZ^[l] depends on the hidden layer activation function (in this case relu): the derivative of relu is dA^[l]/dZ^[l] = 1 if Z^[l] > 0, else 0. Then dL(A^[n], Y)/dW^[l] = dZ^[l] * dZ^[l]/dW^[l] = dZ^[l] * A^[l-1]^T (for convenience, dW^[l] = dZ^[l] * A^[l-1]^T), and dL(A^[n], Y)/dB^[l] = dZ^[l] * dZ^[l]/dB^[l] = dZ^[l] (for convenience, dB^[l] = dZ^[l]).

Averaging over the m images: dW^[l] = 1/m * (dZ^[l] * A^[l-1]^T) and dB^[l] = 1/m * sum(dZ^[l]).

Finally, we update the weights and biases of the layers: W^[l] := W^[l] - alpha * dW^[l] and B^[l] := B^[l] - alpha * dB^[l].

The first thing we have to do is initialize our weights and biases. There are multiple ways to do this. Typically we draw the values from either a uniform distribution or a normal distribution with a reasonable mean and standard deviation. There is some flexibility in the initialization of the weights, but in general they need to be small (though not too small) and varied. The weights need to differ from one another so that their gradients differ; in other words, we don't want the units to have identical rates of change. Additionally, we do not want to saturate the output activation function, where the gradients are 0 (vanishing gradients). For this assignment we will stick with a uniform random distribution between -0.1 and 0.1. We will also set a random seed each time so that our random values are the same across runs (or similar if there are more layers).
###Code
# initialize weights & bias
np.random.seed(10)
print('Feature Size: ' + str(size))
lower_bound = -.1
upper_bound = .1
#mean = 0.015
#std = 0.005
# hyper-parameters: hidden layers
hidden_layers = 2
units_array = [20, 10]
Weights = []
Bias = []
V_dW = []
V_dB = []
R_dW = []
R_dB = []
for i in range(0, hidden_layers):
if(i == 0):
_W = np.float64(np.random.uniform(lower_bound, upper_bound, [units_array[i], size]))
_B = np.float64(np.random.uniform(lower_bound, upper_bound, [units_array[i], 1]))
_V_dW = np.float64(np.zeros([units_array[i], size]))
_V_dB = np.float64(np.zeros([units_array[i], 1]))
_R_dW = np.float64(np.zeros([units_array[i], size]))
_R_dB = np.float64(np.zeros([units_array[i], 1]))
Weights.append(_W)
Bias.append(_B)
V_dW.append(_V_dW)
V_dB.append(_V_dB)
R_dW.append(_R_dW)
R_dB.append(_R_dB)
else:
_W = np.float64(np.random.uniform(lower_bound, upper_bound, [units_array[i], units_array[i-1]]))
_B = np.float64(np.random.uniform(lower_bound, upper_bound, [units_array[i], 1]))
_V_dW = np.float64(np.zeros([units_array[i], units_array[i-1]]))
_V_dB = np.float64(np.zeros([units_array[i], 1]))
_R_dW = np.float64(np.zeros([units_array[i], units_array[i-1]]))
_R_dB = np.float64(np.zeros([units_array[i], 1]))
Weights.append(_W)
Bias.append(_B)
V_dW.append(_V_dW)
V_dB.append(_V_dB)
R_dW.append(_R_dW)
R_dB.append(_R_dB)
# output layer
_W = np.float64(np.random.uniform(lower_bound, upper_bound, [1, units_array[i]]))
_b = np.float64(np.random.uniform(lower_bound, upper_bound)) # b will be added in a broadcasting manner
_V_dW = np.float64(np.zeros([1, units_array[i]]))
_V_dB = np.float64(np.zeros(1))
_R_dW = np.float64(np.zeros([1, units_array[i]]))
_R_dB = np.float64(np.zeros(1))
Weights.append(_W)
Bias.append(_b)
V_dW.append(_V_dW)
V_dB.append(_V_dB)
R_dW.append(_R_dW)
R_dB.append(_R_dB)
# dtype=object because each layer's arrays have different shapes (ragged)
Weights = np.array(Weights, dtype=object)
Bias = np.array(Bias, dtype=object)
V_dW = np.array(V_dW, dtype=object)
V_dB = np.array(V_dB, dtype=object)
R_dW = np.array(R_dW, dtype=object)
R_dB = np.array(R_dB, dtype=object)
for index in range(0, len(Weights) - 1):
Weights[index] = np.where(Weights[index] != 0, Weights[index], np.random.uniform(lower_bound, upper_bound))
#print(train_X.shape)
#print(np.ravel(train_Y).shape)
print('Weights Shape: ' + str(Weights[0].shape)) # matrix with a size of # of units X 784
print('Bias Shape: ' + str(Bias[0].shape)) # vector with a size of the # of unit
print('Velocity Weights Shape: ' + str(V_dW[0].shape)) # matrix with a size of # of units X 784
print('Velocity Bias Shape: ' + str(V_dB[0].shape)) # vector with a size of the # of unit
print('RMSProp Weights Shape: ' + str(R_dW[0].shape)) # matrix with a size of # of units X 784
print('RMSProp Bias Shape: ' + str(R_dB[0].shape)) # vector with a size of the # of unit
###Output
Feature Size: 784
Weights Shape: (20, 784)
Bias Shape: (20, 1)
Velocity Weights Shape: (20, 784)
Velocity Bias Shape: (20, 1)
RMSProp Weights Shape: (20, 784)
RMSProp Bias Shape: (20, 1)
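###Markdown
Before building the full training loop, here is a minimal, self-contained sketch (toy random data, not the MNIST set, and a plain full-batch update rather than the minibatch/ADAM version used below) of one forward pass and the back-propagation formulas derived above for a network with a single relu hidden layer.
###Code
import numpy as np

rng = np.random.RandomState(0)

# Toy problem: 8 features, 16 examples, one hidden layer with 4 relu units
n_x, n_h, m = 8, 4, 16
X = rng.randn(n_x, m)
Y = (rng.rand(1, m) > 0.5).astype(np.float64)

W1 = rng.uniform(-0.1, 0.1, (n_h, n_x)); B1 = np.zeros((n_h, 1))
W2 = rng.uniform(-0.1, 0.1, (1, n_h));   B2 = np.zeros((1, 1))

# Forward pass: Z1 = W1 X + B1, A1 = relu(Z1), Z2 = W2 A1 + B2, A2 = sigmoid(Z2)
Z1 = W1 @ X + B1
A1 = np.maximum(Z1, 0)
Z2 = W2 @ A1 + B2
A2 = 1 / (1 + np.exp(-Z2))

# Average cross-entropy cost J = -(1/m) * sum(Y log A2 + (1 - Y) log(1 - A2))
J = -np.mean(Y * np.log(A2) + (1 - Y) * np.log(1 - A2))

# Backward pass, using the chain-rule results above
dZ2 = A2 - Y                                   # output layer: dZ = A - Y
dW2 = (1 / m) * dZ2 @ A1.T
dB2 = (1 / m) * dZ2.sum(axis=1, keepdims=True)

dZ1 = (W2.T @ dZ2) * (Z1 > 0)                  # relu derivative is 1 where Z1 > 0, else 0
dW1 = (1 / m) * dZ1 @ X.T
dB1 = (1 / m) * dZ1.sum(axis=1, keepdims=True)

# Plain gradient-descent update
alpha = 0.1
W1 -= alpha * dW1; B1 -= alpha * dB1
W2 -= alpha * dW2; B2 -= alpha * dB2
print('cost after one forward pass:', J)
###Output
_____no_output_____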
###Markdown
Now we implement our minibatch stochastic gradient descent algorithm. The only difference between minibatch stochastic gradient descent and full-batch gradient descent is that during every epoch (run) we split our training data into minibatches of the specified size. If the data does not split evenly, the last batch is smaller, so that we still use all of the training data during every epoch. Then, during every epoch, we train once on each minibatch. Since we are training on minibatches, our path toward the extrema of the cost function will not be as direct; our cost and test accuracy will not improve every epoch, but the general trend will be improvement. Minibatch gradient descent helps prevent us from getting stuck at local extrema and also increases the speed at which the code runs. We will also include ADAM in our back propagation. ADAM is a combination of RMSProp and momentum. Momentum updates the network parameters based on an exponentially weighted moving average of the gradient, which speeds up convergence and helps prevent the network from getting stuck at local minima. RMSProp scales the momentum update by an exponentially weighted moving average of the square of the gradient, which prevents the network from focusing too heavily on specific features of the training data and therefore helps prevent overfitting. We will start by setting both the momentum and RMSProp hyper-parameters to 0.9 with a learning rate of 0.1, since this provides good results. We will also collect data on the accuracy of our networks as a function of training epochs. To do this we need to count the inaccurate binary classifications (false positives and false negatives). This will be accomplished using our test data set: we will send the test data through the network and compare the results with its true labels.
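Before the full implementation, the following is a compact sketch of one ADAM update for a single weight matrix, written in the standard textbook form with bias correction; the gradient dW here is purely hypothetical.
###Code
import numpy as np

# Hypothetical gradient for one 4 x 8 weight matrix, plus the ADAM state kept for it
rng = np.random.RandomState(0)
W = rng.uniform(-0.1, 0.1, (4, 8))
dW = rng.randn(4, 8)
V_dW = np.zeros_like(dW)       # momentum: first-moment moving average
R_dW = np.zeros_like(dW)       # RMSProp: second-moment moving average

alpha, beta1, beta2, eps = 0.01, 0.9, 0.9, 1e-6
t = 1                          # update counter, used for bias correction

# Exponentially weighted moving averages of the gradient and of its square
V_dW = beta1 * V_dW + (1 - beta1) * dW
R_dW = beta2 * R_dW + (1 - beta2) * dW**2

# Bias-corrected estimates for the first few updates
V_hat = V_dW / (1 - beta1**t)
R_hat = R_dW / (1 - beta2**t)

# Parameter update: the momentum term scaled down by the RMSProp term
W = W - alpha * V_hat / (np.sqrt(R_hat) + eps)
print(W.shape)
###Output
_____no_output_____
###Markdown
The full implementation below applies an update of this form, layer by layer, inside the minibatch loop, keeping separate V and R state for every weight matrix and bias.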
###Code
# gradient descent
detailed_logger = False
main_logger = True
main_logger_output_epochs = 100
L2 = False
Dropout = False
momentum = False
adam = True
hidden_layer_relu = True
hidden_layer_tanh = False
hidden_layer_sigmoid = False
# hyper-parameters
alpha = .1;
epsilon = .85
keep_prob = .9
number_of_epochs = 500
batch_size = 50
momentum_coef = .9
RMSProp_coef = .9
epsilon = 1e-20
t = 0
# copy initalization
W = Weights.copy()
B = Bias.copy()
# data arrays
cost_array = []
accuracy_array = []
interation_array = []
# rename
X_train = np.float64(training_images).copy()
Y_train = np.float64(training_labels).copy()
X_test = np.float64(testing_images).copy()
Y_test = np.float64(testing_labels).copy()
#m = size
m = number_of_training_images
def model(W, B, A):
return np.dot(W, A) + B
def activation_relu(Z):
Z = np.where(~np.isnan(Z), Z, 0)
Z = np.where(~np.isinf(Z), Z, 0)
return np.where(Z > 0, Z, 0)
def activation_tanh(Z):
return np.tanh(Z)
def activation_sigmoid(Z):
return 1/(1 + np.exp(-Z))
def loss(A, Y):
epsilon = 1e-20
return np.where((Y == 1), np.multiply(-Y, np.log(A + epsilon)), -np.multiply((1 - Y), np.log(1 - A + epsilon)))
#return np.multiply(-Y, np.log(A)) - np.multiply((1 - Y), np.log(1 - A))
def cost(L):
return np.multiply(1/L.shape[1], np.sum(L))
def cost_L2(L, W, epsilon):
L2 = np.multiply(epsilon/(2*W.shape[1]), np.multiply(W[len(W)-3], W[len(W)-3]).sum() + np.multiply(W[len(W)-2], W[len(W)-2]).sum() + np.multiply(W[len(W)-1], W[len(W)-1]).sum())
J = cost(L)
return L2 + J
def prediction(A):
return np.where(A >= 0.5, 1, 0)
def accuracy(prediction, Y):
return 100 - np.multiply(100/Y.shape[0], np.sum(np.absolute(Y - prediction)))
def forward_propagation_return_layers(W, B, A, A_layers, Z_layers, layer, D, keep_prob):
if(layer < len(W) - 1):
Z = model(W[layer], B[layer], A)
Z_layers.append(Z)
if(hidden_layer_relu == True):
A = activation_relu(Z)
elif(hidden_layer_tanh == True):
A = activation_tanh(Z)
elif(hidden_layer_sigmoid == True):
A = activation_sigmoid(Z)
if(Dropout == True):
_D = np.float64(np.where(np.random.uniform(0, 1, A.shape) < keep_prob, 1, 0))
D.append(_D)
A = np.multiply(A, _D)
A_layers.append(A)
layer = layer + 1
if(detailed_logger == True):
print('Forward Layer Training Data: ' + str(layer))
A_layers, Z_layers, D = forward_propagation_return_layers(W, B, A, A_layers, Z_layers, layer, D, keep_prob)
elif(layer == len(W) - 1):
Z = model(W[layer], B[layer], A)
Z_layers.append(Z)
A = activation_sigmoid(Z)
if(Dropout == True):
_D = np.float64(np.where(np.random.uniform(0, 1, A.shape) < keep_prob, 1, 0))
D.append(_D)
A = np.multiply(A, _D)
A_layers.append(A)
layer = layer + 1
if(detailed_logger == True):
print('Forward Layer Training Data: ' + str(layer))
print('Forward Propagation Training Data Complete')
return A_layers, Z_layers, D
def forward_propagation(W, B, A, layer):
if(layer < len(W) - 1):
Z = model(W[layer], B[layer], A)
if(hidden_layer_relu == True):
A = activation_relu(Z)
elif(hidden_layer_tanh == True):
A = activation_tanh(Z)
elif(hidden_layer_sigmoid == True):
A = activation_sigmoid(Z)
layer = layer + 1
if(detailed_logger == True):
print('Forward Layer Testing Data: ' + str(layer))
A = forward_propagation(W, B, A, layer)
elif(layer == len(W) - 1):
Z = model(W[layer], B[layer], A)
A = activation_sigmoid(Z)
layer = layer + 1
if(detailed_logger == True):
print('Forward Layer Testing Data: ' + str(layer))
print('Forward Propagation Testing Data Complete')
return A
def dZ(dZ, W, Z):
Z = np.where(~np.isnan(Z), Z, 0)
W = np.where(~np.isnan(W), W, 0)
dZ = np.where(~np.isnan(dZ), dZ, 0)
Z = np.where(~np.isinf(Z), Z, 0)
W = np.where(~np.isinf(W), W, 0)
dZ = np.where(~np.isinf(dZ), dZ, 0)
if(hidden_layer_relu == True):
return np.multiply(np.dot(np.transpose(W), dZ), np.where(Z > 0, 1, 0))
elif(hidden_layer_tanh == True):
A = activation_tanh(Z)
return np.multiply(np.dot(np.transpose(W), dZ), 1- np.multiply(A, A))
elif(hidden_layer_sigmoid == True):
A = activation_sigmoid(Z)
return np.multiply(np.dot(np.transpose(W), dZ), np.multiply(A, (1-A)))
def dW(dZ, A):
return np.multiply(1/dZ.shape[1], np.dot(dZ, np.transpose(A)))
def dW_L2(dZ, A, W, epsilon):
return np.multiply(epsilon/Z.shape[1], W) + dW(dZ, A)
def dB(dZ):
return np.multiply(1/dZ.shape[1], np.sum(dZ))
def backward_propagation(W, B, Y, A_layers, Z_layers, _dZ, alpha, epsilon, layer, D, V_dW, V_dB, R_dW, R_dB, t):
if(layer >= 0):
if(layer == len(W) - 1):
_dZ = A_layers[layer+1] - Y
elif(layer >= 0):
_dZ = dZ(_dZ, W[layer+1], Z_layers[layer])
if(Dropout == True):
_dZ = np.multiply(_dZ, D[layer])
if(L2 == True):
_dW = dW_L2(_dZ, A_layers[layer], W[layer], epsilon)
else:
_dW = dW(_dZ, A_layers[layer])
_dB = dB(_dZ)
if(adam == True):
epsilon = 1e-6
# ADAM - RMSProp + Momentum
V_dW[layer] = np.multiply(momentum_coef, V_dW[layer]) + np.multiply(1-momentum_coef, _dW)
V_dB[layer] = np.multiply(momentum_coef, V_dB[layer]) + np.multiply(1-momentum_coef, _dB)
R_dW[layer] = np.multiply(RMSProp_coef, R_dW[layer]) + np.multiply(1-RMSProp_coef, np.multiply(_dW, _dW))
R_dB[layer] = np.multiply(RMSProp_coef, R_dB[layer]) + np.multiply(1-RMSProp_coef, np.multiply(_dB, _dB))
# index decay in bias correction
t = t + 1
# correct bias for initial rounds
V_dW[layer] = np.multiply(V_dW[layer], 1/(1-np.power(momentum_coef, t)))
V_dB[layer] = np.multiply(V_dB[layer], 1/(1-np.power(momentum_coef, t)))
R_dW[layer] = np.multiply(R_dW[layer], 1/(1-np.power(RMSProp_coef, t)))
R_dB[layer] = np.multiply(R_dB[layer], 1/(1-np.power(RMSProp_coef, t)))
val1 = 1/(np.sqrt(R_dW[layer])+ epsilon)
val2 = 1/(np.sqrt(R_dB[layer])+ epsilon)
W[layer] = W[layer] - np.multiply(alpha, np.multiply(V_dW[layer], val1 ))
B[layer] = B[layer] - np.multiply(alpha, np.multiply(V_dB[layer], val2 ))
elif(momentum == True):
V_dW[layer] = np.multiply(momentum_coef, V_dW[layer]) + np.multiply(alpha, _dW)
V_dB[layer] = np.multiply(momentum_coef, V_dB[layer]) + np.multiply(alpha, _dB)
W[layer] = W[layer] - V_dW[layer]
B[layer] = B[layer] - V_dB[layer]
else:
W[layer] = W[layer] - np.multiply(alpha, _dW)
B[layer] = B[layer] - np.multiply(alpha, _dB)
if(detailed_logger == True):
print('Backward Layer: ' + str(layer))
layer = layer - 1
W, B, t = backward_propagation(W, B, Y, A_layers, Z_layers, _dZ, alpha, epsilon, layer, D, V_dW, V_dB, R_dW, R_dB, t)
if(detailed_logger == True):
print('Backward Propagation Complete')
return W, B, t
def shuffle(X, Y, number_of_training_images):
random_array = np.random.permutation(np.arange(number_of_training_images))
return X[:, random_array], Y[random_array]
start_time = time.time()
# main loop
for epoch in range(1, number_of_epochs):
# logger
if(main_logger == True and epoch % main_logger_output_epochs == 0):
print('Main Loop Epoch: ' + str(epoch))
# safety check
if(adam == True and momentum == True):
print("ERROR! Please Select Either Adam OR Momentum OR Neither, Not Both.")
break
# safety check
if(hidden_layer_relu + hidden_layer_tanh + hidden_layer_sigmoid != 1):
print("ERROR! Please Select Only 1 Hidden Layer Activation Function")
break
# shuffle data
X, Y = shuffle(X_train.copy(), Y_train.copy(), number_of_training_images)
number_of_batches = int(np.floor(number_of_training_images/batch_size))
split_index = number_of_batches*batch_size
# parse into minibatches
X_minibatches = np.split(X[:, 0:split_index], number_of_batches, axis=1)
if not(split_index == number_of_training_images):
X_left_over_portion = X[:, split_index:number_of_training_images]
X_minibatches.append(X_left_over_portion)
Y_minibatches = np.split(Y[0:split_index], number_of_batches, axis=0)
if not(split_index == number_of_training_images):
Y_left_over_portion = Y[split_index:number_of_training_images]
Y_minibatches.append(Y_left_over_portion)
number_of_minibatches = len(Y_minibatches)
# logger
if(main_logger == True and epoch % main_logger_output_epochs == 0):
print('Number Of Minibatches: ' + str(number_of_minibatches))
for index in range(0, number_of_minibatches-1):
X_minibatch = X_minibatches[index]
Y_minibatch = Y_minibatches[index]
# forward propagation on the training minibatch
A_layers, Z_layers, D = forward_propagation_return_layers(W, B, X_minibatch, [X_minibatch], [], 0, [], keep_prob)
L = loss(A_layers[len(A_layers) - 1], Y_minibatch)
if(L2 == True):
C = cost_L2(L, W, epsilon)
else:
C = cost(L)
# back propagation
W, B, t = backward_propagation(W, B, Y_minibatch, A_layers, Z_layers, 0, alpha, epsilon, len(W) - 1, D, V_dW, V_dB, R_dW, R_dB, t)
if(epoch % main_logger_output_epochs == 0):
print('Cost: ' + str(C))
# forward propagation on the test data set
A_test = forward_propagation(W, B, X_test, 0)
# accuracy
_prediction = prediction(A_test)
_accuracy = accuracy(_prediction, Y_test)
# storage for plotting
cost_array.append(C)
accuracy_array.append(_accuracy)
interation_array.append(epoch)
end_time = time.time()
run_time = end_time - start_time
print('')
print('Results:')
print('')
print('')
print('Run Time: ' + str(run_time) + ' seconds')
print('Cost: ' + str(C))
print('Accuracy: ' + str(_accuracy) + ' %')
print('')
print('')
pyplot.figure()
pyplot.plot(interation_array, cost_array, 'red')
pyplot.title('Learning Curve - ' + str(len(X[0])) + ' Training Data Set (Relu Hidden Layer)')
pyplot.xlabel('Epochs')
pyplot.ylabel('Cost')
pyplot.show()
# plot percent accuracy curve
pyplot.figure()
pyplot.plot(interation_array, accuracy_array, 'red')
pyplot.title('Percent Accuracy Curve - ' + str(len(X_test[0])) + ' Test Data Set (Relu Hidden Layer)')
pyplot.xlabel('Epochs')
pyplot.ylabel('Percent Accuracy')
pyplot.show()
###Output
Main Loop Epoch: 100
Number Of Minibatches: 10
Cost: 0.1101922786244274
Main Loop Epoch: 200
Number Of Minibatches: 10
Cost: 0.017543898313142785
Main Loop Epoch: 300
Number Of Minibatches: 10
Cost: 0.05816910301022392
Main Loop Epoch: 400
Number Of Minibatches: 10
Cost: 0.05842772204814277
Results:
Run Time: 6.019030570983887 seconds
Cost: 0.05483394259959518
Accuracy: 94.0 %
###Markdown
As shown, our validation set worked, so now we can move on to the full data set, and begin our evaluation and exploration.First, we need to split up our full data set into testing and training data. We will use 50,000 images as the training data set and 10,000 images as the testing data set.
###Code
# create a data set
size = vector_size
number_of_testing_images = 10000
number_of_training_images = 50000
number_of_validation_images = number_of_testing_images + number_of_training_images
training_images = []
training_labels = []
testing_images = []
testing_labels = []
factor = 0
for index in range(0, number_of_validation_images):
if(index <= number_of_training_images - 1):
training_images.append(normalized_scaled_images_feature_matrix[:, index + factor])
training_labels.append(binary_labels[index + factor])
else:
testing_images.append(normalized_scaled_images_feature_matrix[:, index + factor])
testing_labels.append(binary_labels[index + factor])
# convert to numpy array
training_images = np.transpose(np.array(training_images))
training_labels = np.array(training_labels)
testing_images = np.transpose(np.array(testing_images))
testing_labels = np.array(testing_labels)
# logger
print(training_images.shape) # training_images is a matrix of 784 X 50000
print(training_labels.shape) # training_labels is a vector of length 50000
print(testing_images.shape) # testing_images is a matrix of 784 X 10000
print(testing_labels.shape) # testing_labels is a vector of length 10000
###Output
(784, 50000)
(50000,)
(784, 10000)
(10000,)
###Markdown
Now we must reset our weights and biases.
###Code
# initialize weights & bias
np.random.seed(10)
print('Feature Size: ' + str(size))
lower_bound = -.1
upper_bound = .1
#mean = 0.015
#std = 0.005
# hyper-parameters: hidden layers
hidden_layers = 2
units_array = [20, 10]
Weights = []
Bias = []
V_dW = []
V_dB = []
R_dW = []
R_dB = []
for i in range(0, hidden_layers):
if(i == 0):
_W = np.float64(np.random.uniform(lower_bound, upper_bound, [units_array[i], size]))
_B = np.float64(np.random.uniform(lower_bound, upper_bound, [units_array[i], 1]))
_V_dW = np.float64(np.zeros([units_array[i], size]))
_V_dB = np.float64(np.zeros([units_array[i], 1]))
_R_dW = np.float64(np.zeros([units_array[i], size]))
_R_dB = np.float64(np.zeros([units_array[i], 1]))
Weights.append(_W)
Bias.append(_B)
V_dW.append(_V_dW)
V_dB.append(_V_dB)
R_dW.append(_R_dW)
R_dB.append(_R_dB)
else:
_W = np.float64(np.random.uniform(lower_bound, upper_bound, [units_array[i], units_array[i-1]]))
_B = np.float64(np.random.uniform(lower_bound, upper_bound, [units_array[i], 1]))
_V_dW = np.float64(np.zeros([units_array[i], units_array[i-1]]))
_V_dB = np.float64(np.zeros([units_array[i], 1]))
_R_dW = np.float64(np.zeros([units_array[i], units_array[i-1]]))
_R_dB = np.float64(np.zeros([units_array[i], 1]))
Weights.append(_W)
Bias.append(_B)
V_dW.append(_V_dW)
V_dB.append(_V_dB)
R_dW.append(_R_dW)
R_dB.append(_R_dB)
# output layer
_W = np.float64(np.random.uniform(lower_bound, upper_bound, [1, units_array[i]]))
_b = np.float64(np.random.uniform(lower_bound, upper_bound)) # b will be added in a broadcasting manner
_V_dW = np.float64(np.zeros([1, units_array[i]]))
_V_dB = np.float64(np.zeros(1))
_R_dW = np.float64(np.zeros([1, units_array[i]]))
_R_dB = np.float64(np.zeros(1))
Weights.append(_W)
Bias.append(_b)
V_dW.append(_V_dW)
V_dB.append(_V_dB)
R_dW.append(_R_dW)
R_dB.append(_R_dB)
# dtype=object because each layer's arrays have different shapes (ragged)
Weights = np.array(Weights, dtype=object)
Bias = np.array(Bias, dtype=object)
V_dW = np.array(V_dW, dtype=object)
V_dB = np.array(V_dB, dtype=object)
R_dW = np.array(R_dW, dtype=object)
R_dB = np.array(R_dB, dtype=object)
for index in range(0, len(Weights) - 1):
Weights[index] = np.where(Weights[index] != 0, Weights[index], np.random.uniform(lower_bound, upper_bound))
#print(train_X.shape)
#print(np.ravel(train_Y).shape)
print('Weights Shape: ' + str(Weights[0].shape)) # matrix with a size of # of units X 784
print('Bias Shape: ' + str(Bias[0].shape)) # vector with a size of the # of unit
print('Velocity Weights Shape: ' + str(V_dW[0].shape)) # matrix with a size of # of units X 784
print('Velocity Bias Shape: ' + str(V_dB[0].shape)) # vector with a size of the # of unit
print('RMSProp Weights Shape: ' + str(R_dW[0].shape)) # matrix with a size of # of units X 784
print('RMSProp Bias Shape: ' + str(R_dB[0].shape)) # vector with a size of the # of unit
###Output
Feature Size: 784
Weights Shape: (20, 784)
Bias Shape: (20, 1)
Velocity Weights Shape: (20, 784)
Velocity Bias Shape: (20, 1)
RMSProp Weights Shape: (20, 784)
RMSProp Bias Shape: (20, 1)
###Markdown
Now we re-run minibatch stochastic gradient descent with ADAM on the full data. We will first utilize minibatches of 50 each.
###Code
# gradient descent
detailed_logger = False
main_logger = True
main_logger_output_epochs = 100
L2 = False
Dropout = False
momentum = False
adam = True
hidden_layer_relu = True
hidden_layer_tanh = False
hidden_layer_sigmoid = False
# hyper-parameters
alpha = .01;
epsilon = .85
keep_prob = .9
number_of_epochs = 500
batch_size = 50
momentum_coef = .9
RMSProp_coef = .9
epsilon = 1e-20
t = 0
# copy initalization
W = Weights.copy()
B = Bias.copy()
# data arrays
cost_array = []
accuracy_array = []
interation_array = []
# rename
X_train = np.float64(training_images).copy()
Y_train = np.float64(training_labels).copy()
X_test = np.float64(testing_images).copy()
Y_test = np.float64(testing_labels).copy()
#m = size
m = number_of_training_images
def model(W, B, A):
return np.dot(W, A) + B
def activation_relu(Z):
Z = np.where(~np.isnan(Z), Z, 0)
Z = np.where(~np.isinf(Z), Z, 0)
return np.where(Z > 0, Z, 0)
def activation_tanh(Z):
return np.tanh(Z)
def activation_sigmoid(Z):
return 1/(1 + np.exp(-Z))
def loss(A, Y):
epsilon = 1e-20
return np.where((Y == 1), np.multiply(-Y, np.log(A + epsilon)), -np.multiply((1 - Y), np.log(1 - A + epsilon)))
#return np.multiply(-Y, np.log(A)) - np.multiply((1 - Y), np.log(1 - A))
def cost(L):
return np.multiply(1/L.shape[1], np.sum(L))
def cost_L2(L, W, epsilon):
L2 = np.multiply(epsilon/(2*W.shape[1]), np.multiply(W[len(W)-3], W[len(W)-3]).sum() + np.multiply(W[len(W)-2], W[len(W)-2]).sum() + np.multiply(W[len(W)-1], W[len(W)-1]).sum())
J = cost(L)
return L2 + J
def prediction(A):
return np.where(A >= 0.5, 1, 0)
def accuracy(prediction, Y):
return 100 - np.multiply(100/Y.shape[0], np.sum(np.absolute(Y - prediction)))
def forward_propagation_return_layers(W, B, A, A_layers, Z_layers, layer, D, keep_prob):
if(layer < len(W) - 1):
Z = model(W[layer], B[layer], A)
Z_layers.append(Z)
if(hidden_layer_relu == True):
A = activation_relu(Z)
elif(hidden_layer_tanh == True):
A = activation_tanh(Z)
elif(hidden_layer_sigmoid == True):
A = activation_sigmoid(Z)
if(Dropout == True):
_D = np.float64(np.where(np.random.uniform(0, 1, A.shape) < keep_prob, 1, 0))
D.append(_D)
A = np.multiply(A, _D)
A_layers.append(A)
layer = layer + 1
if(detailed_logger == True):
print('Forward Layer Training Data: ' + str(layer))
A_layers, Z_layers, D = forward_propagation_return_layers(W, B, A, A_layers, Z_layers, layer, D, keep_prob)
elif(layer == len(W) - 1):
Z = model(W[layer], B[layer], A)
Z_layers.append(Z)
A = activation_sigmoid(Z)
if(Dropout == True):
_D = np.float64(np.where(np.random.uniform(0, 1, A.shape) < keep_prob, 1, 0))
D.append(_D)
A = np.multiply(A, _D)
A_layers.append(A)
layer = layer + 1
if(detailed_logger == True):
print('Forward Layer Training Data: ' + str(layer))
print('Forward Propagation Training Data Complete')
return A_layers, Z_layers, D
def forward_propagation(W, B, A, layer):
if(layer < len(W) - 1):
Z = model(W[layer], B[layer], A)
if(hidden_layer_relu == True):
A = activation_relu(Z)
elif(hidden_layer_tanh == True):
A = activation_tanh(Z)
elif(hidden_layer_sigmoid == True):
A = activation_sigmoid(Z)
layer = layer + 1
if(detailed_logger == True):
print('Forward Layer Testing Data: ' + str(layer))
A = forward_propagation(W, B, A, layer)
elif(layer == len(W) - 1):
Z = model(W[layer], B[layer], A)
A = activation_sigmoid(Z)
layer = layer + 1
if(detailed_logger == True):
print('Forward Layer Testing Data: ' + str(layer))
print('Forward Propagation Testing Data Complete')
return A
def dZ(dZ, W, Z):
Z = np.where(~np.isnan(Z), Z, 0)
W = np.where(~np.isnan(W), W, 0)
dZ = np.where(~np.isnan(dZ), dZ, 0)
Z = np.where(~np.isinf(Z), Z, 0)
W = np.where(~np.isinf(W), W, 0)
dZ = np.where(~np.isinf(dZ), dZ, 0)
if(hidden_layer_relu == True):
return np.multiply(np.dot(np.transpose(W), dZ), np.where(Z > 0, 1, 0))
elif(hidden_layer_tanh == True):
A = activation_tanh(Z)
return np.multiply(np.dot(np.transpose(W), dZ), 1- np.multiply(A, A))
elif(hidden_layer_sigmoid == True):
A = activation_sigmoid(Z)
return np.multiply(np.dot(np.transpose(W), dZ), np.multiply(A, (1-A)))
def dW(dZ, A):
return np.multiply(1/dZ.shape[1], np.dot(dZ, np.transpose(A)))
def dW_L2(dZ, A, W, epsilon):
return np.multiply(epsilon/Z.shape[1], W) + dW(dZ, A)
def dB(dZ):
return np.multiply(1/dZ.shape[1], np.sum(dZ))
def backward_propagation(W, B, Y, A_layers, Z_layers, _dZ, alpha, epsilon, layer, D, V_dW, V_dB, R_dW, R_dB, t):
if(layer >= 0):
if(layer == len(W) - 1):
_dZ = A_layers[layer+1] - Y
elif(layer >= 0):
_dZ = dZ(_dZ, W[layer+1], Z_layers[layer])
if(Dropout == True):
_dZ = np.multiply(_dZ, D[layer])
if(L2 == True):
_dW = dW_L2(_dZ, A_layers[layer], W[layer], epsilon)
else:
_dW = dW(_dZ, A_layers[layer])
_dB = dB(_dZ)
if(adam == True):
epsilon = 1e-6
# ADAM - RMSProp + Momentum
V_dW[layer] = np.multiply(momentum_coef, V_dW[layer]) + np.multiply(1-momentum_coef, _dW)
V_dB[layer] = np.multiply(momentum_coef, V_dB[layer]) + np.multiply(1-momentum_coef, _dB)
R_dW[layer] = np.multiply(RMSProp_coef, R_dW[layer]) + np.multiply(1-RMSProp_coef, np.multiply(_dW, _dW))
R_dB[layer] = np.multiply(RMSProp_coef, R_dB[layer]) + np.multiply(1-RMSProp_coef, np.multiply(_dB, _dB))
# index decay in bias correction
t = t + 1
# correct bias for initial rounds
V_dW[layer] = np.multiply(V_dW[layer], 1/(1-np.power(momentum_coef, t)))
V_dB[layer] = np.multiply(V_dB[layer], 1/(1-np.power(momentum_coef, t)))
R_dW[layer] = np.multiply(R_dW[layer], 1/(1-np.power(RMSProp_coef, t)))
R_dB[layer] = np.multiply(R_dB[layer], 1/(1-np.power(RMSProp_coef, t)))
val1 = 1/(np.sqrt(R_dW[layer])+ epsilon)
val2 = 1/(np.sqrt(R_dB[layer])+ epsilon)
W[layer] = W[layer] - np.multiply(alpha, np.multiply(V_dW[layer], val1 ))
B[layer] = B[layer] - np.multiply(alpha, np.multiply(V_dB[layer], val2 ))
elif(momentum == True):
V_dW[layer] = np.multiply(momentum_coef, V_dW[layer]) + np.multiply(alpha, _dW)
V_dB[layer] = np.multiply(momentum_coef, V_dB[layer]) + np.multiply(alpha, _dB)
W[layer] = W[layer] - V_dW[layer]
B[layer] = B[layer] - V_dB[layer]
else:
W[layer] = W[layer] - np.multiply(alpha, _dW)
B[layer] = B[layer] - np.multiply(alpha, _dB)
if(detailed_logger == True):
print('Backward Layer: ' + str(layer))
layer = layer - 1
W, B, t = backward_propagation(W, B, Y, A_layers, Z_layers, _dZ, alpha, epsilon, layer, D, V_dW, V_dB, R_dW, R_dB, t)
if(detailed_logger == True):
print('Backward Propagation Complete')
return W, B, t
def shuffle(X, Y, number_of_training_images):
random_array = np.random.permutation(np.arange(number_of_training_images))
return X[:, random_array], Y[random_array]
start_time = time.time()
# main loop
for epoch in range(1, number_of_epochs):
# logger
if(main_logger == True and epoch % main_logger_output_epochs == 0):
print('Main Loop Epoch: ' + str(epoch))
# saftey check
if(adam == True and momentum == True):
print("ERROR! Please Select Either Adam OR Momentum OR Neither, Not Both.")
break
# saftey check
if(hidden_layer_relu + hidden_layer_tanh + hidden_layer_sigmoid != 1):
print("ERROR! Please Select Only 1 Hidden Layer Activation Function")
break
# shuffle data
X, Y = shuffle(X_train.copy(), Y_train.copy(), number_of_training_images)
number_of_batches = int(np.floor(number_of_training_images/batch_size))
split_index = number_of_batches*batch_size
# parse into minibatches
X_minibatches = np.split(X[:, 0:split_index], number_of_batches, axis=1)
if not(split_index == number_of_training_images):
X_left_over_portion = X[:, split_index:number_of_training_images]
X_minibatches.append(X_left_over_portion)
Y_minibatches = np.split(Y[0:split_index], number_of_batches, axis=0)
if not(split_index == number_of_training_images):
Y_left_over_portion = Y[split_index:number_of_training_images]
Y_minibatches.append(Y_left_over_portion)
number_of_minibatches = len(Y_minibatches)
# logger
if(main_logger == True and epoch % main_logger_output_epochs == 0):
print('Number Of Minibatches: ' + str(number_of_minibatches))
for index in range(0, number_of_minibatches-1):
X_minibatch = X_minibatches[index]
Y_minibatch = Y_minibatches[index]
# forward propogation training data set
A_layers, Z_layers, D = forward_propagation_return_layers(W, B, X_minibatch, [X_minibatch], [], 0, [], keep_prob)
L = loss(A_layers[len(A_layers) - 1], Y_minibatch)
if(L2 == True):
C = cost_L2(L, W, epsilon)
else:
C = cost(L)
# backpropogation
W, B, t = backward_propagation(W, B, Y_minibatch, A_layers, Z_layers, 0, alpha, epsilon, len(W) - 1, D, V_dW, V_dB, R_dW, R_dB, t)
if(epoch % main_logger_output_epochs == 0):
print('Cost: ' + str(C))
# forward propogation test data set
A_test = forward_propagation(W, B, X_test, 0)
# accuracy
_prediction = prediction(A_test)
_accuracy = accuracy(_prediction, Y_test)
# storage for plotting
cost_array.append(C)
accuracy_array.append(_accuracy)
interation_array.append(epoch)
end_time = time.time()
run_time = end_time - start_time
print('')
print('Results:')
print('')
print('')
print('Run Time: ' + str(run_time) + ' seconds')
print('Cost: ' + str(C))
print('Accuracy: ' + str(_accuracy) + ' %')
print('')
print('')
pyplot.figure()
pyplot.plot(interation_array, cost_array, 'red')
pyplot.title('Learning Curve - ' + str(len(X[0])) + ' Training Data Set (Relu Hidden Layer)')
pyplot.xlabel('Epochs')
pyplot.ylabel('Cost')
pyplot.show()
# plot percent accuracy curve
pyplot.figure()
pyplot.plot(interation_array, accuracy_array, 'red')
pyplot.title('Percent Accuracy Curve - ' + str(len(X_test[0])) + ' Test Data Set (Relu Hidden Layer)')
pyplot.xlabel('Epochs')
pyplot.ylabel('Percent Accuracy')
pyplot.show()
###Output
Main Loop Epoch: 100
Number Of Minibatches: 1000
Cost: 6.328639783644706e-08
Main Loop Epoch: 200
Number Of Minibatches: 1000
Cost: 0.011801970848913768
Main Loop Epoch: 300
Number Of Minibatches: 1000
Cost: 0.021871124168560296
Main Loop Epoch: 400
Number Of Minibatches: 1000
Cost: 0.0
Results:
Run Time: 1264.1851885318756 seconds
Cost: 0.0
Accuracy: 98.36 %
###Markdown
As illustrated, after 500 epochs with minibatches of 50 the cost became approximately 0.0 (or too low for Python to estimate) and the test data accuracy reached 98.36%. These results are very good. The test accuracy is high because minibatch stochastic gradient descent innately provides a form of regularization, combined with ADAM (momentum and RMSProp), which prevents us from getting stuck at local minima and from focusing too much on specific features with large gradients. It is important to note that a cost of zero usually means we have overfit the training data; however, in this scenario that doesn't appear to be the case, since we still have a very high test accuracy. We now wish to explore the impact of adjusting the momentum hyper-parameter for ADAM. We will re-run the algorithm with a smaller momentum hyper-parameter of 0.5 and see what results we achieve. First we reinitialize our weights and biases.
###Code
# initialize weights & bias
np.random.seed(10)
print('Feature Size: ' + str(size))
lower_bound = -.1
upper_bound = .1
#mean = 0.015
#std = 0.005
# hyper-parameters: hidden layers
hidden_layers = 2
units_array = [20, 10]
Weights = []
Bias = []
V_dW = []
V_dB = []
R_dW = []
R_dB = []
for i in range(0, hidden_layers):
if(i == 0):
_W = np.float64(np.random.uniform(lower_bound, upper_bound, [units_array[i], size]))
_B = np.float64(np.random.uniform(lower_bound, upper_bound, [units_array[i], 1]))
_V_dW = np.float64(np.zeros([units_array[i], size]))
_V_dB = np.float64(np.zeros([units_array[i], 1]))
_R_dW = np.float64(np.zeros([units_array[i], size]))
_R_dB = np.float64(np.zeros([units_array[i], 1]))
Weights.append(_W)
Bias.append(_B)
V_dW.append(_V_dW)
V_dB.append(_V_dB)
R_dW.append(_R_dW)
R_dB.append(_R_dB)
else:
_W = np.float64(np.random.uniform(lower_bound, upper_bound, [units_array[i], units_array[i-1]]))
_B = np.float64(np.random.uniform(lower_bound, upper_bound, [units_array[i], 1]))
_V_dW = np.float64(np.zeros([units_array[i], units_array[i-1]]))
_V_dB = np.float64(np.zeros([units_array[i], 1]))
_R_dW = np.float64(np.zeros([units_array[i], units_array[i-1]]))
_R_dB = np.float64(np.zeros([units_array[i], 1]))
Weights.append(_W)
Bias.append(_B)
V_dW.append(_V_dW)
V_dB.append(_V_dB)
R_dW.append(_R_dW)
R_dB.append(_R_dB)
# output layer
_W = np.float64(np.random.uniform(lower_bound, upper_bound, [1, units_array[i]]))
_b = np.float64(np.random.uniform(lower_bound, upper_bound)) # b will be added in a broadcasting manner
_V_dW = np.float64(np.zeros([1, units_array[i]]))
_V_dB = np.float64(np.zeros(1))
_R_dW = np.float64(np.zeros([1, units_array[i]]))
_R_dB = np.float64(np.zeros(1))
Weights.append(_W)
Bias.append(_b)
V_dW.append(_V_dW)
V_dB.append(_V_dB)
R_dW.append(_R_dW)
R_dB.append(_R_dB)
# dtype=object because each layer's arrays have different shapes (ragged)
Weights = np.array(Weights, dtype=object)
Bias = np.array(Bias, dtype=object)
V_dW = np.array(V_dW, dtype=object)
V_dB = np.array(V_dB, dtype=object)
R_dW = np.array(R_dW, dtype=object)
R_dB = np.array(R_dB, dtype=object)
for index in range(0, len(Weights) - 1):
Weights[index] = np.where(Weights[index] != 0, Weights[index], np.random.uniform(lower_bound, upper_bound))
#print(train_X.shape)
#print(np.ravel(train_Y).shape)
print('Weights Shape: ' + str(Weights[0].shape)) # matrix with a size of # of units X 784
print('Bias Shape: ' + str(Bias[0].shape)) # vector with a size of the # of unit
print('Velocity Weights Shape: ' + str(V_dW[0].shape)) # matrix with a size of # of units X 784
print('Velocity Bias Shape: ' + str(V_dB[0].shape)) # vector with a size of the # of unit
print('RMSProp Weights Shape: ' + str(R_dW[0].shape)) # matrix with a size of # of units X 784
print('RMSProp Bias Shape: ' + str(R_dB[0].shape)) # vector with a size of the # of unit
###Output
Feature Size: 784
Weights Shape: (20, 784)
Bias Shape: (20, 1)
Velocity Weights Shape: (20, 784)
Velocity Bias Shape: (20, 1)
RMSProp Weights Shape: (20, 784)
RMSProp Bias Shape: (20, 1)
###Markdown
Now we re-run our minibatch stochastic gradient descent algorithm with ADAM.
###Code
# gradient descent
detailed_logger = False
main_logger = True
main_logger_output_epochs = 100
L2 = False
Dropout = False
momentum = False
adam = True
hidden_layer_relu = True
hidden_layer_tanh = False
hidden_layer_sigmoid = False
# hyper-parameters
alpha = .01;
epsilon = .85
keep_prob = .9
number_of_epochs = 500
batch_size = 50
momentum_coef = .5
RMSProp_coef = .9
epsilon = 1e-20
t = 0
# copy initalization
W = Weights.copy()
B = Bias.copy()
# data arrays
cost_array = []
accuracy_array = []
interation_array = []
# rename
X_train = np.float64(training_images).copy()
Y_train = np.float64(training_labels).copy()
X_test = np.float64(testing_images).copy()
Y_test = np.float64(testing_labels).copy()
#m = size
m = number_of_training_images
def model(W, B, A):
return np.dot(W, A) + B
def activation_relu(Z):
Z = np.where(~np.isnan(Z), Z, 0)
Z = np.where(~np.isinf(Z), Z, 0)
return np.where(Z > 0, Z, 0)
def activation_tanh(Z):
return np.tanh(Z)
def activation_sigmoid(Z):
return 1/(1 + np.exp(-Z))
def loss(A, Y):
epsilon = 1e-20
return np.where((Y == 1), np.multiply(-Y, np.log(A + epsilon)), -np.multiply((1 - Y), np.log(1 - A + epsilon)))
#return np.multiply(-Y, np.log(A)) - np.multiply((1 - Y), np.log(1 - A))
def cost(L):
return np.multiply(1/L.shape[1], np.sum(L))
def cost_L2(L, W, epsilon):
L2 = np.multiply(epsilon/(2*W.shape[1]), np.multiply(W[len(W)-3], W[len(W)-3]).sum() + np.multiply(W[len(W)-2], W[len(W)-2]).sum() + np.multiply(W[len(W)-1], W[len(W)-1]).sum())
J = cost(L)
return L2 + J
def prediction(A):
return np.where(A >= 0.5, 1, 0)
def accuracy(prediction, Y):
return 100 - np.multiply(100/Y.shape[0], np.sum(np.absolute(Y - prediction)))
def forward_propagation_return_layers(W, B, A, A_layers, Z_layers, layer, D, keep_prob):
if(layer < len(W) - 1):
Z = model(W[layer], B[layer], A)
Z_layers.append(Z)
if(hidden_layer_relu == True):
A = activation_relu(Z)
elif(hidden_layer_tanh == True):
A = activation_tanh(Z)
elif(hidden_layer_sigmoid == True):
A = activation_sigmoid(Z)
if(Dropout == True):
_D = np.float64(np.where(np.random.uniform(0, 1, A.shape) < keep_prob, 1, 0))
D.append(_D)
A = np.multiply(A, _D)
A_layers.append(A)
layer = layer + 1
if(detailed_logger == True):
print('Forward Layer Training Data: ' + str(layer))
A_layers, Z_layers, D = forward_propagation_return_layers(W, B, A, A_layers, Z_layers, layer, D, keep_prob)
elif(layer == len(W) - 1):
Z = model(W[layer], B[layer], A)
Z_layers.append(Z)
A = activation_sigmoid(Z)
if(Dropout == True):
_D = np.float64(np.where(np.random.uniform(0, 1, A.shape) < keep_prob, 1, 0))
D.append(_D)
A = np.multiply(A, _D)
A_layers.append(A)
layer = layer + 1
if(detailed_logger == True):
print('Forward Layer Training Data: ' + str(layer))
print('Forward Propagation Training Data Complete')
return A_layers, Z_layers, D
def forward_propagation(W, B, A, layer):
if(layer < len(W) - 1):
Z = model(W[layer], B[layer], A)
if(hidden_layer_relu == True):
A = activation_relu(Z)
elif(hidden_layer_tanh == True):
A = activation_tanh(Z)
elif(hidden_layer_sigmoid == True):
A = activation_sigmoid(Z)
layer = layer + 1
if(detailed_logger == True):
print('Forward Layer Testing Data: ' + str(layer))
A = forward_propagation(W, B, A, layer)
elif(layer == len(W) - 1):
Z = model(W[layer], B[layer], A)
A = activation_sigmoid(Z)
layer = layer + 1
if(detailed_logger == True):
print('Forward Layer Testing Data: ' + str(layer))
print('Forward Propagation Testing Data Complete')
return A
def dZ(dZ, W, Z):
Z = np.where(~np.isnan(Z), Z, 0)
W = np.where(~np.isnan(W), W, 0)
dZ = np.where(~np.isnan(dZ), dZ, 0)
Z = np.where(~np.isinf(Z), Z, 0)
W = np.where(~np.isinf(W), W, 0)
dZ = np.where(~np.isinf(dZ), dZ, 0)
if(hidden_layer_relu == True):
return np.multiply(np.dot(np.transpose(W), dZ), np.where(Z > 0, 1, 0))
elif(hidden_layer_tanh == True):
A = activation_tanh(Z)
return np.multiply(np.dot(np.transpose(W), dZ), 1- np.multiply(A, A))
elif(hidden_layer_sigmoid == True):
A = activation_sigmoid(Z)
return np.multiply(np.dot(np.transpose(W), dZ), np.multiply(A, (1-A)))
def dW(dZ, A):
return np.multiply(1/dZ.shape[1], np.dot(dZ, np.transpose(A)))
def dW_L2(dZ, A, W, epsilon):
return np.multiply(epsilon/Z.shape[1], W) + dW(dZ, A)
def dB(dZ):
return np.multiply(1/dZ.shape[1], np.sum(dZ))
def backward_propagation(W, B, Y, A_layers, Z_layers, _dZ, alpha, epsilon, layer, D, V_dW, V_dB, R_dW, R_dB, t):
if(layer >= 0):
if(layer == len(W) - 1):
_dZ = A_layers[layer+1] - Y
elif(layer >= 0):
_dZ = dZ(_dZ, W[layer+1], Z_layers[layer])
if(Dropout == True):
_dZ = np.multiply(_dZ, D[layer])
if(L2 == True):
_dW = dW_L2(_dZ, A_layers[layer], W[layer], epsilon)
else:
_dW = dW(_dZ, A_layers[layer])
_dB = dB(_dZ)
if(adam == True):
epsilon = 1e-6
# ADAM - RMSProp + Momentum
V_dW[layer] = np.multiply(momentum_coef, V_dW[layer]) + np.multiply(1-momentum_coef, _dW)
V_dB[layer] = np.multiply(momentum_coef, V_dB[layer]) + np.multiply(1-momentum_coef, _dB)
R_dW[layer] = np.multiply(RMSProp_coef, R_dW[layer]) + np.multiply(1-RMSProp_coef, np.multiply(_dW, _dW))
R_dB[layer] = np.multiply(RMSProp_coef, R_dB[layer]) + np.multiply(1-RMSProp_coef, np.multiply(_dB, _dB))
# index decay in bias correction
t = t + 1
# correct bias for initial rounds
V_dW[layer] = np.multiply(V_dW[layer], 1/(1-np.power(momentum_coef, t)))
V_dB[layer] = np.multiply(V_dB[layer], 1/(1-np.power(momentum_coef, t)))
R_dW[layer] = np.multiply(R_dW[layer], 1/(1-np.power(RMSProp_coef, t)))
R_dB[layer] = np.multiply(R_dB[layer], 1/(1-np.power(RMSProp_coef, t)))
val1 = 1/(np.sqrt(R_dW[layer])+ epsilon)
val2 = 1/(np.sqrt(R_dB[layer])+ epsilon)
W[layer] = W[layer] - np.multiply(alpha, np.multiply(V_dW[layer], val1 ))
B[layer] = B[layer] - np.multiply(alpha, np.multiply(V_dB[layer], val2 ))
elif(momentum == True):
V_dW[layer] = np.multiply(momentum_coef, V_dW[layer]) + np.multiply(alpha, _dW)
V_dB[layer] = np.multiply(momentum_coef, V_dB[layer]) + np.multiply(alpha, _dB)
W[layer] = W[layer] - V_dW[layer]
B[layer] = B[layer] - V_dB[layer]
else:
W[layer] = W[layer] - np.multiply(alpha, _dW)
B[layer] = B[layer] - np.multiply(alpha, _dB)
if(detailed_logger == True):
print('Backward Layer: ' + str(layer))
layer = layer - 1
W, B, t = backward_propagation(W, B, Y, A_layers, Z_layers, _dZ, alpha, epsilon, layer, D, V_dW, V_dB, R_dW, R_dB, t)
if(detailed_logger == True):
print('Backward Propagation Complete')
return W, B, t
def shuffle(X, Y, number_of_training_images):
random_array = np.random.permutation(np.arange(number_of_training_images))
return X[:, random_array], Y[random_array]
start_time = time.time()
# main loop
for epoch in range(1, number_of_epochs):
# logger
if(main_logger == True and epoch % main_logger_output_epochs == 0):
print('Main Loop Epoch: ' + str(epoch))
# saftey check
if(adam == True and momentum == True):
print("ERROR! Please Select Either Adam OR Momentum OR Neither, Not Both.")
break
# saftey check
if(hidden_layer_relu + hidden_layer_tanh + hidden_layer_sigmoid != 1):
print("ERROR! Please Select Only 1 Hidden Layer Activation Function")
break
# shuffle data
X, Y = shuffle(X_train.copy(), Y_train.copy(), number_of_training_images)
number_of_batches = int(np.floor(number_of_training_images/batch_size))
split_index = number_of_batches*batch_size
# parse into minibatches
X_minibatches = np.split(X[:, 0:split_index], number_of_batches, axis=1)
if not(split_index == number_of_training_images):
X_left_over_portion = X[:, split_index:number_of_training_images]
X_minibatches.append(X_left_over_portion)
Y_minibatches = np.split(Y[0:split_index], number_of_batches, axis=0)
if not(split_index == number_of_training_images):
Y_left_over_portion = Y[split_index:number_of_training_images]
Y_minibatches.append(Y_left_over_portion)
number_of_minibatches = len(Y_minibatches)
# logger
if(main_logger == True and epoch % main_logger_output_epochs == 0):
print('Number Of Minibatches: ' + str(number_of_minibatches))
for index in range(0, number_of_minibatches-1):
X_minibatch = X_minibatches[index]
Y_minibatch = Y_minibatches[index]
        # forward propagation on the training data set
A_layers, Z_layers, D = forward_propagation_return_layers(W, B, X_minibatch, [X_minibatch], [], 0, [], keep_prob)
L = loss(A_layers[len(A_layers) - 1], Y_minibatch)
if(L2 == True):
C = cost_L2(L, W, epsilon)
else:
C = cost(L)
        # backpropagation
W, B, t = backward_propagation(W, B, Y_minibatch, A_layers, Z_layers, 0, alpha, epsilon, len(W) - 1, D, V_dW, V_dB, R_dW, R_dB, t)
if(epoch % main_logger_output_epochs == 0):
print('Cost: ' + str(C))
    # forward propagation on the test data set
A_test = forward_propagation(W, B, X_test, 0)
# accuracy
_prediction = prediction(A_test)
_accuracy = accuracy(_prediction, Y_test)
# storage for plotting
cost_array.append(C)
accuracy_array.append(_accuracy)
interation_array.append(epoch)
end_time = time.time()
run_time = end_time - start_time
print('')
print('Results:')
print('')
print('')
print('Run Time: ' + str(run_time) + ' seconds')
print('Cost: ' + str(C))
print('Accuracy: ' + str(_accuracy) + ' %')
print('')
print('')
pyplot.figure()
pyplot.plot(interation_array, cost_array, 'red')
pyplot.title('Learning Curve - ' + str(len(X[0])) + ' Training Data Set (Relu Hidden Layer)')
pyplot.xlabel('Epochs')
pyplot.ylabel('Cost')
pyplot.show()
# plot percent accuracy curve
pyplot.figure()
pyplot.plot(interation_array, accuracy_array, 'red')
pyplot.title('Percent Accuracy Curve - ' + str(len(X_test[0])) + ' Test Data Set (Relu Hidden Layer)')
pyplot.xlabel('Epochs')
pyplot.ylabel('Percent Accuracy')
pyplot.show()
###Output
Main Loop Epoch: 100
Number Of Minibatches: 1000
Cost: 0.0
Main Loop Epoch: 200
Number Of Minibatches: 1000
Cost: 0.0
Main Loop Epoch: 300
Number Of Minibatches: 1000
Cost: 0.0
Main Loop Epoch: 400
Number Of Minibatches: 1000
Cost: 0.0
Results:
Run Time: 1049.1028203964233 seconds
Cost: 0.0
Accuracy: 98.94 %
###Markdown
As illustrated, after 500 epochs with minibatches of 50 the cost became approximately 0.0 and the test data accuracy reached 98.94%. These results are very good. The test accuracy is high because minibatch stochastic gradient descent innately provides a form of regularization, and combining it with Adam (momentum plus RMSProp) prevents us from getting stuck in local minima and from focusing too heavily on specific features with large gradients. It is important to note that a cost of zero usually means we have overfit the training data; however, in this scenario that does not appear to be the case since the test accuracy remains very high. We now wish to explore the impact of adjusting the RMSProp hyper-parameter for Adam. We will reset the momentum hyper-parameter to 0.9 and re-run the algorithm with a smaller RMSProp hyper-parameter of 0.5 to see what results we achieve. First we reinitialize our weights and biases.
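For reference, the per-layer update performed by `backward_propagation` when `adam == True` corresponds to the usual Adam rule, with $\beta_1$ = `momentum_coef`, $\beta_2$ = `RMSProp_coef` and a small $\epsilon$ for numerical stability (note that the code applies the bias correction in place on the accumulators, a slight deviation from the textbook form):
$$V_{dW} \leftarrow \beta_1 V_{dW} + (1-\beta_1)\,dW, \qquad R_{dW} \leftarrow \beta_2 R_{dW} + (1-\beta_2)\,dW^{2}$$
$$\hat{V}_{dW} = \frac{V_{dW}}{1-\beta_1^{\,t}}, \qquad \hat{R}_{dW} = \frac{R_{dW}}{1-\beta_2^{\,t}}, \qquad W \leftarrow W - \alpha\,\frac{\hat{V}_{dW}}{\sqrt{\hat{R}_{dW}} + \epsilon}$$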
###Code
# initialize weights & bias
np.random.seed(10)
print('Feature Size: ' + str(size))
lower_bound = -.1
upper_bound = .1
#mean = 0.015
#std = 0.005
# hyper-parameters: hidden layers
hidden_layers = 2
units_array = [20, 10]
Weights = []
Bias = []
V_dW = []
V_dB = []
R_dW = []
R_dB = []
for i in range(0, hidden_layers):
if(i == 0):
_W = np.float64(np.random.uniform(lower_bound, upper_bound, [units_array[i], size]))
_B = np.float64(np.random.uniform(lower_bound, upper_bound, [units_array[i], 1]))
_V_dW = np.float64(np.zeros([units_array[i], size]))
_V_dB = np.float64(np.zeros([units_array[i], 1]))
_R_dW = np.float64(np.zeros([units_array[i], size]))
_R_dB = np.float64(np.zeros([units_array[i], 1]))
Weights.append(_W)
Bias.append(_B)
V_dW.append(_V_dW)
V_dB.append(_V_dB)
R_dW.append(_R_dW)
R_dB.append(_R_dB)
else:
_W = np.float64(np.random.uniform(lower_bound, upper_bound, [units_array[i], units_array[i-1]]))
_B = np.float64(np.random.uniform(lower_bound, upper_bound, [units_array[i], 1]))
_V_dW = np.float64(np.zeros([units_array[i], units_array[i-1]]))
_V_dB = np.float64(np.zeros([units_array[i], 1]))
_R_dW = np.float64(np.zeros([units_array[i], units_array[i-1]]))
_R_dB = np.float64(np.zeros([units_array[i], 1]))
Weights.append(_W)
Bias.append(_B)
V_dW.append(_V_dW)
V_dB.append(_V_dB)
R_dW.append(_R_dW)
R_dB.append(_R_dB)
# output layer
_W = np.float64(np.random.uniform(lower_bound, upper_bound, [1, units_array[i]]))
_b = np.float64(np.random.uniform(lower_bound, upper_bound)) # b will be added in a broadcasting manner
_V_dW = np.float64(np.zeros([1, units_array[i]]))
_V_dB = np.float64(np.zeros(1))
_R_dW = np.float64(np.zeros([1, units_array[i]]))
_R_dB = np.float64(np.zeros(1))
Weights.append(_W)
Bias.append(_b)
V_dW.append(_V_dW)
V_dB.append(_V_dB)
R_dW.append(_R_dW)
R_dB.append(_R_dB)
Weights = np.array(Weights)
Bias = np.array(Bias)
V_dW = np.array(V_dW)
V_dB = np.array(V_dB)
R_dW = np.array(R_dW)
R_dB = np.array(R_dB)
for index in range(0, len(Weights) - 1):
Weights[index] = np.where(Weights[index] != 0, Weights[index], np.random.uniform(lower_bound, upper_bound))
#print(train_X.shape)
#print(np.ravel(train_Y).shape)
print('Weights Shape: ' + str(Weights[0].shape)) # matrix with a size of # of units X 784
print('Bias Shape: ' + str(Bias[0].shape)) # vector with a size of the # of unit
print('Velocity Weights Shape: ' + str(V_dW[0].shape)) # matrix with a size of # of units X 784
print('Velocity Bias Shape: ' + str(V_dB[0].shape)) # vector with a size of the # of unit
print('RMSProp Weights Shape: ' + str(R_dW[0].shape)) # matrix with a size of # of units X 784
print('RMSProp Bias Shape: ' + str(R_dB[0].shape)) # vector with a size of the # of unit
###Output
Feature Size: 784
Weights Shape: (20, 784)
Bias Shape: (20, 1)
Velocity Weights Shape: (20, 784)
Velocity Bias Shape: (20, 1)
RMSProp Weights Shape: (20, 784)
RMSProp Bias Shape: (20, 1)
###Markdown
Now we rerun our algorithm.
###Code
# gradient descent
detailed_logger = False
main_logger = True
main_logger_output_epochs = 100
L2 = False
Dropout = False
momentum = False
adam = True
hidden_layer_relu = True
hidden_layer_tanh = False
hidden_layer_sigmoid = False
# hyber-parameters
alpha = .01;
epsilon = .85
keep_prob = .9
number_of_epochs = 500
batch_size = 50
momentum_coef = .9
RMSProp_coef = .5
epsilon = 1e-20
t = 0
# copy initalization
W = Weights.copy()
B = Bias.copy()
# data arrays
cost_array = []
accuracy_array = []
interation_array = []
# rename
X_train = np.float64(training_images).copy()
Y_train = np.float64(training_labels).copy()
X_test = np.float64(testing_images).copy()
Y_test = np.float64(testing_labels).copy()
#m = size
m = number_of_training_images
def model(W, B, A):
return np.dot(W, A) + B
def activation_relu(Z):
Z = np.where(~np.isnan(Z), Z, 0)
Z = np.where(~np.isinf(Z), Z, 0)
return np.where(Z > 0, Z, 0)
def activation_tanh(Z):
return np.tanh(Z)
def activation_sigmoid(Z):
return 1/(1 + np.exp(-Z))
def loss(A, Y):
epsilon = 1e-20
return np.where((Y == 1), np.multiply(-Y, np.log(A + epsilon)), -np.multiply((1 - Y), np.log(1 - A + epsilon)))
#return np.multiply(-Y, np.log(A)) - np.multiply((1 - Y), np.log(1 - A))
def cost(L):
return np.multiply(1/L.shape[1], np.sum(L))
def cost_L2(L, W, epsilon):
L2 = np.multiply(epsilon/(2*W.shape[1]), np.multiply(W[len(W)-3], W[len(W)-3]).sum() + np.multiply(W[len(W)-2], W[len(W)-2]).sum() + np.multiply(W[len(W)-1], W[len(W)-1]).sum())
J = cost(L)
return L2 + J
def prediction(A):
return np.where(A >= 0.5, 1, 0)
def accuracy(prediction, Y):
return 100 - np.multiply(100/Y.shape[0], np.sum(np.absolute(Y - prediction)))
def forward_propagation_return_layers(W, B, A, A_layers, Z_layers, layer, D, keep_prob):
if(layer < len(W) - 1):
Z = model(W[layer], B[layer], A)
Z_layers.append(Z)
if(hidden_layer_relu == True):
A = activation_relu(Z)
elif(hidden_layer_tanh == True):
A = activation_tanh(Z)
elif(hidden_layer_sigmoid == True):
A = activation_sigmoid(Z)
if(Dropout == True):
_D = np.float64(np.where(np.random.uniform(0, 1, A.shape) < keep_prob, 1, 0))
D.append(_D)
A = np.multiply(A, _D)
A_layers.append(A)
layer = layer + 1
if(detailed_logger == True):
print('Forward Layer Training Data: ' + str(layer))
A_layers, Z_layers, D = forward_propagation_return_layers(W, B, A, A_layers, Z_layers, layer, D, keep_prob)
elif(layer == len(W) - 1):
Z = model(W[layer], B[layer], A)
Z_layers.append(Z)
A = activation_sigmoid(Z)
if(Dropout == True):
_D = np.float64(np.where(np.random.uniform(0, 1, A.shape) < keep_prob, 1, 0))
D.append(_D)
A = np.multiply(A, _D)
A_layers.append(A)
layer = layer + 1
if(detailed_logger == True):
print('Forward Layer Training Data: ' + str(layer))
print('Forward Propagation Training Data Complete')
return A_layers, Z_layers, D
def forward_propagation(W, B, A, layer):
if(layer < len(W) - 1):
Z = model(W[layer], B[layer], A)
if(hidden_layer_relu == True):
A = activation_relu(Z)
elif(hidden_layer_tanh == True):
A = activation_tanh(Z)
elif(hidden_layer_sigmoid == True):
A = activation_sigmoid(Z)
layer = layer + 1
if(detailed_logger == True):
print('Forward Layer Testing Data: ' + str(layer))
A = forward_propagation(W, B, A, layer)
elif(layer == len(W) - 1):
Z = model(W[layer], B[layer], A)
A = activation_sigmoid(Z)
layer = layer + 1
if(detailed_logger == True):
print('Forward Layer Testing Data: ' + str(layer))
print('Forward Propagation Testing Data Complete')
return A
def dZ(dZ, W, Z):
Z = np.where(~np.isnan(Z), Z, 0)
W = np.where(~np.isnan(W), W, 0)
dZ = np.where(~np.isnan(dZ), dZ, 0)
Z = np.where(~np.isinf(Z), Z, 0)
W = np.where(~np.isinf(W), W, 0)
dZ = np.where(~np.isinf(dZ), dZ, 0)
if(hidden_layer_relu == True):
return np.multiply(np.dot(np.transpose(W), dZ), np.where(Z > 0, 1, 0))
elif(hidden_layer_tanh == True):
A = activation_tanh(Z)
return np.multiply(np.dot(np.transpose(W), dZ), 1- np.multiply(A, A))
elif(hidden_layer_sigmoid == True):
A = activation_sigmoid(Z)
return np.multiply(np.dot(np.transpose(W), dZ), np.multiply(A, (1-A)))
def dW(dZ, A):
return np.multiply(1/dZ.shape[1], np.dot(dZ, np.transpose(A)))
def dW_L2(dZ, A, W, epsilon):
    return np.multiply(epsilon/dZ.shape[1], W) + dW(dZ, A)
def dB(dZ):
return np.multiply(1/dZ.shape[1], np.sum(dZ))
def backward_propagation(W, B, Y, A_layers, Z_layers, _dZ, alpha, epsilon, layer, D, V_dW, V_dB, R_dW, R_dB, t):
if(layer >= 0):
if(layer == len(W) - 1):
_dZ = A_layers[layer+1] - Y
elif(layer >= 0):
_dZ = dZ(_dZ, W[layer+1], Z_layers[layer])
if(Dropout == True):
_dZ = np.multiply(_dZ, D[layer])
if(L2 == True):
_dW = dW_L2(_dZ, A_layers[layer], W[layer], epsilon)
else:
_dW = dW(_dZ, A_layers[layer])
_dB = dB(_dZ)
if(adam == True):
epsilon = 1e-6
# ADAM - RMSProp + Momentum
V_dW[layer] = np.multiply(momentum_coef, V_dW[layer]) + np.multiply(1-momentum_coef, _dW)
V_dB[layer] = np.multiply(momentum_coef, V_dB[layer]) + np.multiply(1-momentum_coef, _dB)
R_dW[layer] = np.multiply(RMSProp_coef, R_dW[layer]) + np.multiply(1-RMSProp_coef, np.multiply(_dW, _dW))
R_dB[layer] = np.multiply(RMSProp_coef, R_dB[layer]) + np.multiply(1-RMSProp_coef, np.multiply(_dB, _dB))
# index decay in bias correction
t = t + 1
# correct bias for initial rounds
V_dW[layer] = np.multiply(V_dW[layer], 1/(1-np.power(momentum_coef, t)))
V_dB[layer] = np.multiply(V_dB[layer], 1/(1-np.power(momentum_coef, t)))
R_dW[layer] = np.multiply(R_dW[layer], 1/(1-np.power(RMSProp_coef, t)))
R_dB[layer] = np.multiply(R_dB[layer], 1/(1-np.power(RMSProp_coef, t)))
val1 = 1/(np.sqrt(R_dW[layer])+ epsilon)
val2 = 1/(np.sqrt(R_dB[layer])+ epsilon)
W[layer] = W[layer] - np.multiply(alpha, np.multiply(V_dW[layer], val1 ))
B[layer] = B[layer] - np.multiply(alpha, np.multiply(V_dB[layer], val2 ))
elif(momentum == True):
V_dW[layer] = np.multiply(momentum_coef, V_dW[layer]) + np.multiply(alpha, _dW)
V_dB[layer] = np.multiply(momentum_coef, V_dB[layer]) + np.multiply(alpha, _dB)
W[layer] = W[layer] - V_dW[layer]
B[layer] = B[layer] - V_dB[layer]
else:
W[layer] = W[layer] - np.multiply(alpha, _dW)
B[layer] = B[layer] - np.multiply(alpha, _dB)
if(detailed_logger == True):
print('Backward Layer: ' + str(layer))
layer = layer - 1
W, B, t = backward_propagation(W, B, Y, A_layers, Z_layers, _dZ, alpha, epsilon, layer, D, V_dW, V_dB, R_dW, R_dB, t)
if(detailed_logger == True):
print('Backward Propagation Complete')
return W, B, t
def shuffle(X, Y, number_of_training_images):
random_array = np.random.permutation(np.arange(number_of_training_images))
return X[:, random_array], Y[random_array]
start_time = time.time()
# main loop
for epoch in range(1, number_of_epochs):
# logger
if(main_logger == True and epoch % main_logger_output_epochs == 0):
print('Main Loop Epoch: ' + str(epoch))
    # safety check
if(adam == True and momentum == True):
print("ERROR! Please Select Either Adam OR Momentum OR Neither, Not Both.")
break
    # safety check
if(hidden_layer_relu + hidden_layer_tanh + hidden_layer_sigmoid != 1):
print("ERROR! Please Select Only 1 Hidden Layer Activation Function")
break
# shuffle data
X, Y = shuffle(X_train.copy(), Y_train.copy(), number_of_training_images)
number_of_batches = int(np.floor(number_of_training_images/batch_size))
split_index = number_of_batches*batch_size
# parse into minibatches
X_minibatches = np.split(X[:, 0:split_index], number_of_batches, axis=1)
if not(split_index == number_of_training_images):
X_left_over_portion = X[:, split_index:number_of_training_images]
X_minibatches.append(X_left_over_portion)
Y_minibatches = np.split(Y[0:split_index], number_of_batches, axis=0)
if not(split_index == number_of_training_images):
Y_left_over_portion = Y[split_index:number_of_training_images]
Y_minibatches.append(Y_left_over_portion)
number_of_minibatches = len(Y_minibatches)
# logger
if(main_logger == True and epoch % main_logger_output_epochs == 0):
print('Number Of Minibatches: ' + str(number_of_minibatches))
for index in range(0, number_of_minibatches-1):
X_minibatch = X_minibatches[index]
Y_minibatch = Y_minibatches[index]
        # forward propagation on the training data set
A_layers, Z_layers, D = forward_propagation_return_layers(W, B, X_minibatch, [X_minibatch], [], 0, [], keep_prob)
L = loss(A_layers[len(A_layers) - 1], Y_minibatch)
if(L2 == True):
C = cost_L2(L, W, epsilon)
else:
C = cost(L)
        # backpropagation
W, B, t = backward_propagation(W, B, Y_minibatch, A_layers, Z_layers, 0, alpha, epsilon, len(W) - 1, D, V_dW, V_dB, R_dW, R_dB, t)
if(epoch % main_logger_output_epochs == 0):
print('Cost: ' + str(C))
    # forward propagation on the test data set
A_test = forward_propagation(W, B, X_test, 0)
# accuracy
_prediction = prediction(A_test)
_accuracy = accuracy(_prediction, Y_test)
# storage for plotting
cost_array.append(C)
accuracy_array.append(_accuracy)
interation_array.append(epoch)
end_time = time.time()
run_time = end_time - start_time
print('')
print('Results:')
print('')
print('')
print('Run Time: ' + str(run_time) + ' seconds')
print('Cost: ' + str(C))
print('Accuracy: ' + str(_accuracy) + ' %')
print('')
print('')
pyplot.figure()
pyplot.plot(interation_array, cost_array, 'red')
pyplot.title('Learning Curve - ' + str(len(X[0])) + ' Training Data Set (Relu Hidden Layer)')
pyplot.xlabel('Epochs')
pyplot.ylabel('Cost')
pyplot.show()
# plot percent accuracy curve
pyplot.figure()
pyplot.plot(interation_array, accuracy_array, 'red')
pyplot.title('Percent Accuracy Curve - ' + str(len(X_test[0])) + ' Test Data Set (Relu Hidden Layer)')
pyplot.xlabel('Epochs')
pyplot.ylabel('Percent Accuracy')
pyplot.show()
###Output
Main Loop Epoch: 100
Number Of Minibatches: 1000
Cost: 0.23567681362319903
Main Loop Epoch: 200
Number Of Minibatches: 1000
Cost: 0.235604659004334
Main Loop Epoch: 300
Number Of Minibatches: 1000
Cost: 0.4115037462206437
Main Loop Epoch: 400
Number Of Minibatches: 1000
Cost: 0.23921075030401118
Results:
Run Time: 1448.7536845207214 seconds
Cost: 0.23185894095927148
Accuracy: 90.39 %
|
experiments/feature-transformation-without-pipelines.ipynb | ###Markdown
Motivation for this notebook: most of the examples I found with feature transformation use scikit-learn pipelines (either `make_pipeline()` or `Pipeline()`). This method hides the individual steps that are executed when feature transformation is used. In this notebook we will do the transformation and model fitting in individual steps to better understand what each step does and how they are connected together. Sample data We will use this sample data multiple times in the notebook.
###Code
import numpy as np
# Training data
rng = np.random.RandomState(1)
x = 10 * rng.rand(100)
y = np.sin(x) + 0.1 * rng.randn(100)
# Test data
x_test = np.linspace(0, 10)
# scikit-learn expects features to be in a 2D array.
X = x[:, np.newaxis]
X_test = x_test[:, np.newaxis]
###Output
_____no_output_____
###Markdown
Using pipelines First, we will use a pipeline so we have a baseline to compare with later, when we remove the pipeline. Create a pipeline to add polynomial features to the training data and to train a model on the transformed data.
###Code
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
# Create a two-stage pipeline:
# Stage 1: feature transformation to add non-linear features
# Stage 2: the classifier (that will work on the tranformed data)
poly_model = make_pipeline(PolynomialFeatures(7),
LinearRegression())
poly_model.fit(X, y);
###Output
_____no_output_____
###Markdown
Use the trained model to predict values and plot the results.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# Plot training data
plt.scatter(x, y)
# Plot test data and its prediction
y_test = poly_model.predict(X_test)
plt.plot(x_test, y_test);
print('The R^2 score for the fit is: ', poly_model.score(X, y))
###Output
The R^2 score for the fit is: 0.9806993128749515
###Markdown
Step-by-step, without a pipeline Now we will do the same sequence of steps (transform the data and train a model on that transformed data) without a pipeline.
###Code
# The features transformation step
poly = PolynomialFeatures(15)
# VERY IMPORTANT: fit only the training dataset, not the entire set
# to avoid leakage of test data into the training phase
x_poly = poly.fit_transform(X)
# The classifier step
clf = LinearRegression()
clf.fit(x_poly, y);
###Output
_____no_output_____
###Markdown
Use the trained classifier to predict values and plot the results.
###Code
# Plot training data
plt.scatter(x, y)
# Plot test data and its prediction
# Since we trained with a transformed dataset, we also need to
# transform the test dataset to match the features we used for training
# IMPORTANT: at this point we only `transform` - we don't `fit` again
# If we `fit` again, we will end up with different transformation (not what
# we trained the classifier on)
X_test_poly = poly.transform(X_test)
y_test_poly = clf.predict(X_test_poly)
plt.plot(x_test, y_test_poly);
print('The R^2 score for the fit is: ', clf.score(x_poly, y))
###Output
The R^2 score for the fit is: 0.9810652951916466
|
concepts/Data Structures/04 Trees/05 Diameter of a Binary Tree.ipynb | ###Markdown
Problem statementGiven the root of a binary tree, find its diameter.*Note: The diameter of a binary tree is the maximum distance between any two nodes, i.e. the number of edges on the longest path between them.*
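A small worked example (matching the first test case further down): for the tree sketched below, the longest path between any two nodes is 4 → 2 → 1 → 3 (or 5 → 2 → 1 → 3), which contains 3 edges, so the diameter is 3.
```
        1
       / \
      2   3
     / \
    4   5
```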
###Code
class BinaryTreeNode:
def __init__(self, data):
self.left = None
self.right = None
self.data = data
def diameter_of_binary_tree(root):
"""
:param: root - Root of binary tree
TODO: Complete this method and return diameter (int) of binary tree
"""
pass
###Output
_____no_output_____
###Markdown
You can use the following function to test your code with custom test cases. The function `convert_arr_to_binary_tree` takes an array input representing level-order traversal of the binary tree.The above tree would be represented as `arr = [1, 2, 3, 4, None, 5, None, None, None, None, None]`Notice that the level order traversal of the above tree would be `[1, 2, 3, 4, 5]`. Note the following points about this tree:* `None` represents the lack of a node. For example, `2` only has a left node; therefore, the next node after `4` (in level order) is represented as `None`* Similarly, `3` only has a left node; hence, the next node after `5` (in level order) is represented as `None`.* Also, `4` and `5` don't have any children. Therefore, the spots for their children in level order are represented by four `None` values (for each child of `4` and `5`).
###Code
from queue import Queue
def convert_arr_to_binary_tree(arr):
"""
Takes arr representing level-order traversal of Binary Tree
"""
index = 0
length = len(arr)
if length <= 0 or arr[0] == -1:
return None
root = BinaryTreeNode(arr[index])
index += 1
queue = Queue()
queue.put(root)
while not queue.empty():
current_node = queue.get()
left_child = arr[index]
index += 1
if left_child is not None:
left_node = BinaryTreeNode(left_child)
current_node.left = left_node
queue.put(left_node)
right_child = arr[index]
index += 1
if right_child is not None:
right_node = BinaryTreeNode(right_child)
current_node.right = right_node
queue.put(right_node)
return root
# Solution
def diameter_of_binary_tree(root):
return diameter_of_binary_tree_func(root)[1]
def diameter_of_binary_tree_func(root):
"""
Diameter for a particular BinaryTree Node will be:
1. Either diameter of left subtree
2. Or diameter of a right subtree
3. Sum of left-height and right-height
:param root:
:return: [height, diameter]
"""
if root is None:
return 0, 0
left_height, left_diameter = diameter_of_binary_tree_func(root.left)
right_height, right_diameter = diameter_of_binary_tree_func(root.right)
current_height = max(left_height, right_height) + 1
height_diameter = left_height + right_height
current_diameter = max(left_diameter, right_diameter, height_diameter)
return current_height, current_diameter
def test_function(test_case):
arr = test_case[0]
solution = test_case[1]
root = convert_arr_to_binary_tree(arr)
output = diameter_of_binary_tree(root)
print(output)
if output == solution:
print("Pass")
else:
print("Fail")
arr = [1, 2, 3, 4, 5, None, None, None, None, None, None]
solution = 3
test_case = [arr, solution]
test_function(test_case)
arr = [1, 2, 3, 4, None, 5, None, None, None, None, None]
solution = 4
test_case = [arr, solution]
test_function(test_case)
arr = [1, 2, 3, None, None, 4, 5, 6, None, 7, 8, 9, 10, None, None, None, None, None, None, 11, None, None, None]
solution = 6
test_case = [arr, solution]
test_function(test_case)
###Output
6
Pass
|
flask_app_pizza.ipynb | ###Markdown
Initial Examples **1. Using the print statement and variables**The print statement is used to display any value or sentence that you assign. To print a statement, you write 'print' with parentheses containing the content. There are 3 ways to print information: with a string of characters, using variables, and a combination of a string and variables. Note that in order to combine variables and a string of characters, there needs to be an 'f' in front of the string inside the parentheses and each variable is placed inside curly brackets.
###Code
print("Hello World!")
champagne_bucket = 1
print(champagne_bucket)
champagne_bucket= 1
print(f"The volume of champagne: {champagne_bucket} liters")
###Output
_____no_output_____
###Markdown
**2. Use of data types**We use different kinds of data types to suit our needs. Below mentioned are the 4 major data types.
###Code
int_example = 10  # a positive or negative whole number (no decimals)
float_example = 1.592  # a positive or negative value with decimals
char_example = 'A'  # a single character
string_example = 'This is a sentence.'  # a string of characters (we just shorten that to 'string')
print(f"The value of example is {float_example}")
###Output
_____no_output_____
###Markdown
**3. Conversion of data types**Upon conversion from float to integer type, only the whole-number part is kept as the integer (the decimal part is truncated, toward zero). It does not follow the classic rounding-off rules.
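A minimal illustration of this truncation behaviour (the values are arbitrary): `int()` simply drops the fractional part, while `round()` is shown only for comparison.
```python
print(int(3.9))    # 3  -> the decimal part is discarded, not rounded
print(int(-3.9))   # -3 -> truncation is toward zero
print(round(3.9))  # 4  -> classic rounding, for comparison
```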
###Code
converted_int = int(float_example)
print("Converted float_example to integer:",converted_int)
char_example = '123'
print("Converted char_example to integer:", int(char_example))
###Output
_____no_output_____
###Markdown
**4. Use of Lists** ***Here we define the shopping list***Assume the shopping list contains our daily requirements. Here we maintain a list of items to be bought. We can add and remove items as required.
###Code
list_example = ['eggs','potatoes','cereal']
print(list_example)
###Output
_____no_output_____
###Markdown
***Let us add another item to the shopping list***
###Code
list_example.append('bacon')
print(list_example)
###Output
_____no_output_____
###Markdown
***Turns out we have eggs at home and we don't need it to be on the shopping list***
###Code
list_example.remove('eggs')
print(list_example)
###Output
_____no_output_____
###Markdown
**5. Use of Dictionaries** ***Here we define the dictionary, just like a real one, and print the meaning of the word Noun***Dictionaries are an analogous form of data storage: each entry contains a 'key' which is associated with a 'value'. The word here is the 'key' and the meaning of the word is the 'value'.
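As a small extra example (the word 'Adverb' and its definition are only illustrative), once `dict_example` is defined in the cell below, new entries can be added and safely looked up like this:
```python
dict_example['Adverb'] = 'A word that modifies a verb, an adjective, or another adverb'
print(dict_example['Adverb'])
print(dict_example.get('Pronoun', 'Not in the dictionary yet'))  # .get avoids a KeyError for missing keys
```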
###Code
dict_example = {'Noun':'A word used to identify any of a class of people, places, or things ','Verb':'a word used to describe an action, state, or occurrence, and forming the main part of the predicate of a sentence, such as hear, become, happen.','Adjective':'A word naming an attribute of a noun, such as sweet, red, or technical'}
print(dict_example['Noun'])
###Output
_____no_output_____
###Markdown
*Let us print the dictionary*
###Code
print(dict_example)
###Output
_____no_output_____
###Markdown
**6. Use of the if statement**We take the input from the user and place it in a variable called 'outside'. Based on what is contained in the variable 'outside': if the weather outside is hot, we print 'I want a cold beverage'; if the weather outside is cold, we print 'I want a hot beverage'.
###Code
outside = input()
if outside == 'hot':
print('I want a cold beverage')
elif outside == 'cold':
print('I want a hot beverage')
###Output
_____no_output_____
###Markdown
**7. Use of loops** **Here we let the loop run 10 times**'var' is the increment counter. In both loop types, we allow the variable 'var' to range from 0-9. The progression of the var variable can be seen from the print statement.
###Code
var = 0
while var<10:
print(var)
var = var+1
for var in range(0,10):
print(var)
###Output
_____no_output_____
###Markdown
**8. Use of functions**'def' is a keyword that starts the definition of the function, 'add' is the name of the function, and a and b are function parameters that will be operated on inside the function. The 'return' keyword sends the output back to where the function is called.
###Code
def add(a,b):
return a+b
sum = add(5,1)
print(sum)
###Output
_____no_output_____
###Markdown
**9. Pip install example**We use the pip package manager to install the pandas, numpy and matplotlib libraries. The '!' symbol is used to tell the notebook to run a shell command.
###Code
!pip install pandas
!pip install numpy
!pip install matplotlib
###Output
_____no_output_____
###Markdown
Flask App Example ***Here we install the 'flask-ngrok' library which helps us create a server and deploy it on the ngrok platform***flask is a python library that is a basic backend framework. It allows the hosting of webapps with the help of a python code base.
###Code
!pip install flask-ngrok
###Output
_____no_output_____
###Markdown
***We download the repository containing the flask app code***
###Code
!git clone https://github.com/H10AI/PythonCrashCourse-PizzaPricePrediction.git
###Output
_____no_output_____
###Markdown
***Note***: 1. Upload the 'index.html' and 'Pizza-Price.csv' files from the github link that was provided.2. Drag and drop 'Pizza-Price.csv' into the Files section of google colab.3. In the Files section of Colab, right-click and create a new folder, name it 'templates'.4. Drag and drop the 'index.html' file into the templates folder.5. Continue to the next code snippet ***Task 1: Write code to convert takka to CAD***
###Code
def convert_to_cad(price):
#add conversion code here
return price
###Output
_____no_output_____
###Markdown
***The web app is hosted on the ngrok platform, click the ngrok link to interact with the web app***
###Code
from flask import Flask, request, render_template
import pickle
import numpy as np
from flask_ngrok import run_with_ngrok
filename = '/content/PythonCrashCourse-PizzaPricePrediction/pizza_price_final.sav'
app = Flask(__name__)
run_with_ngrok(app)
def get_pred_price(loaded_model,inch,restau,cheese,musroom,spicy):
hotel={'A':1,'B':2,'C':3,'D':4,'E':5,'F':6,'G':7,'H':8,'I':9,'J':10,'K':11,'L':12,'M':13,'N':14,'O':15,'P':16,'Q':17,'R':18,'S':19,'T':20}
param=[inch,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,cheese,musroom,spicy]
if restau in hotel.keys():
param[hotel[restau]]=1
price=loaded_model.predict([np.array(param)])
return price
loaded_model = pickle.load(open(filename, 'rb'))
predict = 0
restuarant = 'None'
yesno = {'yes':1,'no':0}
@app.route('/', methods=['POST','GET'])
def my_form_post():
global predict
global restuarant
error = ""
if request.method == 'GET':
# Form being submitted; grab data from form.
if request.args:
size = request.args.get('Size')
restuarant = request.args.get('Restuarant')
extr_cheese = yesno[request.args.get('extr_cheese')]
extr_mushroom = yesno[request.args.get('extr_mushroom')]
extr_spicy = yesno[request.args.get('extr_spicy')]
#print(size,restuarant,extr_cheese,extr_mushroom,extr_spicy
price = get_pred_price(loaded_model,size,restuarant,extr_cheese,extr_mushroom,extr_spicy)
predict = price[0]
predict = convert_to_cad(price)
print(predict)
# Render the sign-up page
return render_template('index.html', prediction_text='Price of the Pizza from Restaurant {} is $ {}'.format(restuarant,predict))
if __name__ == "__main__":
app.run()
###Output
_____no_output_____
###Markdown
Data Vizualisation
###Code
from google.colab import files
files.upload()
import pandas as pd
from matplotlib import pyplot as plt
df = pd.read_csv('Pizza-Price.csv')
df
#Price vs Size
x = df[' Size by Inch']
y = df['Price']
plt.scatter(x,y)
plt.show()
###Output
_____no_output_____
###Markdown
***Task 1: Display the scatter plot between Restaurants and Price***
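A possible sketch (using the 'Restaurant' and 'Price' columns loaded above; matplotlib can place the categorical restaurant labels directly on the x-axis):
```python
plt.scatter(df['Restaurant'], df['Price'])
plt.xlabel('Restaurant')
plt.ylabel('Price')
plt.show()
```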
###Code
#your code here
plt.bar(x, y)
df_plot = df[['Restaurant','Extra Mushroom']]
df_plot.head()
###Output
_____no_output_____
###Markdown
***Data manipulation using pandas and the if statement***
###Code
#conversion of columns: map the 'yes'/'no' strings to 1/0
counter=0
for value in df['Extra Mushroom']:
    if value == 'yes':
        df.loc[counter, 'Extra Mushroom'] = 1
    else:
        df.loc[counter, 'Extra Mushroom'] = 0
    counter+=1
df['Extra Mushroom']
#plot frequency
plt.hist(df['Extra Mushroom'], bins=10)
plt.gca().set(title='Frequency Histogram of Extra Mushroom', ylabel='Frequency');
###Output
_____no_output_____ |
data/data_scripts/Data_Pipeline_Process.ipynb | ###Markdown
XML-> CONLL Domain specific1. (Optional if data is already present in CONLL format) Specify the path to the XML data, and mention laptop vs rest; train vs test2. The following is performed: first tagging based on aspect and opinion lists- if an aspect or opinion is more than one word then it is tagged as B. and I.3. NOISY EXAMPLES (plenty of misspellings and inconsistencies in tagging) are removed through supervision4. Data is then output in CONLL formatCONLL-> Joint shared vocab and Joint domain labelled vocab 1. IMPORTANT: Domain name has to be included in CONLL format Joint shared vocab -> Trimmed glove vectorsJoint domain labelled vocab -> Trimmed w2vec and geo vectors1. 2.
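A minimal sketch of the intended CONLL-style output, using the BA/IA/BO/IO/OT tag set from the code below (the tokens and tags here are purely illustrative; one token and its tag per line, with sentences separated by a blank line, as written out by `tr_pkl_to_CONLL`):
```
the OT
battery BA
life IA
is OT
good BO
```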
###Code
import xml.etree.ElementTree as ET
import os
import csv
import sys
import re
import numpy as np
import string
import pickle
xml_path = "./XML_to_CONLL/xmls/Restaurants_Train.xml"
opinion_path = './XML_to_CONLL/Opinions/train_restaurant'
output_csv_path = "./XML_to_CONLL/Rest_Processed.pkl"
tree = ET.parse(xml_path)
with open(opinion_path,'r') as f: #opinions for dataset
opinion_list = f.readlines()
root = tree.getroot()
sentence = root[0][0].text.replace(u'\xa0',u' ')
#sentence = sentence.translate(string.punctuation)
sentence
#If aspect split
#len(root)
len(root[252])
root[252][2]
opinions = opinion_list[60]
opinions = re.sub(',','',opinions) #removing commas
ops = re.split('([-+][01])?[\r]?[\n]?', re.sub(',','',opinions)) #splitting
opin_words = list(map(lambda x: x.strip(), ops[:-1:2])) #trimming opinion words
opin_words, opinions
'_ '.strip()
x = ["Hello","How-are","you!", "doing ?", "toda-y","-","- "]
temp = []
for w in x:
if(w not in ['-','- ',' -']):
temp.extend(w.split('-'))
temp
temp = [u'i', u'charge', u'it', u'at', u'night', u'and', u'skip', u'taking', u'the', u'cord', u'with', u'me', u'because', u'of', u'the', u'good', u'battery', u'life.']
temp = map(lambda x: x.encode('ascii','replace'),temp)
map(lambda x: x.translate(None, string.punctuation),temp)
#tag_to_id = {start_tag:0,end_tag:-1, "BA":1, "IA":2, "BO":3, "IO":4, "OT":5}
def convert_XML_to_CONLL(xml_path, opinion_path, output_csv_path, domain="Laptop", to_lower = True):
error_list = []
resultant = []
res_sentences = []
res_tags = []
tree = ET.parse(xml_path) #xml dataset
with open(opinion_path,'r') as f: #opinions for dataset
opinion_list = f.readlines()
root = tree.getroot()
if(domain=="Laptop"):
opinion_list[2702] = 'highly dissatisfied'
opinion_list[1404] = 'completely immobile'
for i in range(len(root)):
#1) Get tokenized sentence and internally tokenize it
sentence = root[i][0].text.replace(u'\xa0',u' ')
sentence_tokenized = sentence.split()
internally_tokenized_sen = []
for word in sentence_tokenized:
word = word.strip()
if(to_lower):
word = word.lower()
if(word not in ['-']):
internally_tokenized_sen.extend(word.split('-'))
sentence_tokenized = internally_tokenized_sen
#print(sentence_tokenized)
sentence_tokenized = map(lambda x: x.encode('ascii','replace'), sentence_tokenized)
sentence_tokenized = map(lambda x: x.translate(None, string.punctuation), sentence_tokenized)
#sentence_id = root[i].attrib['id']
tags = ['OT' for word_ in sentence_tokenized]
#2) Get opinion words -> internally tokenized
opinions = opinion_list[i]
opinions = re.sub(',','',opinions) #removing commas
ops = re.split('([-+][01])?[\r]?[\n]?', re.sub(',','',opinions)) #splitting
opin_words = list(map(lambda x: x.strip(), ops[:-1:2])) #trimming opinion words
internally_tokenized_opins = []
#opin_words = list(map(lambda x:x.split(),opin_words))
for word in opin_words:
word = word.strip()
if(word in ['NIL']):
continue
if(to_lower):
word = word.lower()
if(word not in ['-']): #we don't want to include a hyphen
if("-" in word):
internally_tokenized_opins.extend(word.split('-'))
elif("n't" in word):
splits = word.split("n't")
splits.remove("")
splits = map(lambda x:x.strip(), splits)
splits_f = []
for w in splits:
splits_f.extend(w.split())
internally_tokenized_opins.extend(splits_f)
else:
internally_tokenized_opins.extend(word.split())
opin_tokenized = internally_tokenized_opins
try:
opin_tokenized = map(lambda x: x.encode('ascii','ignore'), opin_tokenized)
except:
print(opin_tokenized,i)
opin_tokenized = map(lambda x: x.translate(None, string.punctuation), opin_tokenized)
#3) Get aspect words --> internally tokenized
aspects = []
if(len(root[i])>1): #aspect words exist
aspects = []
for j in range(len(root[i][1])):
aspects.append(root[i][1][j].attrib['term'].replace(u'\xa0',u' '))
internally_tokenized_aspects = []
final_internally_tokenized_aspects = []
aspects_tokenized = []
for word in aspects:
word = word.strip()
if(to_lower):
word = word.lower()
if(word not in ['-']): #we don't want to include a hyphen
if("-" in word):
internally_tokenized_aspects.extend(word.split('-'))
elif("n't" in word):
splits = word.split("n't")
splits.remove("")
splits = map(lambda x:x.strip(), splits)
splits_f = []
for w in splits:
splits_f.extend(w.split())
internally_tokenized_aspects.extend(splits_f)
else:
internally_tokenized_aspects.extend(word.split())
for word in internally_tokenized_aspects:
final_internally_tokenized_aspects.extend(word.split())
aspects_tokenized = final_internally_tokenized_aspects
aspects_tokenized = map(lambda x: x.encode('ascii','replace'), aspects_tokenized)
aspects_tokenized = map(lambda x:x.translate(None, string.punctuation), aspects_tokenized)
#4) Do Process
#Process
#Go through each word in sentence tokenized
#Check if direct match for aspect/opinion
#If yes then change tag at that position
#2.2) If previous seq term was B-> then assign I
#remove the aspect/opinion term
#assert that aspect and opinion terms list is empty else there was an error
for loc,word in enumerate(sentence_tokenized):
if(word in aspects_tokenized):
if(loc>0 and (tags[loc-1]=='BA' or tags[loc-1]=='IA')):
tags[loc] = 'IA'
else:
tags[loc] = 'BA'
aspects_tokenized.remove(word)
elif(word in opin_tokenized):
if(loc>0 and (tags[loc-1]=='BO' or tags[loc-1]=='IO')):
tags[loc] = 'IO'
else:
tags[loc] = 'BO'
opin_tokenized.remove(word)
if(len(aspects_tokenized)>0 or (len(opin_tokenized)>0 and opin_tokenized!=['nil'] )):
#Error case-> mostly due to misspellings
#print(opinions)
print(i, opin_tokenized, sentence_tokenized)
#print(sentence_tokenized)
#print("Error at: ",i)
opin_tokenized_front = []
opin_tokenized_back = []
for op in opin_tokenized:
opin_tokenized_front.append(op[:len(op)/2]) #CHeck for misspellings
opin_tokenized_back.append(op[len(op)/2:])
for loc_i,word in enumerate(sentence_tokenized):
loc_j = 0
for opin_front,opin_back in zip(opin_tokenized_front,opin_tokenized_back):
if(opin_front in word):
print(i, opin_front, word)
x = int(input("Change word?"))
if(x==1):
word = opin_tokenized[loc_j]
if(loc_i>0 and tags[loc_i-1]=='BO'):
tags[loc_i] = 'IO'
else:
tags[loc_i] = 'BO'
elif(opin_back in word):
print(i, opin_back,word)
x = int(input("Change word?"))
if(x==1):
word = opin_tokenized[loc_j]
if(loc_i>0 and tags[loc_i-1]=='BO'):
tags[loc_i] = 'IO'
else:
tags[loc_i] = 'BO'
loc_j+=1
error_list.append(i)
resultant.append([sentence_tokenized, tags])
#Save as csv
#Output as CONLL format
return error_list, resultant
z = convert_XML_to_CONLL(xml_path, opinion_path, '.', "Rest")
len(z[1])
root[0], len(z[1])
errors= []
for i in range(len(z[1])):
aspects_tokenized = []
    if(len(root[i])>1): # aspect words exist
aspects = []
for j in range(len(root[i][1])):
try:
aspects.append(root[i][1][j].attrib['term'].replace(u'\xa0',u' '))
except:
print(root[i][1][j])
internally_tokenized_aspects = []
final_internally_tokenized_aspects = []
aspects_tokenized = []
for word in aspects:
word = word.strip()
if(True):
word = word.lower()
if(word not in ['-']): #we don't want to include a hyphen
if("-" in word):
internally_tokenized_aspects.extend(word.split('-'))
elif("n't" in word):
splits = word.split("n't")
splits.remove("")
splits = map(lambda x:x.strip(), splits)
splits_f = []
for w in splits:
splits_f.extend(w.split())
internally_tokenized_aspects.extend(splits_f)
else:
internally_tokenized_aspects.extend(word.split())
for word in internally_tokenized_aspects:
final_internally_tokenized_aspects.extend(word.split())
aspects_tokenized = final_internally_tokenized_aspects
aspects_tokenized = map(lambda x: x.encode('ascii','replace'), aspects_tokenized)
aspects_tokenized = map(lambda x:x.translate(None, string.punctuation), aspects_tokenized)
for loc,word in enumerate(z[1][i][0]):
tags = z[1][i][1]
if(word in aspects_tokenized):
if(loc>0 and (tags[loc-1]=='BA' or tags[loc-1]=='IA')):
tags[loc] = 'IA'
else:
tags[loc] = 'BA'
aspects_tokenized.remove(word)
if(word[:-1] in aspects_tokenized):
if(loc>0 and (tags[loc-1]=='BA' or tags[loc-1]=='IA')):
tags[loc] = 'IA'
else:
tags[loc] = 'BA'
aspects_tokenized.remove(word[:-1])
word = word[:-1]
if(len(aspects_tokenized)>0):
errors.append([i, aspects_tokenized, z[1][i]])
print(i,aspects_tokenized, z[1][i][0])
len(errors)
root[41][1][0].attrib['term']
errors[1]
z[1][2964][1][-2] = 'BO'
#z[1][400][1][8]= "BO" #, opinion_list[400]
#z[1][381][1][4] = "BO"<-- more than that have to change perfect etc
#z[1][415][0][3] = 'easy'
#z[1][415][1][3] = "BO"
#z[1][453][0][6] = "recommend"
#z[1][453][1][6] = "BO"
#z[1][487][0][3] = "recommend"
#z[1][487][1][3] = "BO"
#z[1][497][0] = ['it',
#z[1][497][1] = ['OT' for x in z[1][497][0]]
#z[1][576][1][8] = 'BO'
#z[1][580][0][-3] = 'easy'
#z[1][580][1][-3] = 'BO'
#z[1][591][0][-1] = 'dependable'
#z[1][591][1][-1]= 'BO'
#z[1][834][0] = ['with',
#z[1][834][1] = ['OT' for x in z[1][834][0]]
#z[1][834][1][7] = "BO"
#z[1][910][1][10] ="BO"
#z[1][937][0][21] = 'recommend'
#z[1][937][1][21] ="BO"
#z[1][1337][0][0] = 'comfortable'
#z[1][1337][1][0] = 'BO'
#z[1][1487][1][3] = 'IO'
#z[1][1652][1][3] = 'BO'
#z[1][1690][1][-3] = 'BO'
#z[1][2129][1][4] = 'BO'
#z[1][2238][0][4] = 'BO'
#z[1][2331][0][-1]="convenient"
#z[1][2331][1][-1] = "BO"
#z[1][2555][1][-7] = "BO"
#z[1][2601][0][-1] = 'sensitive'
z[1][2940] =[['strengths', 'well', 'shaped', 'weaknesses', 'a', 'bad', 'videocard'],['OT','BO','BA','OT','OT','BO','BA']]
#= ['strengths','well', 'shaped', 'weaknesses','a', 'bad', 'videocard']
z[1][35]
with open("FINAL_Rest_F.pkl",'w') as p1:
pickle.dump(z[1], p1)
#with open("FINAL_Rest.pkl",'r') as p1:
# z_f = pickle.load(p1)
z_f[1]
vocab = pickle.load(open('../vocab_to_id.pkl','r'))
embeddings = np.load('../Embeddings/Pruned/np_Laptopw2vec_200d_trimmed.npz')['embeddings']
vocab['windows-xp']
embeddings[6307]
import pickle
domain = "Rest"
tr_data_path = "./XML_to_CONLL/Rest_tr_list.pkl"
tr_split = 85
tr_output_path = "./{}_".format(domain)
def tr_pkl_to_CONLL(domain, tr_data_path, tr_split = 85, tr_output_path = "./"):
output_path_tr= tr_output_path+"{}_training_data.txt".format(domain)
output_path_dev = tr_output_path+"{}_dev_data.txt".format(domain)
tr_list = pickle.load(open(tr_data_path,'r'))
training_last_index = int((tr_split/100.0)*len(tr_list))
print(training_last_index)
training_list = tr_list[:training_last_index]
dev_list = tr_list[training_last_index:]
with open(output_path_tr,'w') as f1:
for words,tags in training_list:
for i, word in enumerate(words):
f1.write(word + " "+ tags[i]+"\n")
f1.write("\n")
print("Written output for training file to: {}".format(output_path_tr))
with open(output_path_dev,'w') as f1:
for words,tags in dev_list:
for i, word in enumerate(words):
f1.write(word + " "+tags[i]+"\n")
f1.write("\n")
print("Written output for dev file to: {}".format(output_path_dev))
with open('../old_samples/test.txt') as f1:
x = f1.readlines()
tr_pkl_to_CONLL(domain, tr_data_path)
with open('./Rest_dev_data.txt') as f1:
x2 = f1.readlines()
x[0:10],x2[0:19]
x2[0]
#Now to make a global vocab and trimmed word vectors
###Output
_____no_output_____ |
nb_case_core_synthetic_Ce01.ipynb | ###Markdown
Covariance model
###Code
core.cov_model_taper(r_at = core.r_cmb, tap_to = 500, tap_exp_p1 = 5, tap_exp_p2 = 2,
tap_scale_start = 0, tap_scale_end = 24, plot_taper = True,
save_fig = False, save_string = "case_core_synthetic", save_dpi = 300)
###Output
All eigenvalues > 0: True
Cov model is pos def: True
###Markdown
Synthetic sat
###Code
C_e_const = 0.1
s_sat = SDSS(comment, N_SH = shc_g, sim_type = "core", sat_height = 350, N_SH_secondary = None)
s_sat.load_swarm("A")
grid_in = np.array([s_sat.grid_phi, 90-s_sat.grid_theta]).T
s_sat.make_grid(s_sat.grid_radial, grid_in, calc_sph_d = False)
s_sat.generate_map(grid_type = "swarm")
s_sat.data += np.random.normal(scale = 2.0, size = s_sat.data.shape)
###Output
_____no_output_____
###Markdown
Source truth
###Code
s_source = SDSS(comment, N_SH = shc_g, sim_type = "core", sat_height = 350, N_SH_secondary = None)
s_source.grid_glq(nmax = shc_grid, r_at = core.r_cmb)
grid_in = np.array([s_source.grid_phi, 90-s_source.grid_theta]).T
s_source.make_grid(s_source.r_cmb, grid_in, calc_sph_d = False)
s_source.generate_map()
###Output
_____no_output_____
###Markdown
System equations
###Code
core.integrating_kernel(s_sat, C_e_const = C_e_const, C_mm_supply = core.C_ens_tap)
###Output
_____no_output_____
###Markdown
SDSSIM
###Code
N_sim = 100
core.target_var = np.max(core.C_ens_tap)
core.run_sim(N_sim, core.grid_N, core.C_mm_all, core.C_dd, core.C_dm_all, core.G,
s_sat.data, core.data, scale_m_i = True, unit_d = False, collect_all = True,
sense_running_error = True, save_string = nb_name, sim_stochastic = False, solve_cho = True)
core.realization_to_sh_coeff(core.r_cmb, set_nmax = shc_grid)
#core.covmod_lsq_equiv(s_sat.data, C_Br_model, core.G, core.r_cmb)
# Reload plot module when making small changes
import importlib
importlib.reload(mt_util)
#truth_obj = s_source
core.grid_glq(nmax = shc_grid, r_at = core.r_cmb)
#m_mode = m_DSS_mode
mt_util.plot_sdssim_reproduce(core, core.m_DSS_res, m_equiv_lsq = None, truth_obj = s_source,
lags_use = 1000, spec_r_at = core.r_cmb, spec_show_differences = False,
spec_ti_ens = True, lwidth = 0.6, lwidth_div = 3, lwidth_mult = 2,
label_fontsize = "small",
res_use = True, sv_use = False, unit_field = "[mT]", hist_ti_ens_limit = [-6,6],
unit_transform_n_to_m = True, patch_legend = True, ens_prior = True,
model_dict = {}, figsize=(9,16), hist_ti_ens = "all", hist_density = False,
hist_bins = 21, res_bins = 21, hist_pos_mean = False,
left=0.08, bottom=0.12, right=0.92, top=0.95, wspace = 0.2, hspace=0.25,
savefig = False, save_string = "case_core_synthetic",
save_dpi = 100, save_path = "images/")
print(core)
core.pickle_save_self(nb_name)
list_coord = np.array([[0,2], [-30,30], [45,-45], [70,-170]])
list_coord[:,0] = 90 - list_coord[:,0]
list_coord[:,1][list_coord[:,1]<0.0] = 360 + list_coord[:,1][list_coord[:,1]<0.0]
m_coord_sph = np.hstack((90 - core.lat.reshape(-1,1), core.lon.reshape(-1,1)))
idx_min = []
for coord in list_coord:
idx_min.append(np.sum(np.abs(m_coord_sph - coord),axis=1).argmin())
print(idx_min)
m_hists_coord = m_coord_sph[idx_min]
m_hists = core.m_DSS[idx_min,:]
left=0.08
bottom=0.12
right=0.92
top=0.95
wspace = 0.2
hspace=0.25
color_rgb_zesty_pos = (1.0, 0.5372549019607843, 0.30196078431372547)
color_rgb_zesty_neg = (0.5019607843137255, 0.6862745098039216, 1.0)
m_hists_scale = m_hists*10**(-6)
tile_size_row = 2
tile_size_column = 2
label_fontsize = 10
fig = plt.figure(figsize=(9,9)) # Initiate figure with constrained layout
# Generate ratio lists
h_ratio = [1]*tile_size_row
w_ratio = [1]*tile_size_column
gs = fig.add_gridspec(tile_size_row, tile_size_column, height_ratios=h_ratio, width_ratios=w_ratio) # Add x-by-y grid
for i in np.arange(0,list_coord.shape[0]):
ax = fig.add_subplot(gs[i])
y,binEdges=np.histogram(m_hists_scale[i,:],bins=11,density=True)
bincenters = 0.5*(binEdges[1:]+binEdges[:-1])
ax.plot(bincenters, y, '-', color = color_rgb_zesty_neg,
label='{}'.format(str(np.round(m_hists_coord[i,:],decimals=1))).lstrip('[').rstrip(']'),
linewidth = 1)
#ax.set_title('test')
#ax.annotate("test", (0.05, 0.5), xycoords='axes fraction', va='center', fontsize = label_fontsize)
ax.set_xlabel("Field value [mT]")
ax.set_ylabel("PDF")
ax.legend(loc='best', fontsize = label_fontsize)
fig.subplots_adjust(left=left, bottom=bottom, right=right, top=top, wspace=wspace, hspace=hspace)
#core.grid_glq(nmax = 256, r_at = core.r_cmb)
#core.grid_glq(nmax = 120, r_at = core.r_cmb)
core.grid_glq(nmax = 400, r_at = core.r_cmb)
set_nmax = shc_grid
core.ensemble_B(core.g_spec, nmax = set_nmax, r_at = core.r_cmb, grid_type = "glq")
zs_eqa = core.B_ensemble[:,0,:].copy()
#core.g_spec_mean = np.mean(core.g_spec,axis=1)
core.ensemble_B(core.g_spec_mean, nmax = set_nmax, r_at = core.r_cmb, grid_type = "glq")
zs_mean_eqa = core.B_ensemble[:,0].copy()
#core.ensemble_B(core.g_prior[:mt_util.shc_vec_len(set_nmax)], nmax = set_nmax, r_at = core.r_cmb, grid_type = "glq")
core.ensemble_B(s_sat.g_prior[:mt_util.shc_vec_len(set_nmax)], nmax = set_nmax, r_at = core.r_cmb, grid_type = "glq")
prior_eqa = core.B_ensemble[:,0].copy()
#core.ensemble_B(core.g_lsq_equiv, nmax = set_nmax, r_at = core.r_cmb, grid_type = "glq")
#lsq_eqa = core.B_ensemble[:,0].copy()
# Reload plot module when making small changes
import importlib
importlib.reload(mt_util)
# ccrs.PlateCarree()
# ccrs.Mollweide()
# ccrs.Orthographic(central_longitude=0.0, central_latitude=0.0)
mt_util.plot_ensemble_map_tiles(core.grid_phi, 90-core.grid_theta, zs_eqa,
field_compare = prior_eqa, field_lsq = None, field_mean = zs_mean_eqa,
tile_size_row = 3, tile_size_column = 2,
figsize=(9,12), limit_for_SF = 10**6, point_size = 0.1, cbar_mm_factor = 1, cbar_limit = [-1.6,1.6],
coast_width = 0.4, coast_color = "grey", unit_transform_n_to_m = True,
cbar_h = 0.1, cbar_text = "mT", cbar_text_color = "black",
left=0.03, bottom=0.12, right=0.97, top=0.95, wspace = 0.05, hspace=0.25,
savefig = False, save_string = "case_core_synthetic",
projection = ccrs.Mollweide(), use_gridlines = True,
gridlines_width = 0.4, gridlines_alpha = 0.4, save_dpi = 100)
###Output
_____no_output_____ |
MileStone_Project-2_War.ipynb | ###Markdown
Declaring these as tuples is beneficial as they can no longer be changed later in the code
###Code
suits = ("Hearts", "Diamonds", "Clubs", "Spades")
ranks = ('Two','Three','Four','Five','Six','Seven','Eight','Nine','Ten','Jack','Queen','King','Ace')
values = {'Two':2, 'Three':3, 'Four':4, 'Five':5, 'Six':6, 'Seven': 7, 'Eight': 8, 'Nine': 9, 'Ten': 10, 'Jack': 11, 'Queen': 12, 'King': 13, 'Ace': 14}
###Output
_____no_output_____
###Markdown
Card Class
###Code
class Card():
def __init__(self, suit, rank):
rank = rank.lower()
rank = rank.capitalize()
suit = (suit.lower()).capitalize()
self.suit = suit
self.rank = rank
self.value = values[rank]
def __str__(self):
return self.rank + " of " + self.suit
two_hearts = Card("HearTs", "tWO")
two_hearts
print(two_hearts)
two_hearts.value
three_clubs = Card("clubs","three")
three_clubs.suit
two_hearts.value < three_clubs.value
###Output
_____no_output_____
###Markdown
Deck Class
###Code
class Deck():
def __init__(self):
self.all_cards = []
for suit in suits:
for rank in ranks:
created_card = Card(suit,rank)
#Create the Card object
self.all_cards.append(created_card)
def shuffle(self):
random.shuffle(self.all_cards)
def deal_one(self):
try:
return self.all_cards.pop()
except:
print("Deck Empty!")
new_deck = Deck()
for obj in new_deck.all_cards:
print(obj)
bottom_card = new_deck.all_cards[-1]
print(bottom_card)
new_deck.shuffle() #All happening to the original list, nothing is returned
bottom_card = new_deck.all_cards[-1]
print(bottom_card)
for obj in new_deck.all_cards:
print(obj)
new_deck = Deck()
new_deck.shuffle()
mycard = new_deck.deal_one()
mycard
print(mycard)
len(new_deck.all_cards)
###Output
_____no_output_____
###Markdown
Player Class
###Code
class Player():
def __init__(self, name):
self.name = name
self.all_cards = []
def remove_one(self): #removing from the top
return self.all_cards.pop()
def add_cards(self, new_cards): #adding at the bottom.
if type(new_cards) == type([]):
self.all_cards.extend(new_cards)
else:
self.all_cards.append(new_cards)
#A method always has direct access to the class attributes
def __str__(self):
return f'Player {self.name} has {len(self.all_cards)} cards.'
new_player = Player("BA")
print(new_player)
print(mycard)
new_player.add_cards(mycard)
print(new_player)
print(new_player.all_cards[0])
new_player.add_cards([mycard,mycard,mycard])
print(new_player)
new_player.remove_one()
print(new_player)
###Output
Player BA has 3 cards.
###Markdown
Game Logic
###Code
from colorama import Fore
print(Fore.RED + "What's up")
def check(player1, player2,step):
if len(player1.all_cards)==0:
print(f'This game was won by Player 2 at step {step}')
return False
elif len(player2.all_cards)==0:
print(f'This game was won by Player 1 at step {step}')
return False
else:
return True
def WAR():
step = 0
player1 = Player(1)
player2 = Player(2)
original_deck = Deck()
original_deck.shuffle()
player1.add_cards(original_deck.all_cards[0:26])
player2.add_cards(original_deck.all_cards[26:53])
print(f'The deck had {len(original_deck.all_cards)} cards')
game_on = True
in_war = False
want_to_continue = 'Y'
while game_on and (want_to_continue == 'y' or want_to_continue == 'Y'):
game_on = check(player1,player2,step)
print("Deal now.")
step += 1
deal1 = player1.remove_one()
deal2 = player2.remove_one()
# print(f'{type(deal1)} and {type(deal2)}')
if deal1.value > deal2.value:
print(f'Player 1 won this round with {deal1} vs {deal2}')
player1.add_cards(deal1)
player1.add_cards(deal2)
elif deal1.value < deal2.value:
print(f'Player 2 won this round with {deal2} vs {deal1}')
player2.add_cards(deal1)
player2.add_cards(deal2)
else:
winner = None
print(Fore.RED + 'WAR!')
print(Fore.BLACK + f'with both getting {deal1} and {deal2}')
deal1_on_war = [deal1]
deal2_on_war = [deal2]
in_war = True
n = 0
while in_war:
n += 1
print('Draw out another pair of cards.')
game_on = check(player1,player2,step)
if not game_on:
print("Can't continue due to insufficient cards.")
return 0
deal1_on_war.append(player1.remove_one())
deal2_on_war.append(player2.remove_one())
if deal1_on_war[n].value == deal2_on_war[n].value:
print(Fore.RED + 'War again!')
print(Fore.BLACK + f'Your cards were {deal1} and {deal2}.')
continue
elif deal1_on_war[n].value != deal2_on_war[n].value:
in_war = False
if deal1_on_war[n].value > deal2_on_war[n].value:
winner = 1
print(f'Player 1 won the war! He gets all the cards {len(deal1_on_war + deal2_on_war)}.')
player1.add_cards(deal1_on_war)
player1.add_cards(deal2_on_war)
else:
winner = 2
print(f'Player 2 won the war! He gets all the cards {len(deal1_on_war + deal2_on_war)}')
player2.add_cards(deal1_on_war)
player2.add_cards(deal2_on_war)
# want_to_continue = input("Do you want to see the next move? (Y/N):")
print(player1, player2)
print("________________________")
game_on = check(player1,player2,step)
WAR()
###Output
The deck had 52 cards
Deal now.
Player 1 won this round with Ace of Hearts vs Two of Diamonds
Player 1 has 27 cards. Player 2 has 25 cards.
________________________
Deal now.
Player 2 won this round with Jack of Clubs vs Two of Diamonds
Player 1 has 26 cards. Player 2 has 26 cards.
________________________
Deal now.
Player 1 won this round with Ace of Hearts vs Jack of Clubs
Player 1 has 27 cards. Player 2 has 25 cards.
________________________
Deal now.
Player 1 won this round with Jack of Clubs vs Two of Diamonds
Player 1 has 28 cards. Player 2 has 24 cards.
________________________
Deal now.
Player 2 won this round with Jack of Spades vs Two of Diamonds
Player 1 has 27 cards. Player 2 has 25 cards.
________________________
Deal now.
[31mWAR!
[30mwith both getting Jack of Clubs and Jack of Spades
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 29 cards. Player 2 has 23 cards.
________________________
Deal now.
[31mWAR!
[30mwith both getting Two of Diamonds and Two of Hearts
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 31 cards. Player 2 has 21 cards.
________________________
Deal now.
Player 2 won this round with King of Clubs vs Eight of Diamonds
Player 1 has 30 cards. Player 2 has 22 cards.
________________________
Deal now.
Player 2 won this round with King of Clubs vs Two of Hearts
Player 1 has 29 cards. Player 2 has 23 cards.
________________________
Deal now.
Player 2 won this round with King of Clubs vs Jack of Spades
Player 1 has 28 cards. Player 2 has 24 cards.
________________________
Deal now.
Player 2 won this round with King of Clubs vs Two of Diamonds
Player 1 has 27 cards. Player 2 has 25 cards.
________________________
Deal now.
Player 1 won this round with Ace of Hearts vs King of Clubs
Player 1 has 28 cards. Player 2 has 24 cards.
________________________
Deal now.
Player 1 won this round with King of Clubs vs Two of Diamonds
Player 1 has 29 cards. Player 2 has 23 cards.
________________________
Deal now.
Player 2 won this round with Jack of Spades vs Two of Diamonds
Player 1 has 28 cards. Player 2 has 24 cards.
________________________
Deal now.
Player 1 won this round with King of Clubs vs Jack of Spades
Player 1 has 29 cards. Player 2 has 23 cards.
________________________
Deal now.
Player 1 won this round with Jack of Spades vs Two of Diamonds
Player 1 has 30 cards. Player 2 has 22 cards.
________________________
Deal now.
[31mWAR!
[30mwith both getting Two of Diamonds and Two of Hearts
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 32 cards. Player 2 has 20 cards.
________________________
Deal now.
Player 1 won this round with Eight of Diamonds vs Three of Clubs
Player 1 has 33 cards. Player 2 has 19 cards.
________________________
Deal now.
Player 2 won this round with Five of Clubs vs Three of Clubs
Player 1 has 32 cards. Player 2 has 20 cards.
________________________
Deal now.
Player 1 won this round with Eight of Diamonds vs Five of Clubs
Player 1 has 33 cards. Player 2 has 19 cards.
________________________
Deal now.
Player 1 won this round with Five of Clubs vs Three of Clubs
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 2 won this round with Four of Clubs vs Three of Clubs
Player 1 has 33 cards. Player 2 has 19 cards.
________________________
Deal now.
Player 1 won this round with Five of Clubs vs Four of Clubs
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 1 won this round with Four of Clubs vs Three of Clubs
Player 1 has 35 cards. Player 2 has 17 cards.
________________________
Deal now.
Player 2 won this round with King of Spades vs Three of Clubs
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 2 won this round with King of Spades vs Four of Clubs
Player 1 has 33 cards. Player 2 has 19 cards.
________________________
Deal now.
Player 2 won this round with King of Spades vs Five of Clubs
Player 1 has 32 cards. Player 2 has 20 cards.
________________________
Deal now.
Player 2 won this round with King of Spades vs Eight of Diamonds
Player 1 has 31 cards. Player 2 has 21 cards.
________________________
Deal now.
Player 2 won this round with King of Spades vs Two of Hearts
Player 1 has 30 cards. Player 2 has 22 cards.
________________________
Deal now.
Player 2 won this round with King of Spades vs Jack of Spades
Player 1 has 29 cards. Player 2 has 23 cards.
________________________
Deal now.
Player 2 won this round with King of Spades vs Two of Diamonds
Player 1 has 28 cards. Player 2 has 24 cards.
________________________
Deal now.
WAR!
with both getting King of Clubs and King of Spades
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 30 cards. Player 2 has 22 cards.
________________________
Deal now.
Player 2 won this round with Jack of Spades vs Two of Diamonds
Player 1 has 29 cards. Player 2 has 23 cards.
________________________
Deal now.
Player 1 won this round with King of Spades vs Jack of Spades
Player 1 has 30 cards. Player 2 has 22 cards.
________________________
Deal now.
Player 1 won this round with Jack of Spades vs Two of Diamonds
Player 1 has 31 cards. Player 2 has 21 cards.
________________________
Deal now.
WAR!
with both getting Two of Diamonds and Two of Hearts
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 33 cards. Player 2 has 19 cards.
________________________
Deal now.
Player 1 won this round with Eight of Diamonds vs Five of Clubs
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 1 won this round with Five of Clubs vs Four of Clubs
Player 1 has 35 cards. Player 2 has 17 cards.
________________________
Deal now.
Player 1 won this round with Four of Clubs vs Three of Clubs
Player 1 has 36 cards. Player 2 has 16 cards.
________________________
Deal now.
Player 2 won this round with Eight of Spades vs Three of Clubs
Player 1 has 35 cards. Player 2 has 17 cards.
________________________
Deal now.
Player 2 won this round with Eight of Spades vs Four of Clubs
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 2 won this round with Eight of Spades vs Five of Clubs
Player 1 has 33 cards. Player 2 has 19 cards.
________________________
Deal now.
WAR!
with both getting Eight of Diamonds and Eight of Spades
Draw out another pair of cards.
Player 2 won the war! He gets all the cards 4
Player 1 has 31 cards. Player 2 has 21 cards.
________________________
Deal now.
Player 1 won this round with Jack of Spades vs Five of Clubs
Player 1 has 32 cards. Player 2 has 20 cards.
________________________
Deal now.
Player 2 won this round with Eight of Spades vs Five of Clubs
Player 1 has 31 cards. Player 2 has 21 cards.
________________________
Deal now.
Player 1 won this round with Jack of Spades vs Eight of Spades
Player 1 has 32 cards. Player 2 has 20 cards.
________________________
Deal now.
Player 1 won this round with Eight of Spades vs Five of Clubs
Player 1 has 33 cards. Player 2 has 19 cards.
________________________
Deal now.
Player 1 won this round with Five of Clubs vs Two of Hearts
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 2 won this round with Eight of Diamonds vs Two of Hearts
Player 1 has 33 cards. Player 2 has 19 cards.
________________________
Deal now.
Player 2 won this round with Eight of Diamonds vs Five of Clubs
Player 1 has 32 cards. Player 2 has 20 cards.
________________________
Deal now.
WAR!
with both getting Eight of Spades and Eight of Diamonds
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 1 won this round with Five of Clubs vs Two of Hearts
Player 1 has 35 cards. Player 2 has 17 cards.
________________________
Deal now.
Player 2 won this round with Four of Clubs vs Two of Hearts
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 1 won this round with Five of Clubs vs Four of Clubs
Player 1 has 35 cards. Player 2 has 17 cards.
________________________
Deal now.
Player 1 won this round with Four of Clubs vs Two of Hearts
Player 1 has 36 cards. Player 2 has 16 cards.
________________________
Deal now.
Player 2 won this round with Three of Clubs vs Two of Hearts
Player 1 has 35 cards. Player 2 has 17 cards.
________________________
Deal now.
Player 1 won this round with Four of Clubs vs Three of Clubs
Player 1 has 36 cards. Player 2 has 16 cards.
________________________
Deal now.
Player 1 won this round with Three of Clubs vs Two of Hearts
Player 1 has 37 cards. Player 2 has 15 cards.
________________________
Deal now.
Player 2 won this round with Seven of Diamonds vs Two of Hearts
Player 1 has 36 cards. Player 2 has 16 cards.
________________________
Deal now.
Player 2 won this round with Seven of Diamonds vs Three of Clubs
Player 1 has 35 cards. Player 2 has 17 cards.
________________________
Deal now.
Player 2 won this round with Seven of Diamonds vs Four of Clubs
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 2 won this round with Seven of Diamonds vs Five of Clubs
Player 1 has 33 cards. Player 2 has 19 cards.
________________________
Deal now.
Player 1 won this round with Eight of Diamonds vs Seven of Diamonds
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 1 won this round with Seven of Diamonds vs Five of Clubs
Player 1 has 35 cards. Player 2 has 17 cards.
________________________
Deal now.
Player 1 won this round with Five of Clubs vs Four of Clubs
Player 1 has 36 cards. Player 2 has 16 cards.
________________________
Deal now.
Player 1 won this round with Four of Clubs vs Three of Clubs
Player 1 has 37 cards. Player 2 has 15 cards.
________________________
Deal now.
Player 1 won this round with Three of Clubs vs Two of Hearts
Player 1 has 38 cards. Player 2 has 14 cards.
________________________
Deal now.
Player 2 won this round with Queen of Diamonds vs Two of Hearts
Player 1 has 37 cards. Player 2 has 15 cards.
________________________
Deal now.
Player 2 won this round with Queen of Diamonds vs Three of Clubs
Player 1 has 36 cards. Player 2 has 16 cards.
________________________
Deal now.
Player 2 won this round with Queen of Diamonds vs Four of Clubs
Player 1 has 35 cards. Player 2 has 17 cards.
________________________
Deal now.
Player 2 won this round with Queen of Diamonds vs Five of Clubs
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 2 won this round with Queen of Diamonds vs Seven of Diamonds
Player 1 has 33 cards. Player 2 has 19 cards.
________________________
Deal now.
Player 2 won this round with Queen of Diamonds vs Eight of Diamonds
Player 1 has 32 cards. Player 2 has 20 cards.
________________________
Deal now.
Player 2 won this round with Queen of Diamonds vs Jack of Spades
Player 1 has 31 cards. Player 2 has 21 cards.
________________________
Deal now.
Player 2 won this round with Queen of Diamonds vs Eight of Spades
Player 1 has 30 cards. Player 2 has 22 cards.
________________________
Deal now.
Player 2 won this round with Queen of Diamonds vs Two of Diamonds
Player 1 has 29 cards. Player 2 has 23 cards.
________________________
Deal now.
Player 1 won this round with King of Spades vs Queen of Diamonds
Player 1 has 30 cards. Player 2 has 22 cards.
________________________
Deal now.
Player 1 won this round with Queen of Diamonds vs Two of Diamonds
Player 1 has 31 cards. Player 2 has 21 cards.
________________________
Deal now.
Player 2 won this round with Eight of Spades vs Two of Diamonds
Player 1 has 30 cards. Player 2 has 22 cards.
________________________
Deal now.
Player 1 won this round with Queen of Diamonds vs Eight of Spades
Player 1 has 31 cards. Player 2 has 21 cards.
________________________
Deal now.
Player 1 won this round with Eight of Spades vs Two of Diamonds
Player 1 has 32 cards. Player 2 has 20 cards.
________________________
Deal now.
Player 2 won this round with Jack of Spades vs Two of Diamonds
Player 1 has 31 cards. Player 2 has 21 cards.
________________________
Deal now.
Player 2 won this round with Jack of Spades vs Eight of Spades
Player 1 has 30 cards. Player 2 has 22 cards.
________________________
Deal now.
Player 1 won this round with Queen of Diamonds vs Jack of Spades
Player 1 has 31 cards. Player 2 has 21 cards.
________________________
Deal now.
Player 1 won this round with Jack of Spades vs Eight of Spades
Player 1 has 32 cards. Player 2 has 20 cards.
________________________
Deal now.
Player 1 won this round with Eight of Spades vs Two of Diamonds
Player 1 has 33 cards. Player 2 has 19 cards.
________________________
Deal now.
Player 2 won this round with Eight of Diamonds vs Two of Diamonds
Player 1 has 32 cards. Player 2 has 20 cards.
________________________
Deal now.
WAR!
with both getting Eight of Spades and Eight of Diamonds
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 2 won this round with Seven of Diamonds vs Two of Diamonds
Player 1 has 33 cards. Player 2 has 19 cards.
________________________
Deal now.
Player 1 won this round with Eight of Diamonds vs Seven of Diamonds
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 1 won this round with Seven of Diamonds vs Two of Diamonds
Player 1 has 35 cards. Player 2 has 17 cards.
________________________
Deal now.
Player 2 won this round with Five of Clubs vs Two of Diamonds
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 1 won this round with Seven of Diamonds vs Five of Clubs
Player 1 has 35 cards. Player 2 has 17 cards.
________________________
Deal now.
Player 1 won this round with Five of Clubs vs Two of Diamonds
Player 1 has 36 cards. Player 2 has 16 cards.
________________________
Deal now.
Player 2 won this round with Four of Clubs vs Two of Diamonds
Player 1 has 35 cards. Player 2 has 17 cards.
________________________
Deal now.
Player 1 won this round with Five of Clubs vs Four of Clubs
Player 1 has 36 cards. Player 2 has 16 cards.
________________________
Deal now.
Player 1 won this round with Four of Clubs vs Two of Diamonds
Player 1 has 37 cards. Player 2 has 15 cards.
________________________
Deal now.
Player 2 won this round with Three of Clubs vs Two of Diamonds
Player 1 has 36 cards. Player 2 has 16 cards.
________________________
Deal now.
Player 1 won this round with Four of Clubs vs Three of Clubs
Player 1 has 37 cards. Player 2 has 15 cards.
________________________
Deal now.
Player 1 won this round with Three of Clubs vs Two of Diamonds
Player 1 has 38 cards. Player 2 has 14 cards.
________________________
Deal now.
WAR!
with both getting Two of Diamonds and Two of Hearts
Draw out another pair of cards.
Player 2 won the war! He gets all the cards 4
Player 1 has 36 cards. Player 2 has 16 cards.
________________________
Deal now.
Player 2 won this round with Seven of Clubs vs Four of Clubs
Player 1 has 35 cards. Player 2 has 17 cards.
________________________
Deal now.
Player 2 won this round with Seven of Clubs vs Five of Clubs
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
WAR!
with both getting Seven of Diamonds and Seven of Clubs
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 36 cards. Player 2 has 16 cards.
________________________
Deal now.
Player 1 won this round with Five of Clubs vs Four of Clubs
Player 1 has 37 cards. Player 2 has 15 cards.
________________________
Deal now.
Player 1 won this round with Four of Clubs vs Two of Hearts
Player 1 has 38 cards. Player 2 has 14 cards.
________________________
Deal now.
Player 2 won this round with Three of Clubs vs Two of Hearts
Player 1 has 37 cards. Player 2 has 15 cards.
________________________
Deal now.
Player 1 won this round with Four of Clubs vs Three of Clubs
Player 1 has 38 cards. Player 2 has 14 cards.
________________________
Deal now.
Player 1 won this round with Three of Clubs vs Two of Hearts
Player 1 has 39 cards. Player 2 has 13 cards.
________________________
Deal now.
WAR!
with both getting Two of Hearts and Two of Diamonds
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 41 cards. Player 2 has 11 cards.
________________________
Deal now.
Player 2 won this round with Three of Spades vs Two of Clubs
Player 1 has 40 cards. Player 2 has 12 cards.
________________________
Deal now.
Player 2 won this round with Three of Spades vs Two of Diamonds
Player 1 has 39 cards. Player 2 has 13 cards.
________________________
Deal now.
WAR!
with both getting Three of Clubs and Three of Spades
Draw out another pair of cards.
War again!
Your cards were Three of Clubs and Three of Spades.
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 6.
Player 1 has 42 cards. Player 2 has 10 cards.
________________________
Deal now.
Player 2 won this round with Nine of Spades vs Two of Clubs
Player 1 has 41 cards. Player 2 has 11 cards.
________________________
Deal now.
Player 2 won this round with Nine of Spades vs Two of Diamonds
Player 1 has 40 cards. Player 2 has 12 cards.
________________________
Deal now.
Player 2 won this round with Nine of Spades vs Three of Spades
Player 1 has 39 cards. Player 2 has 13 cards.
________________________
Deal now.
Player 2 won this round with Nine of Spades vs Four of Clubs
Player 1 has 38 cards. Player 2 has 14 cards.
________________________
Deal now.
Player 2 won this round with Nine of Spades vs Two of Hearts
Player 1 has 37 cards. Player 2 has 15 cards.
________________________
Deal now.
Player 2 won this round with Nine of Spades vs Three of Clubs
Player 1 has 36 cards. Player 2 has 16 cards.
________________________
Deal now.
Player 2 won this round with Nine of Spades vs Five of Clubs
Player 1 has 35 cards. Player 2 has 17 cards.
________________________
Deal now.
Player 2 won this round with Nine of Spades vs Seven of Clubs
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 2 won this round with Nine of Spades vs Eight of Diamonds
Player 1 has 33 cards. Player 2 has 19 cards.
________________________
Deal now.
Player 2 won this round with Nine of Spades vs Seven of Diamonds
Player 1 has 32 cards. Player 2 has 20 cards.
________________________
Deal now.
Player 1 won this round with Jack of Spades vs Nine of Spades
Player 1 has 33 cards. Player 2 has 19 cards.
________________________
Deal now.
Player 1 won this round with Nine of Spades vs Seven of Diamonds
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 2 won this round with Eight of Diamonds vs Seven of Diamonds
Player 1 has 33 cards. Player 2 has 19 cards.
________________________
Deal now.
Player 1 won this round with Nine of Spades vs Eight of Diamonds
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 1 won this round with Eight of Diamonds vs Seven of Diamonds
Player 1 has 35 cards. Player 2 has 17 cards.
________________________
Deal now.
WAR!
with both getting Seven of Diamonds and Seven of Clubs
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 37 cards. Player 2 has 15 cards.
________________________
Deal now.
Player 1 won this round with Five of Clubs vs Three of Clubs
Player 1 has 38 cards. Player 2 has 14 cards.
________________________
Deal now.
Player 1 won this round with Three of Clubs vs Two of Hearts
Player 1 has 39 cards. Player 2 has 13 cards.
________________________
Deal now.
Player 2 won this round with Four of Clubs vs Two of Hearts
Player 1 has 38 cards. Player 2 has 14 cards.
________________________
Deal now.
Player 2 won this round with Four of Clubs vs Three of Clubs
Player 1 has 37 cards. Player 2 has 15 cards.
________________________
Deal now.
Player 1 won this round with Five of Clubs vs Four of Clubs
Player 1 has 38 cards. Player 2 has 14 cards.
________________________
Deal now.
Player 1 won this round with Four of Clubs vs Three of Clubs
Player 1 has 39 cards. Player 2 has 13 cards.
________________________
Deal now.
Player 1 won this round with Three of Clubs vs Two of Hearts
Player 1 has 40 cards. Player 2 has 12 cards.
________________________
Deal now.
Player 2 won this round with Three of Spades vs Two of Hearts
Player 1 has 39 cards. Player 2 has 13 cards.
________________________
Deal now.
WAR!
with both getting Three of Clubs and Three of Spades
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 41 cards. Player 2 has 11 cards.
________________________
Deal now.
WAR!
with both getting Two of Hearts and Two of Diamonds
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 43 cards. Player 2 has 9 cards.
________________________
Deal now.
Player 2 won this round with Queen of Spades vs Two of Clubs
Player 1 has 42 cards. Player 2 has 10 cards.
________________________
Deal now.
Player 2 won this round with Queen of Spades vs Two of Diamonds
Player 1 has 41 cards. Player 2 has 11 cards.
________________________
Deal now.
Player 2 won this round with Queen of Spades vs Three of Spades
Player 1 has 40 cards. Player 2 has 12 cards.
________________________
Deal now.
Player 2 won this round with Queen of Spades vs Two of Hearts
Player 1 has 39 cards. Player 2 has 13 cards.
________________________
Deal now.
Player 2 won this round with Queen of Spades vs Four of Clubs
Player 1 has 38 cards. Player 2 has 14 cards.
________________________
Deal now.
Player 2 won this round with Queen of Spades vs Three of Clubs
Player 1 has 37 cards. Player 2 has 15 cards.
________________________
Deal now.
Player 2 won this round with Queen of Spades vs Five of Clubs
Player 1 has 36 cards. Player 2 has 16 cards.
________________________
Deal now.
Player 2 won this round with Queen of Spades vs Seven of Clubs
Player 1 has 35 cards. Player 2 has 17 cards.
________________________
Deal now.
Player 2 won this round with Queen of Spades vs Eight of Diamonds
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 2 won this round with Queen of Spades vs Seven of Diamonds
Player 1 has 33 cards. Player 2 has 19 cards.
________________________
Deal now.
Player 2 won this round with Queen of Spades vs Nine of Spades
Player 1 has 32 cards. Player 2 has 20 cards.
________________________
Deal now.
Player 2 won this round with Queen of Spades vs Jack of Spades
Player 1 has 31 cards. Player 2 has 21 cards.
________________________
Deal now.
Player 2 won this round with Queen of Spades vs Eight of Spades
Player 1 has 30 cards. Player 2 has 22 cards.
________________________
Deal now.
WAR!
with both getting Queen of Diamonds and Queen of Spades
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 32 cards. Player 2 has 20 cards.
________________________
Deal now.
Player 2 won this round with Jack of Spades vs Eight of Spades
Player 1 has 31 cards. Player 2 has 21 cards.
________________________
Deal now.
Player 1 won this round with Queen of Spades vs Jack of Spades
Player 1 has 32 cards. Player 2 has 20 cards.
________________________
Deal now.
Player 1 won this round with Jack of Spades vs Eight of Spades
Player 1 has 33 cards. Player 2 has 19 cards.
________________________
Deal now.
Player 2 won this round with Nine of Spades vs Eight of Spades
Player 1 has 32 cards. Player 2 has 20 cards.
________________________
Deal now.
Player 1 won this round with Jack of Spades vs Nine of Spades
Player 1 has 33 cards. Player 2 has 19 cards.
________________________
Deal now.
Player 1 won this round with Nine of Spades vs Eight of Spades
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 1 won this round with Eight of Spades vs Seven of Diamonds
Player 1 has 35 cards. Player 2 has 17 cards.
________________________
Deal now.
Player 2 won this round with Eight of Diamonds vs Seven of Diamonds
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
WAR!
with both getting Eight of Spades and Eight of Diamonds
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 36 cards. Player 2 has 16 cards.
________________________
Deal now.
WAR!
with both getting Seven of Diamonds and Seven of Clubs
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 38 cards. Player 2 has 14 cards.
________________________
Deal now.
Player 1 won this round with Five of Clubs vs Three of Clubs
Player 1 has 39 cards. Player 2 has 13 cards.
________________________
Deal now.
Player 2 won this round with Four of Clubs vs Three of Clubs
Player 1 has 38 cards. Player 2 has 14 cards.
________________________
Deal now.
Player 1 won this round with Five of Clubs vs Four of Clubs
Player 1 has 39 cards. Player 2 has 13 cards.
________________________
Deal now.
Player 1 won this round with Four of Clubs vs Three of Clubs
Player 1 has 40 cards. Player 2 has 12 cards.
________________________
Deal now.
Player 1 won this round with Three of Clubs vs Two of Hearts
Player 1 has 41 cards. Player 2 has 11 cards.
________________________
Deal now.
Player 2 won this round with Three of Spades vs Two of Hearts
Player 1 has 40 cards. Player 2 has 12 cards.
________________________
Deal now.
WAR!
with both getting Three of Clubs and Three of Spades
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 42 cards. Player 2 has 10 cards.
________________________
Deal now.
WAR!
with both getting Two of Hearts and Two of Diamonds
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 44 cards. Player 2 has 8 cards.
________________________
Deal now.
Player 2 won this round with King of Diamonds vs Two of Clubs
Player 1 has 43 cards. Player 2 has 9 cards.
________________________
Deal now.
Player 2 won this round with King of Diamonds vs Two of Diamonds
Player 1 has 42 cards. Player 2 has 10 cards.
________________________
Deal now.
Player 2 won this round with King of Diamonds vs Three of Spades
Player 1 has 41 cards. Player 2 has 11 cards.
________________________
Deal now.
Player 2 won this round with King of Diamonds vs Two of Hearts
Player 1 has 40 cards. Player 2 has 12 cards.
________________________
Deal now.
Player 2 won this round with King of Diamonds vs Four of Clubs
Player 1 has 39 cards. Player 2 has 13 cards.
________________________
Deal now.
Player 2 won this round with King of Diamonds vs Three of Clubs
Player 1 has 38 cards. Player 2 has 14 cards.
________________________
Deal now.
Player 2 won this round with King of Diamonds vs Five of Clubs
Player 1 has 37 cards. Player 2 has 15 cards.
________________________
Deal now.
Player 2 won this round with King of Diamonds vs Seven of Clubs
Player 1 has 36 cards. Player 2 has 16 cards.
________________________
Deal now.
Player 2 won this round with King of Diamonds vs Eight of Diamonds
Player 1 has 35 cards. Player 2 has 17 cards.
________________________
Deal now.
Player 2 won this round with King of Diamonds vs Seven of Diamonds
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 2 won this round with King of Diamonds vs Nine of Spades
Player 1 has 33 cards. Player 2 has 19 cards.
________________________
Deal now.
Player 2 won this round with King of Diamonds vs Eight of Spades
Player 1 has 32 cards. Player 2 has 20 cards.
________________________
Deal now.
Player 2 won this round with King of Diamonds vs Jack of Spades
Player 1 has 31 cards. Player 2 has 21 cards.
________________________
Deal now.
Player 2 won this round with King of Diamonds vs Queen of Spades
Player 1 has 30 cards. Player 2 has 22 cards.
________________________
Deal now.
WAR!
with both getting King of Spades and King of Diamonds
Draw out another pair of cards.
War again!
Your cards were King of Spades and King of Diamonds.
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 6.
Player 1 has 33 cards. Player 2 has 19 cards.
________________________
Deal now.
Player 1 won this round with Jack of Spades vs Eight of Spades
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 2 won this round with Nine of Spades vs Eight of Spades
Player 1 has 33 cards. Player 2 has 19 cards.
________________________
Deal now.
Player 1 won this round with Jack of Spades vs Nine of Spades
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 1 won this round with Nine of Spades vs Eight of Spades
Player 1 has 35 cards. Player 2 has 17 cards.
________________________
Deal now.
Player 1 won this round with Eight of Spades vs Seven of Diamonds
Player 1 has 36 cards. Player 2 has 16 cards.
________________________
Deal now.
Player 2 won this round with Eight of Diamonds vs Seven of Diamonds
Player 1 has 35 cards. Player 2 has 17 cards.
________________________
Deal now.
WAR!
with both getting Eight of Spades and Eight of Diamonds
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 37 cards. Player 2 has 15 cards.
________________________
Deal now.
WAR!
with both getting Seven of Diamonds and Seven of Clubs
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 39 cards. Player 2 has 13 cards.
________________________
Deal now.
Player 1 won this round with Five of Clubs vs Three of Clubs
Player 1 has 40 cards. Player 2 has 12 cards.
________________________
Deal now.
Player 2 won this round with Four of Clubs vs Three of Clubs
Player 1 has 39 cards. Player 2 has 13 cards.
________________________
Deal now.
Player 1 won this round with Five of Clubs vs Four of Clubs
Player 1 has 40 cards. Player 2 has 12 cards.
________________________
Deal now.
Player 1 won this round with Four of Clubs vs Three of Clubs
Player 1 has 41 cards. Player 2 has 11 cards.
________________________
Deal now.
Player 1 won this round with Three of Clubs vs Two of Hearts
Player 1 has 42 cards. Player 2 has 10 cards.
________________________
Deal now.
Player 2 won this round with Three of Spades vs Two of Hearts
Player 1 has 41 cards. Player 2 has 11 cards.
________________________
Deal now.
WAR!
with both getting Three of Clubs and Three of Spades
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 43 cards. Player 2 has 9 cards.
________________________
Deal now.
WAR!
with both getting Two of Hearts and Two of Diamonds
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 45 cards. Player 2 has 7 cards.
________________________
Deal now.
Player 2 won this round with Ten of Diamonds vs Two of Clubs
Player 1 has 44 cards. Player 2 has 8 cards.
________________________
Deal now.
Player 2 won this round with Ten of Diamonds vs Two of Diamonds
Player 1 has 43 cards. Player 2 has 9 cards.
________________________
Deal now.
Player 2 won this round with Ten of Diamonds vs Three of Spades
Player 1 has 42 cards. Player 2 has 10 cards.
________________________
Deal now.
Player 2 won this round with Ten of Diamonds vs Two of Hearts
Player 1 has 41 cards. Player 2 has 11 cards.
________________________
Deal now.
Player 2 won this round with Ten of Diamonds vs Four of Clubs
Player 1 has 40 cards. Player 2 has 12 cards.
________________________
Deal now.
Player 2 won this round with Ten of Diamonds vs Three of Clubs
Player 1 has 39 cards. Player 2 has 13 cards.
________________________
Deal now.
Player 2 won this round with Ten of Diamonds vs Five of Clubs
Player 1 has 38 cards. Player 2 has 14 cards.
________________________
Deal now.
Player 2 won this round with Ten of Diamonds vs Seven of Clubs
Player 1 has 37 cards. Player 2 has 15 cards.
________________________
Deal now.
Player 2 won this round with Ten of Diamonds vs Eight of Diamonds
Player 1 has 36 cards. Player 2 has 16 cards.
________________________
Deal now.
Player 2 won this round with Ten of Diamonds vs Seven of Diamonds
Player 1 has 35 cards. Player 2 has 17 cards.
________________________
Deal now.
Player 2 won this round with Ten of Diamonds vs Nine of Spades
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 2 won this round with Ten of Diamonds vs Eight of Spades
Player 1 has 33 cards. Player 2 has 19 cards.
________________________
Deal now.
Player 1 won this round with Jack of Spades vs Ten of Diamonds
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 1 won this round with Ten of Diamonds vs Eight of Spades
Player 1 has 35 cards. Player 2 has 17 cards.
________________________
Deal now.
Player 2 won this round with Nine of Spades vs Eight of Spades
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 1 won this round with Ten of Diamonds vs Nine of Spades
Player 1 has 35 cards. Player 2 has 17 cards.
________________________
Deal now.
Player 1 won this round with Nine of Spades vs Eight of Spades
Player 1 has 36 cards. Player 2 has 16 cards.
________________________
Deal now.
Player 1 won this round with Eight of Spades vs Seven of Diamonds
Player 1 has 37 cards. Player 2 has 15 cards.
________________________
Deal now.
Player 2 won this round with Eight of Diamonds vs Seven of Diamonds
Player 1 has 36 cards. Player 2 has 16 cards.
________________________
Deal now.
WAR!
with both getting Eight of Spades and Eight of Diamonds
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 38 cards. Player 2 has 14 cards.
________________________
Deal now.
WAR!
with both getting Seven of Diamonds and Seven of Clubs
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 40 cards. Player 2 has 12 cards.
________________________
Deal now.
Player 1 won this round with Five of Clubs vs Three of Clubs
Player 1 has 41 cards. Player 2 has 11 cards.
________________________
Deal now.
Player 2 won this round with Four of Clubs vs Three of Clubs
Player 1 has 40 cards. Player 2 has 12 cards.
________________________
Deal now.
Player 1 won this round with Five of Clubs vs Four of Clubs
Player 1 has 41 cards. Player 2 has 11 cards.
________________________
Deal now.
Player 1 won this round with Four of Clubs vs Three of Clubs
Player 1 has 42 cards. Player 2 has 10 cards.
________________________
Deal now.
Player 1 won this round with Three of Clubs vs Two of Hearts
Player 1 has 43 cards. Player 2 has 9 cards.
________________________
Deal now.
Player 2 won this round with Three of Spades vs Two of Hearts
Player 1 has 42 cards. Player 2 has 10 cards.
________________________
Deal now.
WAR!
with both getting Three of Clubs and Three of Spades
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 44 cards. Player 2 has 8 cards.
________________________
Deal now.
WAR!
with both getting Two of Hearts and Two of Diamonds
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 46 cards. Player 2 has 6 cards.
________________________
Deal now.
WAR!
with both getting Two of Clubs and Two of Spades
Draw out another pair of cards.
Player 2 won the war! He gets all the cards 4
Player 1 has 44 cards. Player 2 has 8 cards.
________________________
Deal now.
Player 2 won this round with Six of Spades vs Three of Spades
Player 1 has 43 cards. Player 2 has 9 cards.
________________________
Deal now.
Player 2 won this round with Six of Spades vs Two of Hearts
Player 1 has 42 cards. Player 2 has 10 cards.
________________________
Deal now.
Player 2 won this round with Six of Spades vs Four of Clubs
Player 1 has 41 cards. Player 2 has 11 cards.
________________________
Deal now.
Player 2 won this round with Six of Spades vs Three of Clubs
Player 1 has 40 cards. Player 2 has 12 cards.
________________________
Deal now.
Player 2 won this round with Six of Spades vs Five of Clubs
Player 1 has 39 cards. Player 2 has 13 cards.
________________________
Deal now.
Player 1 won this round with Seven of Clubs vs Six of Spades
Player 1 has 40 cards. Player 2 has 12 cards.
________________________
Deal now.
Player 1 won this round with Six of Spades vs Five of Clubs
Player 1 has 41 cards. Player 2 has 11 cards.
________________________
Deal now.
Player 1 won this round with Five of Clubs vs Three of Clubs
Player 1 has 42 cards. Player 2 has 10 cards.
________________________
Deal now.
Player 2 won this round with Four of Clubs vs Three of Clubs
Player 1 has 41 cards. Player 2 has 11 cards.
________________________
Deal now.
Player 1 won this round with Five of Clubs vs Four of Clubs
Player 1 has 42 cards. Player 2 has 10 cards.
________________________
Deal now.
Player 1 won this round with Four of Clubs vs Three of Clubs
Player 1 has 43 cards. Player 2 has 9 cards.
________________________
Deal now.
Player 1 won this round with Three of Clubs vs Two of Hearts
Player 1 has 44 cards. Player 2 has 8 cards.
________________________
Deal now.
Player 2 won this round with Three of Spades vs Two of Hearts
Player 1 has 43 cards. Player 2 has 9 cards.
________________________
Deal now.
WAR!
with both getting Three of Clubs and Three of Spades
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 45 cards. Player 2 has 7 cards.
________________________
Deal now.
WAR!
with both getting Two of Hearts and Two of Spades
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 47 cards. Player 2 has 5 cards.
________________________
Deal now.
WAR!
with both getting Two of Diamonds and Two of Clubs
Draw out another pair of cards.
Player 2 won the war! He gets all the cards 4
Player 1 has 45 cards. Player 2 has 7 cards.
________________________
Deal now.
Player 2 won this round with Nine of Diamonds vs Three of Spades
Player 1 has 44 cards. Player 2 has 8 cards.
________________________
Deal now.
Player 2 won this round with Nine of Diamonds vs Two of Hearts
Player 1 has 43 cards. Player 2 has 9 cards.
________________________
Deal now.
Player 2 won this round with Nine of Diamonds vs Four of Clubs
Player 1 has 42 cards. Player 2 has 10 cards.
________________________
Deal now.
Player 2 won this round with Nine of Diamonds vs Three of Clubs
Player 1 has 41 cards. Player 2 has 11 cards.
________________________
Deal now.
Player 2 won this round with Nine of Diamonds vs Five of Clubs
Player 1 has 40 cards. Player 2 has 12 cards.
________________________
Deal now.
Player 2 won this round with Nine of Diamonds vs Six of Spades
Player 1 has 39 cards. Player 2 has 13 cards.
________________________
Deal now.
Player 2 won this round with Nine of Diamonds vs Seven of Clubs
Player 1 has 38 cards. Player 2 has 14 cards.
________________________
Deal now.
Player 2 won this round with Nine of Diamonds vs Eight of Diamonds
Player 1 has 37 cards. Player 2 has 15 cards.
________________________
Deal now.
Player 2 won this round with Nine of Diamonds vs Seven of Diamonds
Player 1 has 36 cards. Player 2 has 16 cards.
________________________
Deal now.
WAR!
with both getting Nine of Spades and Nine of Diamonds
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 38 cards. Player 2 has 14 cards.
________________________
Deal now.
Player 2 won this round with Eight of Diamonds vs Seven of Diamonds
Player 1 has 37 cards. Player 2 has 15 cards.
________________________
Deal now.
Player 1 won this round with Nine of Diamonds vs Eight of Diamonds
Player 1 has 38 cards. Player 2 has 14 cards.
________________________
Deal now.
Player 1 won this round with Eight of Diamonds vs Seven of Diamonds
Player 1 has 39 cards. Player 2 has 13 cards.
________________________
Deal now.
WAR!
with both getting Seven of Diamonds and Seven of Clubs
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 41 cards. Player 2 has 11 cards.
________________________
Deal now.
Player 1 won this round with Six of Spades vs Five of Clubs
Player 1 has 42 cards. Player 2 has 10 cards.
________________________
Deal now.
Player 1 won this round with Five of Clubs vs Three of Clubs
Player 1 has 43 cards. Player 2 has 9 cards.
________________________
Deal now.
Player 2 won this round with Four of Clubs vs Three of Clubs
Player 1 has 42 cards. Player 2 has 10 cards.
________________________
Deal now.
Player 1 won this round with Five of Clubs vs Four of Clubs
Player 1 has 43 cards. Player 2 has 9 cards.
________________________
Deal now.
Player 1 won this round with Four of Clubs vs Three of Clubs
Player 1 has 44 cards. Player 2 has 8 cards.
________________________
Deal now.
Player 1 won this round with Three of Clubs vs Two of Hearts
Player 1 has 45 cards. Player 2 has 7 cards.
________________________
Deal now.
Player 2 won this round with Three of Spades vs Two of Hearts
Player 1 has 44 cards. Player 2 has 8 cards.
________________________
Deal now.
WAR!
with both getting Three of Clubs and Three of Spades
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 46 cards. Player 2 has 6 cards.
________________________
Deal now.
WAR!
with both getting Two of Hearts and Two of Clubs
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 48 cards. Player 2 has 4 cards.
________________________
Deal now.
WAR!
with both getting Two of Spades and Two of Diamonds
Draw out another pair of cards.
Player 2 won the war! He gets all the cards 4
Player 1 has 46 cards. Player 2 has 6 cards.
________________________
Deal now.
Player 2 won this round with Six of Hearts vs Three of Spades
Player 1 has 45 cards. Player 2 has 7 cards.
________________________
Deal now.
Player 2 won this round with Six of Hearts vs Two of Hearts
Player 1 has 44 cards. Player 2 has 8 cards.
________________________
Deal now.
Player 2 won this round with Six of Hearts vs Four of Clubs
Player 1 has 43 cards. Player 2 has 9 cards.
________________________
Deal now.
Player 2 won this round with Six of Hearts vs Three of Clubs
Player 1 has 42 cards. Player 2 has 10 cards.
________________________
Deal now.
Player 2 won this round with Six of Hearts vs Five of Clubs
Player 1 has 41 cards. Player 2 has 11 cards.
________________________
Deal now.
WAR!
with both getting Six of Spades and Six of Hearts
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 43 cards. Player 2 has 9 cards.
________________________
Deal now.
Player 1 won this round with Five of Clubs vs Three of Clubs
Player 1 has 44 cards. Player 2 has 8 cards.
________________________
Deal now.
Player 2 won this round with Four of Clubs vs Three of Clubs
Player 1 has 43 cards. Player 2 has 9 cards.
________________________
Deal now.
Player 1 won this round with Five of Clubs vs Four of Clubs
Player 1 has 44 cards. Player 2 has 8 cards.
________________________
Deal now.
Player 1 won this round with Four of Clubs vs Three of Clubs
Player 1 has 45 cards. Player 2 has 7 cards.
________________________
Deal now.
Player 1 won this round with Three of Clubs vs Two of Hearts
Player 1 has 46 cards. Player 2 has 6 cards.
________________________
Deal now.
Player 2 won this round with Three of Spades vs Two of Hearts
Player 1 has 45 cards. Player 2 has 7 cards.
________________________
Deal now.
WAR!
with both getting Three of Clubs and Three of Spades
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 47 cards. Player 2 has 5 cards.
________________________
Deal now.
WAR!
with both getting Two of Hearts and Two of Diamonds
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 49 cards. Player 2 has 3 cards.
________________________
Deal now.
WAR!
with both getting Two of Clubs and Two of Spades
Draw out another pair of cards.
Player 2 won the war! He gets all the cards 4
Player 1 has 47 cards. Player 2 has 5 cards.
________________________
Deal now.
WAR!
with both getting Three of Spades and Three of Hearts
Draw out another pair of cards.
War again!
Your cards were Three of Spades and Three of Hearts.
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 6.
Player 1 has 50 cards. Player 2 has 2 cards.
________________________
Deal now.
WAR!
with both getting Two of Diamonds and Two of Clubs
Draw out another pair of cards.
Player 2 won the war! He gets all the cards 4
Player 1 has 48 cards. Player 2 has 4 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Three of Hearts
Player 1 has 47 cards. Player 2 has 5 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Four of Clubs
Player 1 has 46 cards. Player 2 has 6 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Two of Hearts
Player 1 has 45 cards. Player 2 has 7 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Three of Spades
Player 1 has 44 cards. Player 2 has 8 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Three of Clubs
Player 1 has 43 cards. Player 2 has 9 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Five of Clubs
Player 1 has 42 cards. Player 2 has 10 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Six of Hearts
Player 1 has 41 cards. Player 2 has 11 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Seven of Clubs
Player 1 has 40 cards. Player 2 has 12 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Six of Spades
Player 1 has 39 cards. Player 2 has 13 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Eight of Diamonds
Player 1 has 38 cards. Player 2 has 14 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Seven of Diamonds
Player 1 has 37 cards. Player 2 has 15 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Nine of Diamonds
Player 1 has 36 cards. Player 2 has 16 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Eight of Spades
Player 1 has 35 cards. Player 2 has 17 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Nine of Spades
Player 1 has 34 cards. Player 2 has 18 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Ten of Diamonds
Player 1 has 33 cards. Player 2 has 19 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Jack of Spades
Player 1 has 32 cards. Player 2 has 20 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Queen of Spades
Player 1 has 31 cards. Player 2 has 21 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs King of Diamonds
Player 1 has 30 cards. Player 2 has 22 cards.
________________________
Deal now.
WAR!
with both getting Ace of Hearts and Ace of Spades
Draw out another pair of cards.
Player 2 won the war! He gets all the cards 4
Player 1 has 28 cards. Player 2 has 24 cards.
________________________
Deal now.
WAR!
with both getting King of Spades and King of Diamonds
Draw out another pair of cards.
Player 2 won the war! He gets all the cards 4
Player 1 has 26 cards. Player 2 has 26 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Jack of Clubs
Player 1 has 25 cards. Player 2 has 27 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Nine of Clubs
Player 1 has 24 cards. Player 2 has 28 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Jack of Hearts
Player 1 has 23 cards. Player 2 has 29 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Six of Diamonds
Player 1 has 22 cards. Player 2 has 30 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Ten of Clubs
Player 1 has 21 cards. Player 2 has 31 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Four of Spades
Player 1 has 20 cards. Player 2 has 32 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Ten of Hearts
Player 1 has 19 cards. Player 2 has 33 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Five of Hearts
Player 1 has 18 cards. Player 2 has 34 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Five of Spades
Player 1 has 17 cards. Player 2 has 35 cards.
________________________
Deal now.
Player 2 won this round with Ace of Spades vs Nine of Hearts
Player 1 has 16 cards. Player 2 has 36 cards.
________________________
Deal now.
WAR!
with both getting Ace of Diamonds and Ace of Spades
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 18 cards. Player 2 has 34 cards.
________________________
Deal now.
Player 1 won this round with Nine of Hearts vs Five of Spades
Player 1 has 19 cards. Player 2 has 33 cards.
________________________
Deal now.
WAR!
with both getting Five of Spades and Five of Hearts
Draw out another pair of cards.
Player 2 won the war! He gets all the cards 4
Player 1 has 17 cards. Player 2 has 35 cards.
________________________
Deal now.
Player 1 won this round with Ace of Spades vs Ten of Hearts
Player 1 has 18 cards. Player 2 has 34 cards.
________________________
Deal now.
Player 1 won this round with Ten of Hearts vs Five of Hearts
Player 1 has 19 cards. Player 2 has 33 cards.
________________________
Deal now.
Player 2 won this round with Nine of Hearts vs Five of Hearts
Player 1 has 18 cards. Player 2 has 34 cards.
________________________
Deal now.
Player 1 won this round with Ten of Hearts vs Nine of Hearts
Player 1 has 19 cards. Player 2 has 33 cards.
________________________
Deal now.
Player 1 won this round with Nine of Hearts vs Five of Hearts
Player 1 has 20 cards. Player 2 has 32 cards.
________________________
Deal now.
WAR!
with both getting Five of Hearts and Five of Spades
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 22 cards. Player 2 has 30 cards.
________________________
Deal now.
Player 2 won this round with Ten of Clubs vs Four of Spades
Player 1 has 21 cards. Player 2 has 31 cards.
________________________
Deal now.
Player 2 won this round with Ten of Clubs vs Five of Spades
Player 1 has 20 cards. Player 2 has 32 cards.
________________________
Deal now.
Player 2 won this round with Ten of Clubs vs Nine of Hearts
Player 1 has 19 cards. Player 2 has 33 cards.
________________________
Deal now.
Player 2 won this round with Ten of Clubs vs Five of Hearts
Player 1 has 18 cards. Player 2 has 34 cards.
________________________
Deal now.
WAR!
with both getting Ten of Hearts and Ten of Clubs
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 20 cards. Player 2 has 32 cards.
________________________
Deal now.
Player 2 won this round with Nine of Hearts vs Five of Hearts
Player 1 has 19 cards. Player 2 has 33 cards.
________________________
Deal now.
Player 1 won this round with Ten of Clubs vs Nine of Hearts
Player 1 has 20 cards. Player 2 has 32 cards.
________________________
Deal now.
Player 1 won this round with Nine of Hearts vs Five of Hearts
Player 1 has 21 cards. Player 2 has 31 cards.
________________________
Deal now.
WAR!
with both getting Five of Hearts and Five of Spades
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 23 cards. Player 2 has 29 cards.
________________________
Deal now.
Player 2 won this round with Six of Diamonds vs Four of Spades
Player 1 has 22 cards. Player 2 has 30 cards.
________________________
Deal now.
Player 2 won this round with Six of Diamonds vs Five of Spades
Player 1 has 21 cards. Player 2 has 31 cards.
________________________
Deal now.
Player 1 won this round with Nine of Hearts vs Six of Diamonds
Player 1 has 22 cards. Player 2 has 30 cards.
________________________
Deal now.
Player 1 won this round with Six of Diamonds vs Five of Spades
Player 1 has 23 cards. Player 2 has 29 cards.
________________________
Deal now.
Player 1 won this round with Five of Spades vs Four of Spades
Player 1 has 24 cards. Player 2 has 28 cards.
________________________
Deal now.
Player 2 won this round with Jack of Hearts vs Four of Spades
Player 1 has 23 cards. Player 2 has 29 cards.
________________________
Deal now.
Player 2 won this round with Jack of Hearts vs Five of Spades
Player 1 has 22 cards. Player 2 has 30 cards.
________________________
Deal now.
Player 2 won this round with Jack of Hearts vs Six of Diamonds
Player 1 has 21 cards. Player 2 has 31 cards.
________________________
Deal now.
Player 2 won this round with Jack of Hearts vs Nine of Hearts
Player 1 has 20 cards. Player 2 has 32 cards.
________________________
Deal now.
Player 2 won this round with Jack of Hearts vs Five of Hearts
Player 1 has 19 cards. Player 2 has 33 cards.
________________________
Deal now.
Player 2 won this round with Jack of Hearts vs Ten of Clubs
Player 1 has 18 cards. Player 2 has 34 cards.
________________________
Deal now.
Player 1 won this round with Ace of Spades vs Jack of Hearts
Player 1 has 19 cards. Player 2 has 33 cards.
________________________
Deal now.
Player 1 won this round with Jack of Hearts vs Ten of Clubs
Player 1 has 20 cards. Player 2 has 32 cards.
________________________
Deal now.
Player 1 won this round with Ten of Clubs vs Five of Hearts
Player 1 has 21 cards. Player 2 has 31 cards.
________________________
Deal now.
Player 2 won this round with Nine of Hearts vs Five of Hearts
Player 1 has 20 cards. Player 2 has 32 cards.
________________________
Deal now.
Player 1 won this round with Ten of Clubs vs Nine of Hearts
Player 1 has 21 cards. Player 2 has 31 cards.
________________________
Deal now.
Player 1 won this round with Nine of Hearts vs Five of Hearts
Player 1 has 22 cards. Player 2 has 30 cards.
________________________
Deal now.
Player 2 won this round with Six of Diamonds vs Five of Hearts
Player 1 has 21 cards. Player 2 has 31 cards.
________________________
Deal now.
Player 1 won this round with Nine of Hearts vs Six of Diamonds
Player 1 has 22 cards. Player 2 has 30 cards.
________________________
Deal now.
Player 1 won this round with Six of Diamonds vs Five of Hearts
Player 1 has 23 cards. Player 2 has 29 cards.
________________________
Deal now.
WAR!
with both getting Five of Hearts and Five of Spades
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 25 cards. Player 2 has 27 cards.
________________________
Deal now.
Player 2 won this round with Nine of Clubs vs Four of Spades
Player 1 has 24 cards. Player 2 has 28 cards.
________________________
Deal now.
Player 2 won this round with Nine of Clubs vs Five of Spades
Player 1 has 23 cards. Player 2 has 29 cards.
________________________
Deal now.
Player 2 won this round with Nine of Clubs vs Six of Diamonds
Player 1 has 22 cards. Player 2 has 30 cards.
________________________
Deal now.
Player 2 won this round with Nine of Clubs vs Five of Hearts
Player 1 has 21 cards. Player 2 has 31 cards.
________________________
Deal now.
WAR!
with both getting Nine of Hearts and Nine of Clubs
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 23 cards. Player 2 has 29 cards.
________________________
Deal now.
Player 2 won this round with Six of Diamonds vs Five of Hearts
Player 1 has 22 cards. Player 2 has 30 cards.
________________________
Deal now.
Player 1 won this round with Nine of Clubs vs Six of Diamonds
Player 1 has 23 cards. Player 2 has 29 cards.
________________________
Deal now.
Player 1 won this round with Six of Diamonds vs Five of Hearts
Player 1 has 24 cards. Player 2 has 28 cards.
________________________
Deal now.
WAR!
with both getting Five of Hearts and Five of Spades
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 26 cards. Player 2 has 26 cards.
________________________
Deal now.
Player 2 won this round with Jack of Clubs vs Four of Spades
Player 1 has 25 cards. Player 2 has 27 cards.
________________________
Deal now.
Player 2 won this round with Jack of Clubs vs Five of Spades
Player 1 has 24 cards. Player 2 has 28 cards.
________________________
Deal now.
Player 2 won this round with Jack of Clubs vs Six of Diamonds
Player 1 has 23 cards. Player 2 has 29 cards.
________________________
Deal now.
Player 2 won this round with Jack of Clubs vs Five of Hearts
Player 1 has 22 cards. Player 2 has 30 cards.
________________________
Deal now.
Player 2 won this round with Jack of Clubs vs Nine of Clubs
Player 1 has 21 cards. Player 2 has 31 cards.
________________________
Deal now.
Player 2 won this round with Jack of Clubs vs Ten of Clubs
Player 1 has 20 cards. Player 2 has 32 cards.
________________________
Deal now.
Player 2 won this round with Jack of Clubs vs Nine of Hearts
Player 1 has 19 cards. Player 2 has 33 cards.
________________________
Deal now.
WAR!
with both getting Jack of Hearts and Jack of Clubs
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 21 cards. Player 2 has 31 cards.
________________________
Deal now.
Player 2 won this round with Ten of Clubs vs Nine of Hearts
Player 1 has 20 cards. Player 2 has 32 cards.
________________________
Deal now.
Player 1 won this round with Jack of Clubs vs Ten of Clubs
Player 1 has 21 cards. Player 2 has 31 cards.
________________________
Deal now.
Player 1 won this round with Ten of Clubs vs Nine of Hearts
Player 1 has 22 cards. Player 2 has 30 cards.
________________________
Deal now.
WAR!
with both getting Nine of Hearts and Nine of Clubs
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 24 cards. Player 2 has 28 cards.
________________________
Deal now.
Player 2 won this round with Six of Diamonds vs Five of Hearts
Player 1 has 23 cards. Player 2 has 29 cards.
________________________
Deal now.
Player 1 won this round with Nine of Clubs vs Six of Diamonds
Player 1 has 24 cards. Player 2 has 28 cards.
________________________
Deal now.
Player 1 won this round with Six of Diamonds vs Five of Hearts
Player 1 has 25 cards. Player 2 has 27 cards.
________________________
Deal now.
WAR!
with both getting Five of Hearts and Five of Spades
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 27 cards. Player 2 has 25 cards.
________________________
Deal now.
Player 2 won this round with King of Diamonds vs Four of Spades
Player 1 has 26 cards. Player 2 has 26 cards.
________________________
Deal now.
Player 2 won this round with King of Diamonds vs Five of Spades
Player 1 has 25 cards. Player 2 has 27 cards.
________________________
Deal now.
Player 2 won this round with King of Diamonds vs Six of Diamonds
Player 1 has 24 cards. Player 2 has 28 cards.
________________________
Deal now.
Player 2 won this round with King of Diamonds vs Five of Hearts
Player 1 has 23 cards. Player 2 has 29 cards.
________________________
Deal now.
Player 2 won this round with King of Diamonds vs Nine of Clubs
Player 1 has 22 cards. Player 2 has 30 cards.
________________________
Deal now.
Player 2 won this round with King of Diamonds vs Ten of Clubs
Player 1 has 21 cards. Player 2 has 31 cards.
________________________
Deal now.
Player 2 won this round with King of Diamonds vs Nine of Hearts
Player 1 has 20 cards. Player 2 has 32 cards.
________________________
Deal now.
Player 2 won this round with King of Diamonds vs Jack of Clubs
Player 1 has 19 cards. Player 2 has 33 cards.
________________________
Deal now.
Player 1 won this round with Ace of Spades vs King of Diamonds
Player 1 has 20 cards. Player 2 has 32 cards.
________________________
Deal now.
Player 1 won this round with King of Diamonds vs Jack of Clubs
Player 1 has 21 cards. Player 2 has 31 cards.
________________________
Deal now.
Player 1 won this round with Jack of Clubs vs Nine of Hearts
Player 1 has 22 cards. Player 2 has 30 cards.
________________________
Deal now.
Player 2 won this round with Ten of Clubs vs Nine of Hearts
Player 1 has 21 cards. Player 2 has 31 cards.
________________________
Deal now.
Player 1 won this round with Jack of Clubs vs Ten of Clubs
Player 1 has 22 cards. Player 2 has 30 cards.
________________________
Deal now.
Player 1 won this round with Ten of Clubs vs Nine of Hearts
Player 1 has 23 cards. Player 2 has 29 cards.
________________________
Deal now.
WAR!
with both getting Nine of Hearts and Nine of Clubs
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 25 cards. Player 2 has 27 cards.
________________________
Deal now.
Player 2 won this round with Six of Diamonds vs Five of Hearts
Player 1 has 24 cards. Player 2 has 28 cards.
________________________
Deal now.
Player 1 won this round with Nine of Clubs vs Six of Diamonds
Player 1 has 25 cards. Player 2 has 27 cards.
________________________
Deal now.
Player 1 won this round with Six of Diamonds vs Five of Hearts
Player 1 has 26 cards. Player 2 has 26 cards.
________________________
Deal now.
WAR!
with both getting Five of Hearts and Five of Spades
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 28 cards. Player 2 has 24 cards.
________________________
Deal now.
Player 2 won this round with King of Clubs vs Four of Spades
Player 1 has 27 cards. Player 2 has 25 cards.
________________________
Deal now.
Player 2 won this round with King of Clubs vs Five of Spades
Player 1 has 26 cards. Player 2 has 26 cards.
________________________
Deal now.
Player 2 won this round with King of Clubs vs Six of Diamonds
Player 1 has 25 cards. Player 2 has 27 cards.
________________________
Deal now.
Player 2 won this round with King of Clubs vs Five of Hearts
Player 1 has 24 cards. Player 2 has 28 cards.
________________________
Deal now.
Player 2 won this round with King of Clubs vs Nine of Clubs
Player 1 has 23 cards. Player 2 has 29 cards.
________________________
Deal now.
Player 2 won this round with King of Clubs vs Ten of Clubs
Player 1 has 22 cards. Player 2 has 30 cards.
________________________
Deal now.
Player 2 won this round with King of Clubs vs Nine of Hearts
Player 1 has 21 cards. Player 2 has 31 cards.
________________________
Deal now.
Player 2 won this round with King of Clubs vs Jack of Clubs
Player 1 has 20 cards. Player 2 has 32 cards.
________________________
Deal now.
WAR!
with both getting King of Diamonds and King of Clubs
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 22 cards. Player 2 has 30 cards.
________________________
Deal now.
Player 1 won this round with Jack of Clubs vs Nine of Hearts
Player 1 has 23 cards. Player 2 has 29 cards.
________________________
Deal now.
Player 2 won this round with Ten of Clubs vs Nine of Hearts
Player 1 has 22 cards. Player 2 has 30 cards.
________________________
Deal now.
Player 1 won this round with Jack of Clubs vs Ten of Clubs
Player 1 has 23 cards. Player 2 has 29 cards.
________________________
Deal now.
Player 1 won this round with Ten of Clubs vs Nine of Hearts
Player 1 has 24 cards. Player 2 has 28 cards.
________________________
Deal now.
WAR!
with both getting Nine of Hearts and Nine of Clubs
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 26 cards. Player 2 has 26 cards.
________________________
Deal now.
Player 2 won this round with Six of Diamonds vs Five of Hearts
Player 1 has 25 cards. Player 2 has 27 cards.
________________________
Deal now.
Player 1 won this round with Nine of Clubs vs Six of Diamonds
Player 1 has 26 cards. Player 2 has 26 cards.
________________________
Deal now.
Player 1 won this round with Six of Diamonds vs Five of Hearts
Player 1 has 27 cards. Player 2 has 25 cards.
________________________
Deal now.
WAR!
with both getting Five of Hearts and Five of Spades
Draw out another pair of cards.
Player 1 won the war! He gets all the cards 4.
Player 1 has 29 cards. Player 2 has 23 cards.
________________________
Deal now.
Player 2 won this round with King of Spades vs Four of Spades
Player 1 has 28 cards. Player 2 has 24 cards.
________________________
Deal now.
Player 2 won this round with King of Spades vs Five of Spades
Player 1 has 27 cards. Player 2 has 25 cards.
________________________
Deal now.
Player 2 won this round with King of Spades vs Six of Diamonds
Player 1 has 26 cards. Player 2 has 26 cards.
________________________
Deal now.
Player 2 won this round with King of Spades vs Five of Hearts
Player 1 has 25 cards. Player 2 has 27 cards.
________________________
Deal now.
Player 2 won this round with King of Spades vs Nine of Clubs
Player 1 has 24 cards. Player 2 has 28 cards.
________________________
Deal now.
Player 2 won this round with King of Spades vs Ten of Clubs
Player 1 has 23 cards. Player 2 has 29 cards.
________________________
Deal now.
Player 2 won this round with King of Spades vs Nine of Hearts
Player 1 has 22 cards. Player 2 has 30 cards.
________________________
Deal now.
Player 2 won this round with King of Spades vs Jack of Clubs
Player 1 has 21 cards. Player 2 has 31 cards.
________________________
Deal now.
|
01_120_Python_Basics_Interview_Questions.ipynb | ###Markdown
All the IPython Notebooks in **Data Science Interview Questions** lecture series by **[Dr. Milaan Parmar](https://www.linkedin.com/in/milaanparmar/)** are available @ **[GitHub](https://github.com/milaan9/DataScience_Interview_Questions)** Python Basics ➞ 120 Questions 1. What is Python?Solution- Python is a high-level, interpreted, interactive and object-oriented scripting language. Python is designed to be highly readable. It uses English keywords frequently where as other languages use punctuation, and it has fewer syntactical constructions than other languages. 2. Name some of the features of Python.SolutionFollowing are some of the salient features of python −* It supports functional and structured programming methods as well as OOP.* It can be used as a scripting language or can be compiled to byte-code for building large applications.* It provides very high-level dynamic data types and supports dynamic type checking.* It supports automatic garbage collection.* It can be easily integrated with C, C++, COM, ActiveX, CORBA, and Java. 3. What is the purpose of PYTHONPATH environment variable?Solution- PYTHONPATH - It has a role similar to PATH. This variable tells the Python interpreter where to locate the module files imported into a program. It should include the Python source library directory and the directories containing Python source code. PYTHONPATH is sometimes preset by the Python installer. 4. What is the purpose of PYTHONSTARTUP environment variable?Solution- PYTHONSTARTUP - It contains the path of an initialization file containing Python source code. It is executed every time you start the interpreter. It is named as .pythonrc.py in Unix and it contains commands that load utilities or modify PYTHONPATH. 5. What is the purpose of PYTHONCASEOK environment variable?Solution- PYTHONCASEOK − It is used in Windows to instruct Python to find the first case-insensitive match in an import statement. Set this variable to any value to activate it. 6. What is the purpose of PYTHONHOME environment variable?Solution- PYTHONHOME − It is an alternative module search path. It is usually embedded in the PYTHONSTARTUP or PYTHONPATH directories to make switching module libraries easy. 7. Is python a case sensitive language?Solution- Yes! Python is a case sensitive programming language. 8. What are the supported data types in Python?Solution- Python has five standard data types: 1. Numbers 2. String 3. List 4. Tuple 5. Dictionary 9. What is the output of print `str` if `str = 'Hello World!'`?Solution- It will print complete string. - Output would be `Hello World!` 10. What is the output of print `str[0]` if `str = 'Hello World!'`?Solution- It will print first character of the string. Output would be H. 11. What is the output of print `str[2:5]` if `str = 'Hello World!'`?Solution- It will print characters starting from 3rd to 5th. - Output would be `llo` 12. What is the output of print `str[2:]` if `str = 'Hello World!'`?Solution- It will print characters starting from 3rd character. - Output would be `llo World!` 13. What is the output of print `str * 2` if `str = 'Hello World!'`?Solution- It will print string two times. - Output would be `Hello World!Hello World!` 14. What is the output of print `str + "TEST"` if `str = 'Hello World!'`?Solution- It will print concatenated string. - Output would be `Hello World!TEST` 15. What is the output of print `list` if `list = [ 'abcd', 786 , 2.23, 'john', 70.2 ]`?Solution- It will print complete list. 
- Output would be `['abcd', 786, 2.23, 'john', 70.200000000000003]` 16. What is the output of print `list[0]` if `list = [ 'abcd', 786 , 2.23, 'john', 70.2 ]`?Solution- It will print first element of the list. - Output would be `abcd` 17. What is the output of print `list[1:3]` if `list = [ 'abcd', 786 , 2.23, 'john', 70.2 ]`?Solution- It will print elements starting from 2nd till 3rd. - Output would be `[786, 2.23]` 18. What is the output of print `list[2:]` if `list = [ 'abcd', 786 , 2.23, 'john', 70.2 ]`?Solution- It will print elements starting from 3rd element. - Output would be `[2.23, 'john', 70.200000000000003]` 19. What is the output of print `tinylist * 2` if `tinylist = [123, 'john']`?Solution- It will print list two times. - Output would be `[123, 'john', 123, 'john']` 20. What is the output of print `list1 + list2`, if `list1 = [ 'abcd', 786 , 2.23, 'john', 70.2 ] and ist2 = [123, 'john']`?Solution- It will print concatenated lists. - Output would be `['abcd', 786, 2.23, 'john', 70.2, 123, 'john']` 21. What are tuples in Python?Solution- A tuple is another sequence data type that is similar to the list. - A tuple consists of a number of values separated by commas. - Unlike lists, however, tuples are enclosed within parentheses. 22. What is the difference between tuples and lists in Python?Solution- The main differences between lists and tuples are: - Lists are enclosed in brackets `[ ]` and their elements and size can be changed, while tuples are enclosed in parentheses `( )` and cannot be updated. - Tuples can be thought of as read-only lists. 23. What is the output of print `tuple` if `tuple = ( 'abcd', 786 , 2.23, 'john', 70.2 )`?Solution- It will print complete tuple. - Output would be `('abcd', 786, 2.23, 'john', 70.200000000000003)` 24. What is the output of print `tuple[0]` if `tuple = ( 'abcd', 786 , 2.23, 'john', 70.2 )`?Solution- It will print first element of the tuple. - Output would be `abcd` 25. What is the output of print `tuple[1:3]` if `tuple = ( 'abcd', 786 , 2.23, 'john', 70.2 )`?Solution- It will print elements starting from 2nd till 3rd. - Output would be `(786, 2.23)` 26. What is the output of print `tuple[2:]` if `tuple = ( 'abcd', 786 , 2.23, 'john', 70.2 )`?Solution- It will print elements starting from 3rd element. - Output would be `(2.23, 'john', 70.200000000000003)` 27. What is the output of print `tinytuple * 2` if `tinytuple = (123, 'john')`?Solution- It will print tuple two times. - Output would be `(123, 'john', 123, 'john')` 28. What is the output of print `tuple + tinytuple` if `tuple = ( 'abcd', 786, 2.23, 'john', 70.2 )` and `tinytuple = (123, 'john')`?Solution- It will print concatenated tuples. - Output would be `('abcd', 786, 2.23, 'john', 70.200000000000003, 123, 'john')` 29. What are Python's dictionaries?Solution- Python's dictionaries are kind of hash table type. - They work like associative arrays or hashes found in Perl and consist of key-value pairs. - A dictionary key can be almost any Python type, but are usually numbers or strings. - Values, on the other hand, can be any arbitrary Python object. 30. How will you create a dictionary in python?Solution- Dictionaries are enclosed by curly braces `{ }` and values can be assigned and accessed using square braces `[]`.```pythondict = {}dict['one'] = "This is one"dict[2] = "This is two"tinydict = {'name': 'john','code':6734, 'dept': 'sales'}``` 31. 
How will you get all the keys from the dictionary?Solution- Using `dictionary.keys()` function, we can get all the keys from the dictionary object.```pythonprint dict.keys() Prints all the keys``` 32. How will you get all the values from the dictionary?Solution- Using `dictionary.values()` function, we can get all the values from the dictionary object.```pythonprint dict.values() Prints all the values``` 33. How will you convert a string to an int in python?Solution- `int(x [,base])` - Converts `x` to an integer. `base` specifies the base if `x` is a string. 34. How will you convert a string to a long in python?Solution- `long(x [,base] )` - Converts `x` to a long integer. `base` specifies the base if `x` is a string. 35. How will you convert a string to a float in python?Solution- `float(x)` − Converts `x` to a floating-point number. 36. How will you convert a object to a string in python?Solution- `str(x)` − Converts object `x` to a string representation. 37. How will you convert a object to a regular expression in python?Solution- `repr(x)` − Converts object `x` to an expression string. 38. How will you convert a String to an object in python?Solution- `eval(str)` − Evaluates a string and returns an object. 39. How will you convert a string to a tuple in python?Solution- `tuple(s)` − Converts `s` to a tuple. 40. How will you convert a string to a list in python?Solution- `list(s)` − Converts `s` to a list. 41. How will you convert a string to a set in python?Solution- `set(s)` − Converts `s` to a set. 42. How will you create a dictionary using tuples in python?Solution- `dict(d)` − Creates a dictionary. `d` must be a sequence of (key,value) tuples. 43. How will you convert a string to a frozen set in python?Solution- `frozenset(s)` − Converts `s` to a frozen set. 44. How will you convert an integer to a character in python?Solution- `chr(x)` − Converts an integer to a character. 45. How will you convert an integer to an unicode character in python?Solution- `unichr(x)` − Converts an integer to a Unicode character. 46. How will you convert a single character to its integer value in python?Solution- `ord(x)` − Converts a single character to its integer value. 47. How will you convert an integer to hexadecimal string in python?Solution- `hex(x)` − Converts an integer to a hexadecimal string. 48. How will you convert an integer to octal string in python?Solution- `oct(x)` − Converts an integer to an octal string. 49. What is the purpose of `**` operator?Solution- `**` Exponent − Performs exponential (power) calculation on operators. - `a**b` = 10 to the power 20 if `a = 10` and `b = 20` 50. What is the purpose of `//` operator?Solution- `//` Floor Division − The division of operands where the result is the quotient in which the digits after the decimal point are removed. 51. What is the purpose of `is` operator?Solution- `is` − Evaluates to `True` if the variables on either side of the operator point to the same object and false otherwise. `x` is `y`, here is results in 1 if `id(x)` equals `id(y)`. 52. What is the purpose of `not in` operator?Solution- `not in` − Evaluates to `True` if it does not finds a variable in the specified sequence and false otherwise. `x` not in `y`, here not in results in a 1 if `x` is not a member of sequence `y`. 53. What is the purpose `break` statement in python?Solution- `break` statement − Terminates the loop statement and transfers execution to the statement immediately following the loop. 54. 
What is the purpose `continue` statement in python?Solution- `continue` statement − Causes the loop to skip the remainder of its body and immediately retest its condition prior to reiterating. 55. What is the purpose `pass` statement in python?Solution- `pass` statement − The `pass` statement in Python is used when a statement is required syntactically but you do not want any command or code to execute. 56. How can you pick a random item from a list or tuple?Solution- `choice(seq)` − Returns a random item from a list, tuple, or string. 57. How can you pick a random item from a range?Solution- `randrange ([start,] stop [,step])` − returns a randomly selected element from range(start, stop, step). 58. How can you get a random number in python?Solution- `random()` − returns a random float `r`, such that 0 is less than or equal to `r` and `r` is less than 1. 59. How will you set the starting value in generating random numbers?Solution- `seed([x])` − Sets the integer starting value used in generating random numbers. Call this function before calling any other random module function. Returns `None`. 60. How will you randomizes the items of a list in place?Solution- `shuffle(lst)` − Randomizes the items of a list in place. Returns `None`. 61. How will you capitalizes first letter of string?Solution- `capitalize()` − Capitalizes first letter of string. 62. How will you check in a string that all characters are alphanumeric?Solution- `isalnum()` − Returns `True` if string has at least 1 character and all characters are alphanumeric and `False` otherwise. 63. How will you check in a string that all characters are digits?Solution- `isdigit()` − Returns `True` if string contains only digits and `False` otherwise. 64. How will you check in a string that all characters are in lowercase?Solution- `islower()` − Returns `True` if string has at least 1 cased character and all cased characters are in lowercase and `False` otherwise. 65. How will you check in a string that all characters are numerics?Solution- `isnumeric()` − Returns `True` if a unicode string contains only numeric characters and `False` otherwise. 66. How will you check in a string that all characters are whitespaces?Solution- `isspace()` − Returns `True` if string contains only whitespace characters and `False` otherwise. 67. How will you check in a string that it is properly titlecased?Solution- `istitle()` − Returns `True` if string is properly "titlecased" and `False` otherwise. 68. How will you check in a string that all characters are in uppercase?Solution- `isupper()` − Returns `True` if string has at least one cased character and all cased characters are in uppercase and `False` otherwise. 69. How will you merge elements in a sequence?Solution- `join(seq)` − Merges (concatenates) the string representations of elements in sequence `seq` into a string, with separator string. 70. How will you get the length of the string?Solution- `len(string)` − Returns the length of the string. 71. How will you get a space-padded string with the original string left-justified to a total of width columns?Solution- `ljust(width[, fillchar])` − Returns a space-padded string with the original string left-justified to a total of width columns. 72. How will you convert a string to all lowercase?Solution- `lower()` − Converts all uppercase letters in string to lowercase. 73. How will you remove all leading whitespace in string?Solution- `lstrip()` − Removes all leading whitespace in string. 74. 
How will you get the max alphabetical character from the string?Solution- `max(str)` − Returns the `max` alphabetical character from the string `str`. 75. How will you get the min alphabetical character from the string?Solution- ``min(str)` − Returns the `min` alphabetical character from the string `str`. 76. How will you replaces all occurrences of old substring in string with new string?Solution- `replace(old, new [, max])` − Replaces all occurrences of old in string with new or at most max occurrences if `max` given. 77. How will you remove all leading and trailing whitespace in string?Solution- `strip([chars])` − Performs both `lstrip()` and `rstrip()` on string. 78. How will you change case for all letters in string?Solution- `swapcase()` − Inverts case for all letters in string. 79. How will you get titlecased version of string?Solution- `title()` − Returns "titlecased" version of string, that is, all words begin with uppercase and the rest are lowercase. 80. How will you convert a string to all uppercase?Solution- `upper()` − Converts all lowercase letters in string to uppercase. 81. How will you check in a string that all characters are decimal?Solution- `isdecimal()` − Returns `True` if a unicode string contains only decimal characters and `False` otherwise. 82. What is the difference between `del()` and `remove()` methods of list?Solution- To remove a list element, you can use either the `del` statement if you know exactly which element(s) you are deleting or the `remove()` method if you do not know. 83. What is the output of `len([1, 2, 3])`?Solution- `3` 84. What is the output of `[1, 2, 3] + [4, 5, 6]`?Solution- `[1, 2, 3, 4, 5, 6]` 85. What is the output of `['Hi!'] * 4`?Solution- `['Hi!', 'Hi!', 'Hi!', 'Hi!']` 86. What is the output of 3 in `[1, 2, 3]`?Solution- `True` 87. What is the output of for `x in [1, 2, 3]: print x`?Solution```python123``` 88. What is the output of `L[2]` if `L = [1,2,3]`?Solution- `3`, Offsets start at zero. 89. What is the output of `L[-2]` if `L = [1,2,3]`?Solution- `1`, Negative: count from the right. 90. What is the output of `L[1:]` if `L = [1,2,3]`?Solution- `2, 3`, Slicing fetches sections. 91. How will you compare two lists?Solution- `cmp(list1, list2)` − Compares elements of both lists. 92. How will you get the length of a list?Solution- `len(list)` − Gives the total length of the list. 93. How will you get the max valued item of a list?Solution- `max(list)` − Returns item from the list with max value. 94. How will you get the min valued item of a list?Solution- `min(list)` − Returns item from the list with min value. 95. How will you get the index of an object in a list?Solution- `list.index(obj)` − Returns the lowest index in list that `obj` appears. 96. How will you insert an object at given index in a list?Solution- `list.insert(index, obj)` − Inserts object `obj` into list at offset index. 97. How will you remove last object from a list?Solution`list.pop(obj=list[-1])` − Removes and returns last object or obj from list. 98. How will you remove an object from a list?Solution- `list.remove(obj)` − Removes object `obj` from list. 99. How will you reverse a list?Solution- `list.reverse()` − Reverses objects of list in place. 100. How will you sort a list?Solution- `list.sort([func])` − Sorts objects of list, use compare `func` if given. 101. What is lambda function in python?Solution- `‘lambda’` is a keyword in python which creates an anonymous function. Lambda does not contain block of statements. It does not contain return statements. 
102. What we call a function which is incomplete version of a function?Solution- `Stub`. 103. When a function is defined then the system stores parameters and local variables in an area of memory. What this memory is known as?Solution- `Stack`. 104. A canvas can have a foreground color? (Yes/No)Solution- `Yes`. 105. Is Python platform independent?Solution- No. There are some modules and functions in python that can only run on certain platforms. 106. Do you think Python has a complier?Solution- Yes. Python complier which works automatically so we don’t notice the compiler of python. 107. What are the applications of Python?Solution1. Django (Web framework of Python).2. Micro Frame work such as Flask and Bottle.3. Plone and Django CMS for advanced content Management. 108. What is the basic difference between Python ver 2 and Python ver 3?Solution- Table below explains the difference between Python version 2 and Python version 3.| S.No | Section | Python Version 2 | Python Version 3 | |:-------|:---------------| :------ |:--------|| 1. | Print Function | Print command can be used without parentheses. | Python 3 needs parentheses to print any string. It will raise error without parentheses. | | 2. | Unicode | ASCII str() types and separate Unicode() but there is no byte type code in Python 2. | Unicode (utf-8) and it has two byte classes − Byte, Bytearray S. || 3. | Exceptions | Python 2 accepts both new and old notations of syntax. | Python 3 raises a SyntaxError in turn when we don’t enclose the exception argument in parentheses. || 4. | Comparing Unorderable | It does not raise any error. | It raises ‘TypeError’ as warning if we try to compare unorderable types. | 109. Which programming Language is an implementation of Python programming language designed to run on Java Platform?Solution- `Jython`. (Jython is successor of Jpython.) 110. Is there any double data type in Python?Solution- `No`. 111. Is String in Python are immutable? (Yes/No)Solution- `Yes`. 112. Can `True = False` be possible in Python?Solution- `No`. 113. Which module of python is used to apply the methods related to OS.?Solution- `OS`. 114. When does a new block begin in python?Solution- A block begins when the line is intended by 4 spaces. 115. Write a function in python which detects whether the given two strings are anagrams or not.Solution
###Code
# Two strings are anagrams when they have the same length and contain
# the same characters, so comparing their sorted characters is enough.
def check(a, b):
    if len(a) != len(b):
        return False
    return sorted(a) == sorted(b)
###Output
_____no_output_____
###Markdown
116. Name the python Library used for Machine learning.Solution- Scikit-learn python Library used for Machine learning 117. What does `pass` operation do?Solution- `pass` indicates that nothing is to be done i.e., it signifies a no operation. 118. Name the tools which python uses to find bugs (if any).Solution- `Pylint` and `pychecker`. 119. Write a function to give the sum of all the numbers in list?SolutionSample list − (100, 200, 300, 400, 0, 500)Expected output − 1500
###Code
# Program for the sum of all the numbers in a list −
def sum(numbers):   # note: this shadows Python's built-in sum()
    total = 0
    for num in numbers:
        total += num
    print("Sum of the numbers: ", total)
sum((100, 200, 300, 400, 0, 500))
# We define a function 'sum' with numbers as its parameter.
# Then, in the for loop, we accumulate the sum of all the values in the list.
###Output
Sum of the numbers: 1500
###Markdown
120. Write a program in Python to reverse a string without using inbuilt function reverse string?Solution
###Code
# Reverse a string without using reverse() function
def string_reverse(string):
i = len(string) - 1
print ("The length of string is: ", len(string))
sNew = ''
while i >= 0:
sNew = sNew + str(string[i])
i = i -1
return sNew
print(string_reverse("1tniop"))
# First we declare a variable (sNew) to store the reversed string.
# Then, using a while loop and string indexing (the starting index is len(string) - 1),
# we build the string in reverse. The loop runs while the index is greater than or equal to zero.
# The index is reduced by 1 each time; once it drops below zero, sNew holds the reversed string.
###Output
The length of string is: 6
point1
###Markdown
121. Write a program to test whether the number is in the defined range or not?Solution
###Code
# Program is −
def test_range(num):
if num in range(0, 101):
print("%s is in range"%str(num))
else:
print("%s is not in range"%str(num))
# print("The number is outside the given range.")
test_range(99)
# To test whether a number lies in a particular range we use the `in` operator with range() inside an if/else.
###Output
99 is in range
###Markdown
122. Write a program to calculate number of upper case letters and number of lower case letters?SolutionTest on String: 'The quick Brown Fox'
###Code
# Program is −
def string_test(s):
    d = {"UPPER_CASE": 0, "LOWER_CASE": 0}
    for c in s:
        if c.isupper():
            d["UPPER_CASE"] += 1
        elif c.islower():
            d["LOWER_CASE"] += 1
        else:
            pass
    print("String in testing is: ", s)
    print("Number of Upper Case characters in String: ", d["UPPER_CASE"])
    print("Number of Lower Case characters in String: ", d["LOWER_CASE"])
string_test('The quick Brown Fox')
# We use the string methods .isupper() and .islower() and keep a count for upper and lower case.
# The if/elif conditions tally the total number of upper and lower case characters.
###Output
String in testing is: The quick Brown Fox
Number of Upper Case characters in String:  3
Number of Lower Case characters in String:  13
|
notebooks/bigquery/solutions/b_bqml.ipynb | ###Markdown
Big Query Machine Learning (BQML)**Learning Objectives**- Understand that it is possible to build ML models in Big Query- Understand when this is appropriate- Experience building a model using BQML IntroductionBigQuery is more than just a data warehouse, it also has some ML capabilities baked into it. As of January 2019 it is limited to linear models, but what it gives up in complexity, it gains in ease of use.BQML is a great option when a linear model will suffice, or when you want a quick benchmark to beat, but for more complex models such as neural networks you will need to pull the data out of BigQuery and into an ML Framework like TensorFlow.In this notebook, we will build a naive model using BQML. **This notebook is intended to inspire usage of BQML, we will not focus on model performance.** Set up environment variables and load necessary libraries
###Code
PROJECT = "cloud-training-demos" # Replace with your PROJECT
REGION = "us-central1" # Choose an available region for Cloud MLE
import os
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
!pip freeze | grep google-cloud-bigquery==1.21.0 || pip install google-cloud-bigquery==1.21.0
%load_ext google.cloud.bigquery
###Output
_____no_output_____
###Markdown
Create BigQuery datasetPrior to now we've just been reading an existing BigQuery table; now we're going to create our own, so we need some place to put it. In BigQuery parlance, `Dataset` means a folder for tables. We will take advantage of BigQuery's [Python Client](https://cloud.google.com/bigquery/docs/reference/librariesclient-libraries-install-python) to create the dataset.
###Code
from google.cloud import bigquery
bq = bigquery.Client(project = PROJECT)
dataset = bigquery.Dataset(bq.dataset("bqml_taxifare"))
try:
bq.create_dataset(dataset) # will fail if dataset already exists
print("Dataset created")
except:
print("Dataset already exists")
###Output
_____no_output_____
###Markdown
Create modelTo create a model ([documentation](https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-create))1. Use `CREATE MODEL` and provide a destination table for the resulting model. Alternatively we can use `CREATE OR REPLACE MODEL`, which allows overwriting an existing model.2. Use `OPTIONS` to specify the model type (linear_reg or logistic_reg). There are many more options [we could specify](https://cloud.google.com/bigquery/docs/reference/standard-sql/bigqueryml-syntax-createmodel_option_list), such as regularization and learning rate, but we'll accept the defaults.3. Provide the query which fetches the training data. Have a look at [Step Two of this tutorial](https://cloud.google.com/bigquery/docs/bigqueryml-natality) to see another example.**The query will take about two minutes to complete.**
###Code
%%bigquery --project $PROJECT
CREATE or REPLACE MODEL bqml_taxifare.taxifare_model
OPTIONS(model_type = "linear_reg",
input_label_cols = ["label"]) AS
-- query to fetch training data
SELECT
(tolls_amount + fare_amount) AS label,
pickup_datetime,
pickup_longitude,
pickup_latitude,
dropoff_longitude,
dropoff_latitude
FROM
`nyc-tlc.yellow.trips`
WHERE
-- Clean Data
trip_distance > 0
AND passenger_count > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
-- repeatable 1/5000th sample
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 5000)) = 1
###Output
_____no_output_____
###Markdown
Get training statisticsBecause the query uses a `CREATE MODEL` statement to create a model, you do not see query results. The output is an empty string.To get the training results we use the [`ML.TRAINING_INFO`](https://cloud.google.com/bigquery/docs/reference/standard-sql/bigqueryml-syntax-train) function.Have a look at [Step Three and Four of this tutorial](https://cloud.google.com/bigquery/docs/bigqueryml-natality) to see a similar example.
###Code
%%bigquery --project $PROJECT
SELECT
*
FROM
ML.TRAINING_INFO(MODEL `bqml_taxifare.taxifare_model`)
###Output
_____no_output_____
###Markdown
'eval_loss' is reported as mean squared error, so our RMSE is **8.29**. Your results may vary. PredictTo use our model to make predictions, we use `ML.PREDICT`. Let's use the `taxifare_model` you trained above to infer the cost of a taxi ride that occurs at 10:00 am on January 3rd, 2014 going from the Google Office in New York (latitude: 40.7434, longitude: -74.0080) to the JFK airport (latitude: 40.6413, longitude: -73.7781)Have a look at [Step Five of this tutorial](https://cloud.google.com/bigquery/docs/bigqueryml-natality) to see another example.
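As a quick aside, the RMSE quoted above can also be derived directly from the training statistics. A minimal sketch, assuming the `eval_loss` column described above and reusing the `bq` client created earlier; this cell is illustrative and not part of the original notebook flow:

```python
# Hedged sketch: RMSE = sqrt(eval_loss), taking the best eval_loss over iterations.
# Assumes `bq` is the bigquery.Client created earlier in this notebook.
rmse_query = """
SELECT SQRT(MIN(eval_loss)) AS rmse
FROM ML.TRAINING_INFO(MODEL `bqml_taxifare.taxifare_model`)
"""
print(bq.query(rmse_query).to_dataframe())
```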
###Code
%%bigquery --project $PROJECT
#standardSQL
SELECT
predicted_label
FROM
ML.PREDICT(MODEL `bqml_taxifare.taxifare_model`,
(
SELECT
TIMESTAMP "2014-01-03 10:00:00" as pickup_datetime,
-74.0080 as pickup_longitude,
40.7434 as pickup_latitude,
-73.7781 as dropoff_longitude,
40.6413 as dropoff_latitude
))
###Output
_____no_output_____
###Markdown
Big Query Machine Learning (BQML)**Learning Objectives**- Understand that it is possible to build ML models in Big Query- Understand when this is appropriate- Experience building a model using BQML IntroductionBigQuery is more than just a data warehouse, it also has some ML capabilities baked into it. BQML is a great option when a linear model will suffice, or when you want a quick benchmark to beat, but for more complex models such as neural networks you will need to pull the data out of BigQuery and into an ML Framework like TensorFlow.In this notebook, we will build a naive model using BQML. **This notebook is intended to inspire usage of BQML, we will not focus on model performance.** Set up environment variables and load necessary libraries
###Code
from google import api_core
from google.cloud import bigquery
PROJECT = !gcloud config get-value project
PROJECT = PROJECT[0]
%env PROJECT=$PROJECT
###Output
_____no_output_____
###Markdown
Create BigQuery datasetPrior to now we've just been reading an existing BigQuery table; now we're going to create our own, so we need some place to put it. In BigQuery parlance, `Dataset` means a folder for tables. We will take advantage of BigQuery's [Python Client](https://cloud.google.com/bigquery/docs/reference/librariesclient-libraries-install-python) to create the dataset.
###Code
bq = bigquery.Client(project=PROJECT)
dataset = bigquery.Dataset(bq.dataset("bqml_taxifare"))
try:
bq.create_dataset(dataset) # will fail if dataset already exists
print("Dataset created")
except api_core.exceptions.Conflict:
print("Dataset already exists")
###Output
_____no_output_____
###Markdown
Create modelTo create a model ([documentation](https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-create))1. Use `CREATE MODEL` and provide a destination table for resulting model. Alternatively we can use `CREATE OR REPLACE MODEL` which allows overwriting an existing model.2. Use `OPTIONS` to specify the model type (linear_reg or logistic_reg). There are many more options [we could specify](https://cloud.google.com/bigquery/docs/reference/standard-sql/bigqueryml-syntax-createmodel_option_list), such as regularization and learning rate, but we'll accept the defaults.3. Provide the query which fetches the training data Have a look at [Step Two of this tutorial](https://cloud.google.com/bigquery/docs/bigqueryml-natality) to see another example.**The query will take about two minutes to complete**
###Code
%%bigquery --project $PROJECT
CREATE or REPLACE MODEL bqml_taxifare.taxifare_model
OPTIONS(model_type = "linear_reg",
input_label_cols = ["label"]) AS
-- query to fetch training data
SELECT
(tolls_amount + fare_amount) AS label,
pickup_datetime,
pickup_longitude,
pickup_latitude,
dropoff_longitude,
dropoff_latitude
FROM
`nyc-tlc.yellow.trips`
WHERE
-- Clean Data
trip_distance > 0
AND passenger_count > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
-- repeatable 1/5000th sample
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 5000)) = 1
###Output
_____no_output_____
###Markdown
Get training statisticsBecause the query uses a `CREATE MODEL` statement to create a table, you do not see query results. The output is an empty string.To get the training results we use the [`ML.TRAINING_INFO`](https://cloud.google.com/bigquery/docs/reference/standard-sql/bigqueryml-syntax-train) function.Have a look at [Step Three and Four of this tutorial](https://cloud.google.com/bigquery/docs/bigqueryml-natality) to see a similar example.
###Code
%%bigquery --project $PROJECT
SELECT
*
FROM
ML.TRAINING_INFO(MODEL `bqml_taxifare.taxifare_model`)
###Output
_____no_output_____
###Markdown
'eval_loss' is reported as mean squared error, so our RMSE is **8.29**. Your results may vary. PredictTo use our model to make predictions, we use `ML.PREDICT`. Let's use the `taxifare_model` you trained above to infer the cost of a taxi ride that occurs at 10:00 am on January 3rd, 2014 going from the Google Office in New York (latitude: 40.7434, longitude: -74.0080) to the JFK airport (latitude: 40.6413, longitude: -73.7781)Have a look at [Step Five of this tutorial](https://cloud.google.com/bigquery/docs/bigqueryml-natality) to see another example.
###Code
%%bigquery --project $PROJECT
#standardSQL
SELECT
predicted_label
FROM
ML.PREDICT(MODEL `bqml_taxifare.taxifare_model`,
(
SELECT
TIMESTAMP "2014-01-03 10:00:00" as pickup_datetime,
-74.0080 as pickup_longitude,
40.7434 as pickup_latitude,
-73.7781 as dropoff_longitude,
40.6413 as dropoff_latitude
))
###Output
_____no_output_____
###Markdown
Big Query Machine Learning (BQML)**Learning Objectives**- Understand that it is possible to build ML models in Big Query- Understand when this is appropriate- Experience building a model using BQML IntroductionBigQuery is more than just a data warehouse, it also has some ML capabilities baked into it. As of January 2019 it is limited to linear models, but what it gives up in complexity, it gains in ease of use.BQML is a great option when a linear model will suffice, or when you want a quick benchmark to beat, but for more complex models such as neural networks you will need to pull the data out of BigQuery and into an ML Framework like TensorFlow.In this notebook, we will build a naive model using BQML. **This notebook is intended to inspire usage of BQML, we will not focus on model performance.** Set up environment variables and load necessary libraries
###Code
from google import api_core
from google.cloud import bigquery
PROJECT = !gcloud config get-value project
PROJECT = PROJECT[0]
%env PROJECT=$PROJECT
###Output
_____no_output_____
###Markdown
Create BigQuery datasetPrior to now we've just been reading an existing BigQuery table; now we're going to create our own, so we need some place to put it. In BigQuery parlance, `Dataset` means a folder for tables. We will take advantage of BigQuery's [Python Client](https://cloud.google.com/bigquery/docs/reference/librariesclient-libraries-install-python) to create the dataset.
###Code
bq = bigquery.Client(project=PROJECT)
dataset = bigquery.Dataset(bq.dataset("bqml_taxifare"))
try:
bq.create_dataset(dataset) # will fail if dataset already exists
print("Dataset created")
except api_core.exceptions.Conflict:
print("Dataset already exists")
###Output
_____no_output_____
###Markdown
Create modelTo create a model ([documentation](https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-create))1. Use `CREATE MODEL` and provide a destination table for resulting model. Alternatively we can use `CREATE OR REPLACE MODEL` which allows overwriting an existing model.2. Use `OPTIONS` to specify the model type (linear_reg or logistic_reg). There are many more options [we could specify](https://cloud.google.com/bigquery/docs/reference/standard-sql/bigqueryml-syntax-createmodel_option_list), such as regularization and learning rate, but we'll accept the defaults.3. Provide the query which fetches the training data Have a look at [Step Two of this tutorial](https://cloud.google.com/bigquery/docs/bigqueryml-natality) to see another example.**The query will take about two minutes to complete**
###Code
%%bigquery --project $PROJECT
CREATE or REPLACE MODEL bqml_taxifare.taxifare_model
OPTIONS(model_type = "linear_reg",
input_label_cols = ["label"]) AS
-- query to fetch training data
SELECT
(tolls_amount + fare_amount) AS label,
pickup_datetime,
pickup_longitude,
pickup_latitude,
dropoff_longitude,
dropoff_latitude
FROM
`nyc-tlc.yellow.trips`
WHERE
-- Clean Data
trip_distance > 0
AND passenger_count > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
-- repeatable 1/5000th sample
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 5000)) = 1
###Output
_____no_output_____
###Markdown
Get training statisticsBecause the query uses a `CREATE MODEL` statement to create a table, you do not see query results. The output is an empty string.To get the training results we use the [`ML.TRAINING_INFO`](https://cloud.google.com/bigquery/docs/reference/standard-sql/bigqueryml-syntax-train) function.Have a look at [Step Three and Four of this tutorial](https://cloud.google.com/bigquery/docs/bigqueryml-natality) to see a similar example.
###Code
%%bigquery --project $PROJECT
SELECT
*
FROM
ML.TRAINING_INFO(MODEL `bqml_taxifare.taxifare_model`)
###Output
_____no_output_____
###Markdown
'eval_loss' is reported as mean squared error, so our RMSE is **8.29**. Your results may vary. PredictTo use our model to make predictions, we use `ML.PREDICT`. Let's use the `taxifare_model` you trained above to infer the cost of a taxi ride that occurs at 10:00 am on January 3rd, 2014 going from the Google Office in New York (latitude: 40.7434, longitude: -74.0080) to the JFK airport (latitude: 40.6413, longitude: -73.7781)Have a look at [Step Five of this tutorial](https://cloud.google.com/bigquery/docs/bigqueryml-natality) to see another example.
###Code
%%bigquery --project $PROJECT
#standardSQL
SELECT
predicted_label
FROM
ML.PREDICT(MODEL `bqml_taxifare.taxifare_model`,
(
SELECT
TIMESTAMP "2014-01-03 10:00:00" as pickup_datetime,
-74.0080 as pickup_longitude,
40.7434 as pickup_latitude,
-73.7781 as dropoff_longitude,
40.6413 as dropoff_latitude
))
###Output
_____no_output_____ |
crestdsl-intro.ipynb | ###Markdown
Welcome! This notebook presents some of the code that is described in Chapter 5 of Stefan Klikovits's thesis titled "A Domain-Specific Language Approach to Hybrid CPS Modelling".You can execute the code cells by clicking the little play button above.
###Code
# Import of the CREST library
import crestdsl
# A specific subpackage (e.g. the model)
# can be imported as follows
import crestdsl.model as crest
###Output
_____no_output_____
###Markdown
Most basic resources and entity definition
###Code
# use CREST's domain types to specify the domain
watt = crest.Resource(unit="Watt", domain=crest.REAL)
lumen = crest.Resource(unit="Lumen", domain=crest.INTEGER)
my_lamp = crest.Entity()
my_lamp.in_port = crest.Input(resource=watt, value=100)
my_lamp.out_port = crest.Output(lumen, 0)
my_lamp.on = crest.State()
my_lamp.off = crest.State()
my_lamp.current = my_lamp.off
###Output
_____no_output_____
###Markdown
A very basic entity type
###Code
class MyLamp(crest.Entity):
in_port = crest.Input(resource=watt, value=100)
out_port = crest.Output(watt, 0)
on = crest.State()
off = crest.State()
current = off
my_new_lamp = MyLamp()
my_other_lamp = MyLamp()
###Output
_____no_output_____
###Markdown
An entity with dynamic behaviour
###Code
class DynamicLamp(crest.Entity):
in_port = crest.Input(resource=watt, value=100)
out_port = crest.Output(watt, 0)
on = crest.State()
off = crest.State()
current = off
off_to_on = crest.Transition(source=off, target=on, \
guard=(lambda self: self.in_port.value >= 100))
@crest.transition(source=on, target=off)
def on_to_off(self):
return self.in_port.value < 100
# output = 90 watt output * 15 lumen per watt
output_when_on = crest.Update(state=on, target=out_port, \
function=(lambda self, dt: 90 * 15))
@crest.update(state=off, target=out_port)
def output_when_off(self, dt):
return 0
###Output
_____no_output_____
###Markdown
Subclassing as a form of extension and specialisation
###Code
switch = crest.Resource(unit="lampSwitch", domain=["on", "off"])
class SwitchLamp(DynamicLamp):
switch_input = crest.Input(resource=switch, value="off")
@crest.transition(source="on", target="off")
def switch_off(self): # extend DynamicLamp functionality
return self.switch_input.value == "off"
@crest.transition(source="off", target="on")
def off_to_on(self): # override DynamicLamp functionality
return self.in_port.value >= 100 and \
self.switch_input.value == "on"
###Output
_____no_output_____
###Markdown
Parameterisable entities (with `__init__` constructors)
###Code
factor = crest.Resource(unit="efficiency", domain=crest.REAL)
class GenericLamp(DynamicLamp):
threshold = crest.Local(watt, value=100) # default value
efficiency = crest.Local(factor, 0.75)
def __init__(self, threshold, efficiency=0.75):
# constructor: one mandatory + one optional parameter
self.threshold.value = threshold
self.efficiency.value = efficiency
@crest.transition(source="off", target="on")
def off_to_on(self):
return self.in_port.value >= self.threshold.value
@crest.transition(source="on", target="off")
def on_to_off(self):
return self.in_port.value < self.threshold.value
@crest.update(state="on", target="out_port")
def output_when_on(self, dt):
return self.in_port.value * self.efficiency.value * 15
powerful_lamp = GenericLamp(300)
efficient_lamp = GenericLamp(50, .97)
###Output
_____no_output_____
###Markdown
Compose entities to a system (with subentities)
###Code
class LampComposition(crest.Entity):
# inputs
switch_input = crest.Input(resource=switch, value="off")
in_port = crest.Input(resource=watt, value=100)
# subentities
big_lamp = GenericLamp(300)
small_lamp = GenericLamp(100, .9)
# outputs
big_out = crest.Output(watt, 0)
small_out = crest.Output(watt, 0)
# states
on = crest.State()
off = crest.State()
current = off
# transitions
@crest.transition(source=off, target=on)
def off_to_on(self):
return self.switch_input.value == "on"
@crest.transition(source=on, target=off)
def on_to_off(self):
return self.switch_input.value != "on"
# setting of subentity inputs
@crest.update(state=on, target=small_lamp.in_port)
def set_small_lamp_input_when_on(self, dt):
if self.in_port.value > 100:
return 100
else:
return 0
@crest.update(state=off, target=small_lamp.in_port)
def set_small_lamp_input_when_off(self, dt):
return 0
@crest.update(state=on, target=big_lamp.in_port)
def set_big_lamp_input_when_on(self, dt):
if self.in_port.value < 100:
return 0
else:
return self.in_port.value - 100
@crest.update(state=off, target=big_lamp.in_port)
def set_big_lamp_input_when_off(self, dt):
return 0
# connect subentity output to entity output
@crest.influence(source=big_lamp.out_port, target=big_out)
def forward_big_output(value):
# influences only take one parameter: value
return value
forward_small_output = crest.Influence(
source=small_lamp.out_port, target=small_out)
composition = LampComposition()
###Output
_____no_output_____
###Markdown
Validator
###Code
from crestdsl.model import SystemCheck
# create instance and SystemCheck object
gl = GenericLamp(300, .85)
SystemCheck(gl).check_all() # returns True
gl.current = None # point current state to None
# write to error log: [...] Entity has no current state
SystemCheck(gl).check_all() # returns False
# write to error log: [...] Entity has no current state
SystemCheck(gl).check_all(exit_on_error=True) # AssertionError
###Output
ERROR:root:Problem in check 'check_current_states': Entity has no current state
ERROR:root:Problem in check 'check_current_states': Entity has no current state
|
Part 3 - Classification/Section 16 - Support Vector Machine (SVM)/Python/support_vector_machine.ipynb | ###Markdown
Support Vector Machine (SVM) Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
dataset = pd.read_csv('Social_Network_Ads.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
###Output
_____no_output_____
###Markdown
Splitting the dataset into the Training set and Test set
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
print(X_train)
print(y_train)
print(X_test)
print(y_test)
###Output
[0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 1 0 1 0 0 0 0 0 1 1 0 0 0 0
0 0 1 0 0 0 0 1 0 0 1 0 1 1 0 0 0 1 1 0 0 1 0 0 1 0 1 0 1 0 0 0 0 1 0 0 1
0 0 0 0 1 1 1 0 0 0 1 1 0 1 1 0 0 1 0 0 0 1 0 1 1 1]
###Markdown
Feature Scaling
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
print(X_train)
print(X_test)
###Output
[[-0.80480212 0.50496393]
[-0.01254409 -0.5677824 ]
[-0.30964085 0.1570462 ]
[-0.80480212 0.27301877]
[-0.30964085 -0.5677824 ]
[-1.10189888 -1.43757673]
[-0.70576986 -1.58254245]
[-0.21060859 2.15757314]
[-1.99318916 -0.04590581]
[ 0.8787462 -0.77073441]
[-0.80480212 -0.59677555]
[-1.00286662 -0.42281668]
[-0.11157634 -0.42281668]
[ 0.08648817 0.21503249]
[-1.79512465 0.47597078]
[-0.60673761 1.37475825]
[-0.11157634 0.21503249]
[-1.89415691 0.44697764]
[ 1.67100423 1.75166912]
[-0.30964085 -1.37959044]
[-0.30964085 -0.65476184]
[ 0.8787462 2.15757314]
[ 0.28455268 -0.53878926]
[ 0.8787462 1.02684052]
[-1.49802789 -1.20563157]
[ 1.07681071 2.07059371]
[-1.00286662 0.50496393]
[-0.90383437 0.30201192]
[-0.11157634 -0.21986468]
[-0.60673761 0.47597078]
[-1.6960924 0.53395707]
[-0.11157634 0.27301877]
[ 1.86906873 -0.27785096]
[-0.11157634 -0.48080297]
[-1.39899564 -0.33583725]
[-1.99318916 -0.50979612]
[-1.59706014 0.33100506]
[-0.4086731 -0.77073441]
[-0.70576986 -1.03167271]
[ 1.07681071 -0.97368642]
[-1.10189888 0.53395707]
[ 0.28455268 -0.50979612]
[-1.10189888 0.41798449]
[-0.30964085 -1.43757673]
[ 0.48261718 1.22979253]
[-1.10189888 -0.33583725]
[-0.11157634 0.30201192]
[ 1.37390747 0.59194336]
[-1.20093113 -1.14764529]
[ 1.07681071 0.47597078]
[ 1.86906873 1.51972397]
[-0.4086731 -1.29261101]
[-0.30964085 -0.3648304 ]
[-0.4086731 1.31677196]
[ 2.06713324 0.53395707]
[ 0.68068169 -1.089659 ]
[-0.90383437 0.38899135]
[-1.20093113 0.30201192]
[ 1.07681071 -1.20563157]
[-1.49802789 -1.43757673]
[-0.60673761 -1.49556302]
[ 2.1661655 -0.79972756]
[-1.89415691 0.18603934]
[-0.21060859 0.85288166]
[-1.89415691 -1.26361786]
[ 2.1661655 0.38899135]
[-1.39899564 0.56295021]
[-1.10189888 -0.33583725]
[ 0.18552042 -0.65476184]
[ 0.38358493 0.01208048]
[-0.60673761 2.331532 ]
[-0.30964085 0.21503249]
[-1.59706014 -0.19087153]
[ 0.68068169 -1.37959044]
[-1.10189888 0.56295021]
[-1.99318916 0.35999821]
[ 0.38358493 0.27301877]
[ 0.18552042 -0.27785096]
[ 1.47293972 -1.03167271]
[ 0.8787462 1.08482681]
[ 1.96810099 2.15757314]
[ 2.06713324 0.38899135]
[-1.39899564 -0.42281668]
[-1.20093113 -1.00267957]
[ 1.96810099 -0.91570013]
[ 0.38358493 0.30201192]
[ 0.18552042 0.1570462 ]
[ 2.06713324 1.75166912]
[ 0.77971394 -0.8287207 ]
[ 0.28455268 -0.27785096]
[ 0.38358493 -0.16187839]
[-0.11157634 2.21555943]
[-1.49802789 -0.62576869]
[-1.29996338 -1.06066585]
[-1.39899564 0.41798449]
[-1.10189888 0.76590222]
[-1.49802789 -0.19087153]
[ 0.97777845 -1.06066585]
[ 0.97777845 0.59194336]
[ 0.38358493 0.99784738]]
###Markdown
Training the SVM model on the Training set
###Code
from sklearn.svm import SVC
classifier = SVC(kernel = 'linear', random_state = 0)
classifier.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predicting a new result
###Code
print(classifier.predict(sc.transform([[30,87000]])))
###Output
[0]
###Markdown
Predicting the Test set results
###Code
y_pred = classifier.predict(X_test)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
###Output
[[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[1 1]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 1]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[1 1]
[1 1]
[0 0]
[0 0]
[0 0]
[1 1]
[0 1]
[0 0]
[0 0]
[0 1]
[0 0]
[0 0]
[1 1]
[0 0]
[0 1]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[0 1]
[0 0]
[0 0]
[1 0]
[0 0]
[1 1]
[1 1]
[1 1]
[1 0]
[0 0]
[0 0]
[1 1]
[1 1]
[0 0]
[1 1]
[0 1]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 1]
[0 0]
[0 1]
[1 1]
[1 1]]
###Markdown
Making the Confusion Matrix
###Code
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
###Output
[[66 2]
[ 8 24]]
###Markdown
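Before plotting, a quick sanity check on the confusion matrix printed above (a hedged, standalone snippet that simply reuses the printed numbers rather than recomputing from the model): with 66 true negatives, 2 false positives, 8 false negatives and 24 true positives, the accuracy is (66 + 24) / 100 = 0.90.

```python
# Hedged sanity check using the confusion matrix values printed above
# (rows = actual class, columns = predicted class for labels [0, 1])
tn, fp, fn, tp = 66, 2, 8, 24
accuracy = (tn + tp) / (tn + fp + fn + tp)   # (66 + 24) / 100 = 0.90
precision = tp / (tp + fp)                   # 24 / 26 ≈ 0.923
recall = tp / (tp + fn)                      # 24 / 32 = 0.75
print(accuracy, precision, recall)
```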
Visualising the Training set results
###Code
from matplotlib.colors import ListedColormap
X_set, y_set = sc.inverse_transform(X_train), y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25),
np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25))
plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('SVM (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
###Output
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
###Markdown
Visualising the Test set results
###Code
from matplotlib.colors import ListedColormap
X_set, y_set = sc.inverse_transform(X_test), y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25),
np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25))
plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('SVM (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
###Output
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
###Markdown
Support Vector Machine (SVM) Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
dataset = pd.read_csv('Social_Network_Ads.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
###Output
_____no_output_____
###Markdown
Splitting the dataset into the Training set and Test set
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
print(X_train)
print(y_train)
print(X_test)
print(y_test)
###Output
[0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 1 0 1 0 0 0 0 0 1 1 0 0 0 0
0 0 1 0 0 0 0 1 0 0 1 0 1 1 0 0 0 1 1 0 0 1 0 0 1 0 1 0 1 0 0 0 0 1 0 0 1
0 0 0 0 1 1 1 0 0 0 1 1 0 1 1 0 0 1 0 0 0 1 0 1 1 1]
###Markdown
Feature Scaling
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
print(X_train)
print(X_test)
###Output
[[-0.80480212 0.50496393]
[-0.01254409 -0.5677824 ]
[-0.30964085 0.1570462 ]
[-0.80480212 0.27301877]
[-0.30964085 -0.5677824 ]
[-1.10189888 -1.43757673]
[-0.70576986 -1.58254245]
[-0.21060859 2.15757314]
[-1.99318916 -0.04590581]
[ 0.8787462 -0.77073441]
[-0.80480212 -0.59677555]
[-1.00286662 -0.42281668]
[-0.11157634 -0.42281668]
[ 0.08648817 0.21503249]
[-1.79512465 0.47597078]
[-0.60673761 1.37475825]
[-0.11157634 0.21503249]
[-1.89415691 0.44697764]
[ 1.67100423 1.75166912]
[-0.30964085 -1.37959044]
[-0.30964085 -0.65476184]
[ 0.8787462 2.15757314]
[ 0.28455268 -0.53878926]
[ 0.8787462 1.02684052]
[-1.49802789 -1.20563157]
[ 1.07681071 2.07059371]
[-1.00286662 0.50496393]
[-0.90383437 0.30201192]
[-0.11157634 -0.21986468]
[-0.60673761 0.47597078]
[-1.6960924 0.53395707]
[-0.11157634 0.27301877]
[ 1.86906873 -0.27785096]
[-0.11157634 -0.48080297]
[-1.39899564 -0.33583725]
[-1.99318916 -0.50979612]
[-1.59706014 0.33100506]
[-0.4086731 -0.77073441]
[-0.70576986 -1.03167271]
[ 1.07681071 -0.97368642]
[-1.10189888 0.53395707]
[ 0.28455268 -0.50979612]
[-1.10189888 0.41798449]
[-0.30964085 -1.43757673]
[ 0.48261718 1.22979253]
[-1.10189888 -0.33583725]
[-0.11157634 0.30201192]
[ 1.37390747 0.59194336]
[-1.20093113 -1.14764529]
[ 1.07681071 0.47597078]
[ 1.86906873 1.51972397]
[-0.4086731 -1.29261101]
[-0.30964085 -0.3648304 ]
[-0.4086731 1.31677196]
[ 2.06713324 0.53395707]
[ 0.68068169 -1.089659 ]
[-0.90383437 0.38899135]
[-1.20093113 0.30201192]
[ 1.07681071 -1.20563157]
[-1.49802789 -1.43757673]
[-0.60673761 -1.49556302]
[ 2.1661655 -0.79972756]
[-1.89415691 0.18603934]
[-0.21060859 0.85288166]
[-1.89415691 -1.26361786]
[ 2.1661655 0.38899135]
[-1.39899564 0.56295021]
[-1.10189888 -0.33583725]
[ 0.18552042 -0.65476184]
[ 0.38358493 0.01208048]
[-0.60673761 2.331532 ]
[-0.30964085 0.21503249]
[-1.59706014 -0.19087153]
[ 0.68068169 -1.37959044]
[-1.10189888 0.56295021]
[-1.99318916 0.35999821]
[ 0.38358493 0.27301877]
[ 0.18552042 -0.27785096]
[ 1.47293972 -1.03167271]
[ 0.8787462 1.08482681]
[ 1.96810099 2.15757314]
[ 2.06713324 0.38899135]
[-1.39899564 -0.42281668]
[-1.20093113 -1.00267957]
[ 1.96810099 -0.91570013]
[ 0.38358493 0.30201192]
[ 0.18552042 0.1570462 ]
[ 2.06713324 1.75166912]
[ 0.77971394 -0.8287207 ]
[ 0.28455268 -0.27785096]
[ 0.38358493 -0.16187839]
[-0.11157634 2.21555943]
[-1.49802789 -0.62576869]
[-1.29996338 -1.06066585]
[-1.39899564 0.41798449]
[-1.10189888 0.76590222]
[-1.49802789 -0.19087153]
[ 0.97777845 -1.06066585]
[ 0.97777845 0.59194336]
[ 0.38358493 0.99784738]]
###Markdown
Training the SVM model on the Training set
###Code
# SVC class from sklearn.svm
from sklearn.svm import SVC
classifier = SVC(kernel = 'linear', random_state = 0)# create the SVC object with a linear kernel
classifier.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predicting a new result
###Code
print(classifier.predict(sc.transform([[30,87000]])))
###Output
[0]
###Markdown
Predicting the Test set results
###Code
y_pred = classifier.predict(X_test)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
###Output
[[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[1 1]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 1]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[1 1]
[1 1]
[0 0]
[0 0]
[0 0]
[1 1]
[0 1]
[0 0]
[0 0]
[0 1]
[0 0]
[0 0]
[1 1]
[0 0]
[0 1]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[0 1]
[0 0]
[0 0]
[1 0]
[0 0]
[1 1]
[1 1]
[1 1]
[1 0]
[0 0]
[0 0]
[1 1]
[1 1]
[0 0]
[1 1]
[0 1]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 1]
[0 0]
[0 1]
[1 1]
[1 1]]
###Markdown
Making the Confusion Matrix
###Code
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
###Output
[[66 2]
[ 8 24]]
###Markdown
Visualising the Training set results
###Code
from matplotlib.colors import ListedColormap
X_set, y_set = sc.inverse_transform(X_train), y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25),
np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25))
plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('SVM (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
###Output
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
###Markdown
Visualising the Test set results
###Code
from matplotlib.colors import ListedColormap
X_set, y_set = sc.inverse_transform(X_test), y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25),
np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25))
plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('SVM (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
###Output
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
|
xarm_env/hw4.ipynb | ###Markdown
Advanced Python for Data Science DS-GA-3001.001/.002 Homework Assignment 04 Due date: 03/02/2020, 10:00AM Student's Name: Student's e-mail: Problem 1 (100 points)The task is to optimize your solutions by using "line_profiler". Your submission "2020_spring_sol04_yourid.ipynb" will contain:- the first part is your original solution (a solution that you originally wrote); - the second part is your final, optimized solution after using line_profiler; - both of which will include the line_profiler results, and your detailed comments.The problem is to simulate a random motion of $n$ objects over a discrete time. Concretely, there is:- a unit square $[0,1]^2$, - $n$ points within the unit square, - and the time is discrete $t=0, 1, 2, \dots$. At time $t=0$, the positions of $n$ points are randomly and uniformly distributed within the unit square; call these positions $\{p_0, p_1, p_2,\dots, p_{n-1}\}$. At every time step $t \geq 0$, every point $i$ chooses to randomly move in one of four directions: left, right, up, down. The distance is also a random uniform number on $[0, \delta]$, where $\delta$ is given. That is, at every time step $t$ and for every $i$ we generate a random move as: $$ p_i := p_i + r_i \cdot u_i$$where $$ r_i \sim uniform[0, \delta],$$ and $u_i$ represents a random direction, i.e. a randomly chosen vector among $(-1, 0), (1, 0), (0, -1), (0, 1)$.**Dynamics**Now, one would like to examine and plot the diagram of the minimum distance $d_{\min}$ among these $n$ points over $T$ iterations.The task is to complete the rest of this notebook, where definitions of the functions main_original and main_optimized are given below.
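To make the dynamics concrete, here is a minimal sketch of a single time step for all $n$ points. It is only an illustrative fragment, not a solution to the assignment; the `points` array (a hypothetical array of shape $(n, 2)$) and the use of NumPy are assumptions.

```python
# Illustrative sketch of one time step (not a full or optimized solution).
# Assumes `points` is a hypothetical (n, 2) NumPy array of positions in the unit square.
directions = np.array([(-1, 0), (1, 0), (0, -1), (0, 1)])
u = directions[np.random.randint(0, 4, size=len(points))]  # random direction per point
r = np.random.uniform(0, delta, size=(len(points), 1))     # random step length per point
points = points + r * u
```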
###Code
import numpy as np
import random
import matplotlib.pyplot as plt
%load_ext line_profiler
###Output
_____no_output_____
###Markdown
--- The original code description: ** TO BE POPULATED **** EXPLAIN THE SOLUTION ** ---
###Code
def main_original(n, delta, T):
"""
n: is the number of uniformly at random generated points in the unit square
delta: a maximal move of a point in one of four random directions: left, right, up, or down
T: number of iterations
return:
lst_of_min_distances: of the minimum distances among all n points over times: t=0, 1, 2, \dots, T - 1,
it is a list of reals of length T"""
return
n = 1500
delta = 1.0 / n
T = 20
%lprun -f main_original lst_min_dist = main_original(n, delta, T)
# plot the diagram of the minimum distances:
# where we rescale the distance by a factor of $\sqrt{n}$:
print("len:", len(lst_min_dist))
plt.plot(range(T), np.array(lst_min_dist) * np.sqrt(n))
plt.show()
###Output
_____no_output_____
###Markdown
--- The optimized code description: ** TO BE POPULATED **** EXPLAIN THE SOLUTION ** ---
###Code
def main_optimized(n, delta, T):
"""
n: is the number of uniformly at random generated points in the unit square
delta: a maximal move of a point in one of four random directions: left, right, up, or down
T: number of iterations
return:
lst_of_min_distances: of the minimum distances among all n points over times: t=0, 1, 2, \dots, T - 1,
it is a list of reals of length T"""
return
n = 1000
delta = 1.0 / n
T = 50
%lprun -f main_optimized lst_min_dist = main_optimized(n, delta, T)
# plot the diagram of the minimum distances:
# where we rescale distance by a factor $\sqrt{n}$:
print("len:", len(lst_min_dist))
plt.plot(range(T), np.array(lst_min_dist) / np.sqrt(n))
plt.show()
###Output
_____no_output_____ |
Task 4/Task 4.ipynb | ###Markdown
Task 4 - Movie Recommendation System - IMDB 5000 Movie Data set Submitted by - Shivank Udayawal Problem Statement :* What factors are important that make a Movie more Successful than others. So, we would like to analyze what kind of movies are more successful, in other words, get higher IMDB score.* Building a Recommendation System About Dataset :* Dataset contains 28 variables for 5043 Movies, spanning across 100 years in 66 Countries. * There are 2399 unique Director Names & Thousands of actors/actresses. * “imdb_score” is the Response Variable while the other 27 variables are Possible Predictors. Variable Name & Description 1. color: Film colorization. -> ‘Black and White’ or ‘Color’ 2. director_name: Name of the Director of the Movie 3. num_critic_for_reviews: Number of critical reviews on imdb 4. duration: Duration in minutes 5. director_facebook_likes: Number of likes of the Director on his Facebook Page 6. actor_3_facebook_likes: Number of likes of the Actor_3 on his/her Facebook Page 7. actor_2_name: Other actor starring in the movie 8. actor_1_facebook_likes: Number of likes of the Actor_1 on his/her Facebook Page 9. gross: Gross earnings of the movie in Dollars 10. genres: Film categorization like ‘Animation’, ‘Comedy’, ‘Romance’, ‘Horror’, ‘Sci-Fi’, ‘Action’, ‘Family’ 11. actor_1_name: Primary actor starring in the movie 12. movie_title: Title of the Movie 13. num_voted_users: Number of people who voted for the movie 14. cast_total_facebook_likes: Total number of facebook likes of the entire cast of the movie 15. actor_3_name: Other actor starring in the movie 16. facenumber_in_poster: Number of the actor who featured in the movie poster 17. plot_keywords: Keywords describing the movie plot 18. movie_imdb_link: IMDB link of the movie 19. num_user_for_reviews: Number of users who gave a review 20. language: English, Arabic, Chinese, French, German, Danish, Italian, Japanese etc 21. country: Country where the movie is produced 22. content_rating: Content rating of the movie 23. budget: Budget of the movie in Dollars 24. title_year: The year in which the movie is released (1916:2016) 25. actor_2_facebook_likes: Number of likes of the Actor_2 on his/her Facebook Page 26. imdb_score: IMDB Score of the movie on IMDB 27. movie_facebook_likes: Number of Facebook likes in the movie page 28. aspect_ratio: Aspect ratio the movie was made in Importing Libraries
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib
%matplotlib inline
from matplotlib import pyplot as plt
import plotly.graph_objs as go
import plotly.offline as py
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
Reading the Data
###Code
Data = pd.read_csv("movie_metadata.csv")
Data.head()
Data.shape
###Output
_____no_output_____
###Markdown
Summary of Data
###Code
Data.info()
Data.describe().T
###Output
_____no_output_____
###Markdown
Total Unique Value
###Code
Data.nunique()
###Output
_____no_output_____
###Markdown
Total Missing Values
###Code
total_missing = Data.isnull().sum()
percentage = total_missing/Data.isnull().count()
NAs = pd.concat([total_missing, percentage*100], axis = 1, keys = ('Total','Percentage(%)'))
NAs[NAs.Total>0].sort_values(by = 'Total', ascending = False)
###Output
_____no_output_____
###Markdown
* There are many null values present in our dataset. Exploratory Data Analysis Univariate Analysis 1. Color
###Code
Data['color'].value_counts()
sns.set_style("whitegrid")
plt.figure(figsize = (18,5))
sns.countplot(x = 'color', data = Data)
plt.title('Distribution plot', fontsize = 16)
plt.xlabel('color', fontsize = 16)
color = ['Color', 'Black and White']
data = [4815, 201]
plt.pie(data, labels = color, autopct = "%1.2f%%", shadow = True, startangle = 90)
plt.title("Distribution of Movies according to color", fontsize = 20)
plt.show()
###Output
_____no_output_____
###Markdown
* Almost 96% Movies are in Color whereas only 4% are in Black & white. 2. Genres
###Code
Data['genres'].value_counts().head(15)
sns.set_style("whitegrid")
plt.figure(figsize = (18,5))
Data['genres'].value_counts()[0:15].plot.bar()
plt.title('Distribution plot', fontsize = 16)
plt.xlabel('Genres', fontsize = 16)
###Output
_____no_output_____
###Markdown
* Movies with Drama Genre is the mostly available in the Dataset. 3. Language
###Code
Data['language'].value_counts()
sns.set_style("whitegrid")
plt.figure(figsize = (18,5))
Data['language'].value_counts()[0:15].plot.bar()
plt.title('Distribution plot', fontsize = 16)
plt.xlabel('Language', fontsize = 16)
###Output
_____no_output_____
###Markdown
* The majority of the movies are in the English language, followed by French, Spanish and Hindi. 4. Country
###Code
Data['country'].value_counts().head(15)
sns.set_style("whitegrid")
plt.figure(figsize = (18,5))
Data['country'].value_counts()[0:15].plot.bar()
plt.title('Distribution plot', fontsize = 16)
plt.xlabel('Country', fontsize = 16)
###Output
_____no_output_____
###Markdown
* The majority of the movies are made in the USA, followed by the UK, France and Canada. 5. Budget
###Code
Data['budget'].mean()
###Output
_____no_output_____
###Markdown
6. IMDB Score
###Code
Data['imdb_score'].mean()
###Output
_____no_output_____
###Markdown
* The average IMDB rating of the movies is 6.44, on a scale from 1 to 10.
###Code
sns.set_style("whitegrid")
plt.figure(figsize = (18,5))
sns.distplot(Data['imdb_score'])
plt.title('Distribution Plot of IMDB Rating', fontsize = 16)
plt.xlabel('IMDB Rating', fontsize = 16)
###Output
_____no_output_____
###Markdown
7. Movie Title
###Code
Data['movie_title'].value_counts().head(15)
Data['title_year'].value_counts().head(15)
sns.set_style("whitegrid")
plt.figure(figsize = (18,5))
Data['title_year'].value_counts().head(15).plot.bar()
plt.title('Distribution plot', fontsize = 16)
plt.xlabel('Title Year', fontsize = 16)
Data['title_year'].isnull().sum()
Data['title_year'].fillna(0, inplace = True)
Data['title_year'] = Data['title_year'].apply(np.int64)
sns.set_style("whitegrid")
plt.figure(figsize = (18,5))
Data['title_year'].value_counts().head(15).plot.bar()
plt.title('Distribution plot', fontsize = 16)
plt.xlabel('Title Year', fontsize = 16)
###Output
_____no_output_____
###Markdown
* The largest number of movies were released in the year 2009.* The second-largest number of movies were released in the year 2014. Movie Recommendation System 1. Simple Recommender System (Based on IMDB Rating)
###Code
Data1 = Data.sort_values('imdb_score', ascending = False)
Data1[['movie_title', 'title_year', 'director_name', 'genres', 'language', 'imdb_score']].head(20)
###Output
_____no_output_____
###Markdown
* This is a very simple movie recommender system. * Here, the IMDB rating of the movie is taken into account and the recommendation is made based on that. * According to this: 1. Towering Inferno has the highest rating of 9.5/10 and it is an English comedy movie. 2. It is followed by The Shawshank Redemption, The Godfather and so on. 2. Content Based Recommender System * Creating a separate dataset for the recommender system.
###Code
Data_CB = Data[['director_name', 'actor_2_name', 'genres', 'title_year', 'actor_1_name', 'movie_title', 'actor_3_name']]
Data_CB.head()
###Output
_____no_output_____
###Markdown
* Formatting the Genres and Movie Title Columns.
###Code
Data_CB['genres'] = Data_CB['genres'].apply(lambda a: str(a).replace('|', ' '))
Data_CB['genres']
Data_CB['movie_title'][0]
Data_CB['movie_title'] = Data_CB['movie_title'].apply(lambda a:a[:-1])
Data_CB['movie_title'][0]
###Output
_____no_output_____
###Markdown
* Combined Features on which we will calculate Cosine Similarity
###Code
Data_CB['director_genre_actors'] = Data_CB['director_name']+' '+Data_CB['actor_1_name']+' '+' '+Data_CB['actor_2_name']+' '+Data_CB['actor_3_name']+' '+Data_CB['genres']
Data_CB.head()
Data_CB.isnull().sum()
Data_CB.fillna('', inplace = True)
Data_CB.isnull().sum()
###Output
_____no_output_____
###Markdown
Cosine Similarity : * Using the Cosine Similarity to calculate a Numeric Quantity that denotes the similarity between two movies. * Use the cosine similarity score since it is independent of magnitude and is relatively easy and fast to calculate. * Vectorizing and then Calculating Cosine Similarity
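For reference, the cosine similarity between two count vectors $A$ and $B$ is the cosine of the angle between them:
$$\cos(A, B) = \frac{A \cdot B}{\|A\|\,\|B\|} = \frac{\sum_i A_i B_i}{\sqrt{\sum_i A_i^2}\;\sqrt{\sum_i B_i^2}}$$
It equals 1 when the two feature vectors point in the same direction and 0 when they share no terms, independent of the vectors' magnitudes.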
###Code
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity
vec = CountVectorizer()
vec_matrix = vec.fit_transform(Data_CB['director_genre_actors'])
similarity = cosine_similarity(vec_matrix)
def recommend_movie(movie):
if movie not in Data_CB['movie_title'].unique():
return('Sorry! The movie you requested is not in our Database. Please check the spelling or try with some other movies')
else:
i = Data_CB.loc[Data_CB['movie_title'] == movie].index[0]
lst = list(enumerate(similarity[i]))
lst = sorted(lst, key = lambda x:x[1] ,reverse=True)
lst = lst[1:11]
l = []
year = []
for i in range(len(lst)):
a = lst[i][0]
l.append(Data_CB['movie_title'][a])
year.append(Data_CB['title_year'][a])
plt.figure(figsize = (10,5))
plt.bar(l, [i[1] for i in lst])
plt.xticks(rotation = 90)
plt.xlabel('Movies similar to → '+movie, fontsize = 12, fontweight = "bold")
plt.ylabel('cosine scores', fontsize = 12, fontweight = "bold")
plt.show()
df2 = pd.DataFrame({'Movies Recommended':l, 'Year':year})
        df2 = df2.drop_duplicates()
return df2
Data_CB['movie_title'].sample(10)
plt.figure(figsize = (35,10))
recommend_movie('The Kids Are All Right')
recommend_movie('The Godfather')
recommend_movie('Avatar')
recommend_movie('The Dark Knight Rises')
recommend_movie("Pirates of the Caribbean: At World's End")
###Output
_____no_output_____
###Markdown
Task 4: Decision Tree Algorithm By MITHIL Objective:For the given ‘Iris’ dataset, create the Decision Tree classifier and visualize it graphically. The purpose is that if we feed any new data to this classifier, it should be able to predict the right class accordingly. Importing all the required libraries:
###Code
import pandas as pd
from sklearn import preprocessing
from sklearn.tree import DecisionTreeClassifier
from sklearn.externals.six import StringIO
from IPython.display import Image
from sklearn.tree import export_graphviz
import pydotplus
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
import numpy as np
###Output
_____no_output_____
###Markdown
Loading the dataset
###Code
data=pd.read_csv("iris.csv")
data.head()
###Output
_____no_output_____
###Markdown
**Observation:** We can see that the labels that we need to predict are in the form of text. We do label encoding to transform texts to numerical categorical values. Label Encoding:
###Code
le = preprocessing.LabelEncoder()
data["labels"]=le.fit_transform(data["Species"])
data.head()
###Output
_____no_output_____
###Markdown
**Observation:** These are the numerical categorical values for the Species label after label encoding.
###Code
data['labels'].unique()
###Output
_____no_output_____
###Markdown
**Observation:** These are the unique class labels we need to predict for our model.
###Code
X=data.drop(['Id','Species','labels'],axis=1)
y=data['labels']
###Output
_____no_output_____
###Markdown
Splitting the data to train and test.
###Code
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2,random_state=10)
###Output
_____no_output_____
###Markdown
Train the modelWe train the Decision Tree Classifier and see its f1-score for multiclass labels.
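For reference, with `average="macro"` the score reported below is the unweighted mean of the per-class F1 scores:
$$F_1^{(c)} = \frac{2\,P_c R_c}{P_c + R_c}, \qquad F_1^{\text{macro}} = \frac{1}{C}\sum_{c=1}^{C} F_1^{(c)}$$
where $P_c$ and $R_c$ are the precision and recall for class $c$, and $C = 3$ for the three iris species.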
###Code
model=DecisionTreeClassifier()
model.fit(X_train,y_train)
ytrain_preds=model.predict(X_train)
ytest_preds=model.predict(X_test)
#print(ytrain_preds)
train_score=f1_score(np.array(y_train),ytrain_preds,average="macro")
test_score=f1_score(np.array(y_test),ytest_preds,average="macro")
print(model)
print("The f1-score of training data is {}".format(train_score))
print("The f1-score of test data is {}".format(test_score))
###Output
DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=None,
max_features=None, max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, presort=False, random_state=None,
splitter='best')
The f1-score of training data is 1.0
The f1-score of test data is 0.9644444444444445
###Markdown
Visualization of our trained model.We visualize the model as a flow diagram to see how it predicts the classes of iris flowers from the structure below.
###Code
dot_data = StringIO()
export_graphviz(model, out_file=dot_data, feature_names=data.columns[1:-2],
filled=True, rounded=True,
special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
###Output
_____no_output_____ |
utils/data/data_augmentation_shanghai_tech_part_b.ipynb | ###Markdown
1. Setup
###Code
import sys
sys.path.append('../..')
import albumentations as A
import matplotlib.pyplot as plt
import numpy as np
import os
import shutil
import warnings
from annotations import *
from utils.data.data_augmentation import *
from utils.data.data_ops import move_val_split_to_train
from utils.input_output.io import load_np_arrays, load_images
from utils.input_output.io import load_gt_counts
from utils.visualization.vis import plot_aug4
%matplotlib inline
%load_ext autoreload
%autoreload 2
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
2. ShanghaiTech (Part b) Dataset
###Code
DATASET_NAME = 'shanghai_tech/part_b'
DATASET_PATH = f'../../datasets/{DATASET_NAME}'
TRAIN_PATH = f'{DATASET_PATH}/train'
TRAIN_IMG_PATH = f'{TRAIN_PATH}/images'
TRAIN_GT_DOTS_PATH = f'{TRAIN_PATH}/gt_dots'
TRAIN_GT_COUNTS_PATH = f'{TRAIN_PATH}/gt_counts'
TRAIN_GT_DENSITY_MAPS_PATH = f'{TRAIN_PATH}/gt_density_maps'
VAL_PATH = f'{DATASET_PATH}/val'
TEST_PATH = f'{DATASET_PATH}/test'
TEST_IMG_PATH = f'{TEST_PATH}/images'
TEST_GT_DOTS_PATH = f'{TEST_PATH}/gt_dots'
TEST_GT_COUNTS_PATH = f'{TEST_PATH}/gt_counts'
TEST_GT_DENSITY_MAPS_PATH = f'{TEST_PATH}/gt_density_maps'
#move_val_split_to_train(VAL_PATH, TRAIN_PATH)
print(DATASET_PATH)
print(os.listdir(DATASET_PATH))
print(TRAIN_PATH)
print(os.listdir(TRAIN_PATH))
train_img_names = sorted(os.listdir(TRAIN_IMG_PATH))
train_dots_names = sorted(os.listdir(TRAIN_GT_DOTS_PATH))
test_img_names = sorted(os.listdir(TEST_IMG_PATH))
test_dots_names = sorted(os.listdir(TEST_GT_DOTS_PATH))
print(f'train split: {len(train_img_names)} images')
print(train_img_names[:3])
print(train_dots_names[:3])
print(f'\ntest split: {len(test_img_names)} images')
print(test_img_names[:3])
print(test_dots_names[:3])
train_dots_names = sorted(os.listdir(TRAIN_GT_DOTS_PATH))
test_dots_names = sorted(os.listdir(TEST_GT_DOTS_PATH))
print(TRAIN_GT_DOTS_PATH)
print(train_dots_names[:5])
print(TEST_GT_DOTS_PATH)
print(test_dots_names[:5])
###Output
../../datasets/shanghai_tech/part_b/train/gt_dots
['IMG_1.png', 'IMG_10.png', 'IMG_100.png', 'IMG_103.png', 'IMG_104.png']
../../datasets/shanghai_tech/part_b/test/gt_dots
['IMG_1.png', 'IMG_10.png', 'IMG_100.png', 'IMG_101.png', 'IMG_102.png']
###Markdown
Load some train images and density maps
###Code
train_images = load_images(TRAIN_IMG_PATH, train_img_names, num_images=10)
print(len(train_images))
print(train_images[0].dtype)
print(train_images[0].min(), train_images[0].max())
train_gt_density_maps = load_np_arrays(TRAIN_GT_DENSITY_MAPS_PATH, num=10)
print(len(train_gt_density_maps))
print(train_gt_density_maps.dtype)
for i in [8]:
img = train_images[i]
mask = train_gt_density_maps[i]
img_name = train_img_names[i]
aug_list1 = hflip(img, mask)
aug_list2 = rgb_shift(aug_list1, rseed=9001)
plot_aug4(aug_list1 + aug_list2)
SAVE_PATH = './aug_dir'
augment4_from_dir_and_save(in_path=TRAIN_PATH,
save_path=SAVE_PATH,
rseed=9001)
###Output
_____no_output_____ |
Release/donghyundavidchoi/20220224_StyleGAN3_Reactive_Audio_Choi_v1.ipynb | ###Markdown
StyleGAN3 Based Audio Reactive Media Art Generator ModelBy Team TechART from AIFFEL X SeSAC InstallationGit clone StyleGAN3 and install the requirements: StyleGAN3, ninja, torch 1.9.0, gdown 4.3
###Code
!git clone https://github.com/dvschultz/stylegan3.git
!git clone https://github.com/xinntao/Real-ESRGAN.git
!wget https://github.com/ninja-build/ninja/releases/download/v1.8.2/ninja-linux.zip
!sudo unzip ninja-linux.zip -d /usr/local/bin/
!sudo update-alternatives --install /usr/bin/ninja ninja /usr/local/bin/ninja 1 --force
!pip install torch==1.9.0
!pip install gdown==4.3
###Output
Collecting gdown==4.3
Downloading gdown-4.3.0.tar.gz (13 kB)
Installing build dependencies ... [?25l[?25hdone
Getting requirements to build wheel ... [?25l[?25hdone
Preparing wheel metadata ... [?25l[?25hdone
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from gdown==4.3) (1.15.0)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from gdown==4.3) (4.62.3)
Requirement already satisfied: beautifulsoup4 in /usr/local/lib/python3.7/dist-packages (from gdown==4.3) (4.6.3)
Requirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from gdown==4.3) (3.6.0)
Requirement already satisfied: requests[socks] in /usr/local/lib/python3.7/dist-packages (from gdown==4.3) (2.23.0)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests[socks]->gdown==4.3) (2021.10.8)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests[socks]->gdown==4.3) (1.24.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests[socks]->gdown==4.3) (2.10)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests[socks]->gdown==4.3) (3.0.4)
Requirement already satisfied: PySocks!=1.5.7,>=1.5.6 in /usr/local/lib/python3.7/dist-packages (from requests[socks]->gdown==4.3) (1.7.1)
Building wheels for collected packages: gdown
Building wheel for gdown (PEP 517) ... [?25l[?25hdone
Created wheel for gdown: filename=gdown-4.3.0-py3-none-any.whl size=14412 sha256=20056f74248e2515c3d93756039071c8f757376cc04f0f426c3b1d6f2c1c2192
Stored in directory: /root/.cache/pip/wheels/fd/ce/f8/389eafb78bce55ea78740dfcafc3c9da6f5e70d25c0377610d
Successfully built gdown
Installing collected packages: gdown
Attempting uninstall: gdown
Found existing installation: gdown 4.2.1
Uninstalling gdown-4.2.1:
Successfully uninstalled gdown-4.2.1
Successfully installed gdown-4.3.0
###Markdown
Import Requirements
###Code
import sys
sys.path.append('/content/stylegan3')
import os
import re
import glob
import shutil
import numpy as np
import scipy
from scipy.io import wavfile
from scipy.signal import savgol_filter
import matplotlib.pyplot as plt
import PIL
import moviepy.editor
import torch
import pickle
import random
###Output
Imageio: 'ffmpeg-linux64-v3.3.1' was not found on your computer; downloading it now.
Try 1. Download from https://github.com/imageio/imageio-binaries/raw/master/ffmpeg/ffmpeg-linux64-v3.3.1 (43.8 MB)
Downloading: 8192/45929032 bytes (0.0%)3825664/45929032 bytes (8.3%)7938048/45929032 bytes (17.3%)11993088/45929032 bytes (26.1%)14319616/45929032 bytes (31.2%)17498112/45929032 bytes (38.1%)21045248/45929032 bytes (45.8%)24969216/45929032 bytes (54.4%)28983296/45929032 bytes (63.1%)32792576/45929032 bytes (71.4%)36798464/45929032 bytes (80.1%)40681472/45929032 bytes (88.6%)44711936/45929032 bytes (97.4%)45929032/45929032 bytes (100.0%)
Done
File saved as /root/.imageio/ffmpeg/ffmpeg-linux64-v3.3.1.
###Markdown
Load ContentsLoad trained model and wav file to create a media art
###Code
!gdown --fuzzy https://drive.google.com/file/d/1_Cneq6wuh2f8_rKES1rbuFT5wYTqpXwD/view?usp=sharing
!gdown --fuzzy https://drive.google.com/file/d/15kx9SgWin7OCXQovGzvXr_d3l04bhZ6y/view?usp=sharing
!gdown --fuzzy https://drive.google.com/file/d/1wHjX4oFzwbvWYsKzeC0GsVd3jrFnnpfA/view?usp=sharing
!gdown --fuzzy https://drive.google.com/file/d/1ea8UuF3X22ikDjSKC7pB2VPhCAtWUZH3/view?usp=sharing
!gdown --fuzzy https://drive.google.com/file/d/1dth8edwCGqnAB0h9GoXxT4FxfEeZOYjE/view?usp=sharing
!wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P /content/Real-ESRGAN/experiments/pretrained_models
###Output
--2022-02-24 01:24:21-- https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth
Resolving github.com (github.com)... 140.82.121.4
Connecting to github.com (github.com)|140.82.121.4|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/387326890/08f0e941-ebb7-48f0-9d6a-73e87b710e7e?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20220224%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220224T012421Z&X-Amz-Expires=300&X-Amz-Signature=fd5de3eb754f0fffb36d6c7893cd717abc9755dd5c4c913688d4ed877a990ed9&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=387326890&response-content-disposition=attachment%3B%20filename%3DRealESRGAN_x4plus.pth&response-content-type=application%2Foctet-stream [following]
--2022-02-24 01:24:21-- https://objects.githubusercontent.com/github-production-release-asset-2e65be/387326890/08f0e941-ebb7-48f0-9d6a-73e87b710e7e?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20220224%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220224T012421Z&X-Amz-Expires=300&X-Amz-Signature=fd5de3eb754f0fffb36d6c7893cd717abc9755dd5c4c913688d4ed877a990ed9&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=387326890&response-content-disposition=attachment%3B%20filename%3DRealESRGAN_x4plus.pth&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 67040989 (64M) [application/octet-stream]
Saving to: ‘/content/Real-ESRGAN/experiments/pretrained_models/RealESRGAN_x4plus.pth’
RealESRGAN_x4plus.p 100%[===================>] 63.93M 365MB/s in 0.2s
2022-02-24 01:24:21 (365 MB/s) - ‘/content/Real-ESRGAN/experiments/pretrained_models/RealESRGAN_x4plus.pth’ saved [67040989/67040989]
###Markdown
Set Deviceset cuda as default device
###Code
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
###Output
_____no_output_____
###Markdown
Audio Preprocess: Load the audio file and plot the waveform. You are able to adjust several parameters: 1. fps, 2. window_length, 3. polyorder, 4. compression. Adjust these parameters precisely to achieve the waveform you desire.
###Code
wav_filename = "/content/forest10s.wav"
audio = {}
fps = 24 # set the frames per second of the output video
# waveform sensitivity settings / window_length must be an odd number / polyorder must be smaller than window_length
window_length = 33
polyorder = 3
compression = 1/2
if not os.path.exists(wav_filename):
audio_clip = moviepy.editor.AudioFileClip(wav_filename)
audio_clip.write_audiofile(wav_filename, fps=44100, nbytes=2, codec='pcm_s16le')
track_name = os.path.basename(wav_filename)[:-4]
rate, signal = wavfile.read(wav_filename)
signal = np.mean(signal, axis=1) # mix stereo down to mono
signal = np.abs(signal) # keep the amplitude envelope only
duration = signal.shape[0] / rate
frames = int(np.ceil(duration * fps))
samples_per_frame = signal.shape[0] / frames
audio[track_name] = np.zeros(frames, dtype=signal.dtype)
for frame in range(frames):
start = int(round(frame * samples_per_frame))
stop = int(round((frame + 1) * samples_per_frame))
audio[track_name][frame] = np.mean(signal[start:stop], axis=0)
audio[track_name] = audio[track_name] ** compression # compress the dynamic range
audio[track_name] = savgol_filter(audio[track_name], window_length, polyorder) # smooth the per-frame envelope
audio[track_name] = audio[track_name] / max(audio[track_name]) # normalize to [0, 1]
print("Total frames : ", frames)
for track in sorted(audio.keys()):
plt.figure(figsize=(15, 5))
plt.title(track)
plt.plot(audio[track])
plt.savefig(f'../{track}.png')
###Output
Total frames : 244
###Markdown
FunctionsSome functions for media art generator
###Code
def load_networks(path):
with open(path, 'rb') as stream:
G = pickle.load(stream)['G_ema'].to(device)
G.eval()
return G
#----------------------------------------------------------------------------
def audio_reactive_linear(v0, v1, f):
return (v0*(1.0-f)+v1*f)
#----------------------------------------------------------------------------
def seed_generator(size):
result = []
for v in range(size):
result.append(random.randint(0, 1000))
return result
#----------------------------------------------------------------------------
def generate_images(seeds_top, seeds_bottom, truncation_psi, output_filename):
# produce z noise
z_t = torch.from_numpy(np.stack([np.random.RandomState(seed).randn(G.z_dim) for seed in seeds_top])).to(device)
z_b = torch.from_numpy(np.stack([np.random.RandomState(seed).randn(G.z_dim) for seed in seeds_bottom])).to(device)
# w mapping
w_t = G.mapping(z_t, None, truncation_value)
w_b = G.mapping(z_b, None, truncation_value)
# interpolation
x_t = np.linspace(0, frames, len(seeds_top), endpoint=True)
x_b = np.linspace(0, frames, len(seeds_bottom), endpoint=True)
y_t = [w.cpu().numpy() for w in w_t]
y_b = [w.cpu().numpy() for w in w_b]
w_t_i = scipy.interpolate.interp1d(x_t, y_t, kind='cubic', axis=0)
w_t_v = w_t_i(np.arange(frames))
w_b_i = scipy.interpolate.interp1d(x_b, y_b, kind='cubic', axis=0)
w_b_v = w_b_i(np.arange(frames))
# audio reactive
dlatents = []
for f in range(frames):
dlatents.append(audio_reactive_linear(w_b_v[f],w_t_v[f],audio[track_name][f]))
    # temporary directory
if os.path.isdir('/content/temp'):
shutil.rmtree('/content/temp')
os.mkdir('/content/temp')
temp_dir = '/content/temp'
# image generation
dlatent_avg = G.mapping.w_avg # [component]
for row, dlatent in enumerate(dlatents):
count = row + 1
dl = (torch.from_numpy(dlatent).to(device) - dlatent_avg)*truncation_psi + dlatent_avg
row_images = G.synthesis(ws=dl.unsqueeze(0) ,noise_mode='const')[0]
row_image = (row_images.permute(1,2,0)*127.5+128).clamp(0,255).to(torch.uint8)
row_image = row_image.cpu().numpy()
PIL.Image.fromarray(row_image, 'RGB').save('%s/frame%05d.png' % (temp_dir, row))
print('Generating images %d/%d ...' % (count, len(dlatents)))
# image to video with audio
mp4_filename = output_filename + '.mp4'
mp4_filename = os.path.join('/content', mp4_filename)
video = moviepy.editor.ImageSequenceClip(temp_dir, fps=fps)
audio_clip = moviepy.editor.AudioFileClip(wav_filename)
video = video.set_audio(audio_clip)
video.write_videofile(mp4_filename, fps=fps, codec='libx264', audio_codec='aac', bitrate='5M')
    # remove the temporary directory and files
shutil.rmtree('/content/temp')
###Output
_____no_output_____
###Markdown
Load trained model: Load the pickle file you want to use for media art generation
###Code
network_pkl = '/content/awesome_beach.pkl'
G = load_networks(network_pkl)
###Output
_____no_output_____
###Markdown
Generate Images and Merge to Video: Use the audio volume to interpolate between two seed lists. Change the number of seeds to manage the flow velocity.
###Code
seeds_top_num = 20
seeds_bottom_num = 4
seeds_top = seed_generator(seeds_top_num)
seeds_bottom = seed_generator(seeds_bottom_num)
truncation_value = 1
generate_images(seeds_top, seeds_bottom, truncation_value, '20220224_01_test')
###Output
Setting up PyTorch plugin "bias_act_plugin"... Done.
Generating images 1/244 ...
Setting up PyTorch plugin "filtered_lrelu_plugin"... Done.
Generating images 2/244 ...
Generating images 3/244 ...
Generating images 4/244 ...
Generating images 5/244 ...
Generating images 6/244 ...
Generating images 7/244 ...
Generating images 8/244 ...
Generating images 9/244 ...
Generating images 10/244 ...
Generating images 11/244 ...
Generating images 12/244 ...
Generating images 13/244 ...
Generating images 14/244 ...
Generating images 15/244 ...
Generating images 16/244 ...
Generating images 17/244 ...
Generating images 18/244 ...
Generating images 19/244 ...
Generating images 20/244 ...
Generating images 21/244 ...
Generating images 22/244 ...
Generating images 23/244 ...
Generating images 24/244 ...
Generating images 25/244 ...
Generating images 26/244 ...
Generating images 27/244 ...
Generating images 28/244 ...
Generating images 29/244 ...
Generating images 30/244 ...
Generating images 31/244 ...
Generating images 32/244 ...
Generating images 33/244 ...
Generating images 34/244 ...
Generating images 35/244 ...
Generating images 36/244 ...
Generating images 37/244 ...
Generating images 38/244 ...
Generating images 39/244 ...
Generating images 40/244 ...
Generating images 41/244 ...
Generating images 42/244 ...
Generating images 43/244 ...
Generating images 44/244 ...
Generating images 45/244 ...
Generating images 46/244 ...
Generating images 47/244 ...
Generating images 48/244 ...
Generating images 49/244 ...
Generating images 50/244 ...
Generating images 51/244 ...
Generating images 52/244 ...
Generating images 53/244 ...
Generating images 54/244 ...
Generating images 55/244 ...
Generating images 56/244 ...
Generating images 57/244 ...
Generating images 58/244 ...
Generating images 59/244 ...
Generating images 60/244 ...
Generating images 61/244 ...
Generating images 62/244 ...
Generating images 63/244 ...
Generating images 64/244 ...
Generating images 65/244 ...
Generating images 66/244 ...
Generating images 67/244 ...
Generating images 68/244 ...
Generating images 69/244 ...
Generating images 70/244 ...
Generating images 71/244 ...
Generating images 72/244 ...
Generating images 73/244 ...
Generating images 74/244 ...
Generating images 75/244 ...
Generating images 76/244 ...
Generating images 77/244 ...
Generating images 78/244 ...
Generating images 79/244 ...
Generating images 80/244 ...
Generating images 81/244 ...
Generating images 82/244 ...
Generating images 83/244 ...
Generating images 84/244 ...
Generating images 85/244 ...
Generating images 86/244 ...
Generating images 87/244 ...
Generating images 88/244 ...
Generating images 89/244 ...
Generating images 90/244 ...
Generating images 91/244 ...
Generating images 92/244 ...
Generating images 93/244 ...
Generating images 94/244 ...
Generating images 95/244 ...
Generating images 96/244 ...
Generating images 97/244 ...
Generating images 98/244 ...
Generating images 99/244 ...
Generating images 100/244 ...
Generating images 101/244 ...
Generating images 102/244 ...
Generating images 103/244 ...
Generating images 104/244 ...
Generating images 105/244 ...
Generating images 106/244 ...
Generating images 107/244 ...
Generating images 108/244 ...
Generating images 109/244 ...
Generating images 110/244 ...
Generating images 111/244 ...
Generating images 112/244 ...
Generating images 113/244 ...
Generating images 114/244 ...
Generating images 115/244 ...
Generating images 116/244 ...
Generating images 117/244 ...
Generating images 118/244 ...
Generating images 119/244 ...
Generating images 120/244 ...
Generating images 121/244 ...
Generating images 122/244 ...
Generating images 123/244 ...
Generating images 124/244 ...
Generating images 125/244 ...
Generating images 126/244 ...
Generating images 127/244 ...
Generating images 128/244 ...
Generating images 129/244 ...
Generating images 130/244 ...
Generating images 131/244 ...
Generating images 132/244 ...
Generating images 133/244 ...
Generating images 134/244 ...
Generating images 135/244 ...
Generating images 136/244 ...
Generating images 137/244 ...
Generating images 138/244 ...
Generating images 139/244 ...
Generating images 140/244 ...
Generating images 141/244 ...
Generating images 142/244 ...
Generating images 143/244 ...
Generating images 144/244 ...
Generating images 145/244 ...
Generating images 146/244 ...
Generating images 147/244 ...
Generating images 148/244 ...
Generating images 149/244 ...
Generating images 150/244 ...
Generating images 151/244 ...
Generating images 152/244 ...
Generating images 153/244 ...
Generating images 154/244 ...
Generating images 155/244 ...
Generating images 156/244 ...
Generating images 157/244 ...
Generating images 158/244 ...
Generating images 159/244 ...
Generating images 160/244 ...
Generating images 161/244 ...
Generating images 162/244 ...
Generating images 163/244 ...
Generating images 164/244 ...
Generating images 165/244 ...
Generating images 166/244 ...
Generating images 167/244 ...
Generating images 168/244 ...
Generating images 169/244 ...
Generating images 170/244 ...
Generating images 171/244 ...
Generating images 172/244 ...
Generating images 173/244 ...
Generating images 174/244 ...
Generating images 175/244 ...
Generating images 176/244 ...
Generating images 177/244 ...
Generating images 178/244 ...
Generating images 179/244 ...
Generating images 180/244 ...
Generating images 181/244 ...
Generating images 182/244 ...
Generating images 183/244 ...
Generating images 184/244 ...
Generating images 185/244 ...
Generating images 186/244 ...
Generating images 187/244 ...
Generating images 188/244 ...
Generating images 189/244 ...
Generating images 190/244 ...
Generating images 191/244 ...
Generating images 192/244 ...
Generating images 193/244 ...
Generating images 194/244 ...
Generating images 195/244 ...
Generating images 196/244 ...
Generating images 197/244 ...
Generating images 198/244 ...
Generating images 199/244 ...
Generating images 200/244 ...
Generating images 201/244 ...
Generating images 202/244 ...
Generating images 203/244 ...
Generating images 204/244 ...
Generating images 205/244 ...
Generating images 206/244 ...
Generating images 207/244 ...
Generating images 208/244 ...
Generating images 209/244 ...
Generating images 210/244 ...
Generating images 211/244 ...
Generating images 212/244 ...
Generating images 213/244 ...
Generating images 214/244 ...
Generating images 215/244 ...
Generating images 216/244 ...
Generating images 217/244 ...
Generating images 218/244 ...
Generating images 219/244 ...
Generating images 220/244 ...
Generating images 221/244 ...
Generating images 222/244 ...
Generating images 223/244 ...
Generating images 224/244 ...
Generating images 225/244 ...
Generating images 226/244 ...
Generating images 227/244 ...
Generating images 228/244 ...
Generating images 229/244 ...
Generating images 230/244 ...
Generating images 231/244 ...
Generating images 232/244 ...
Generating images 233/244 ...
Generating images 234/244 ...
Generating images 235/244 ...
Generating images 236/244 ...
Generating images 237/244 ...
Generating images 238/244 ...
Generating images 239/244 ...
Generating images 240/244 ...
Generating images 241/244 ...
Generating images 242/244 ...
Generating images 243/244 ...
Generating images 244/244 ...
[MoviePy] >>>> Building video /content/20220224_01_test.mp4
[MoviePy] Writing audio in 20220224_01_testTEMP_MPY_wvf_snd.mp4
Chapter 08 - Heat Equations/.ipynb_checkpoints/Heat Equation- Crank-Nicolson in notes-checkpoint.ipynb | ###Markdown
Heat Equation The Differential Equation$$ \frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}$$ Initial Condition$$ u(x,0)=2x, \ \ 0 \leq x \leq \frac{1}{2} $$$$ u(x,0)=2(1-x), \ \ \frac{1}{2} \leq x \leq 1 $$ Boundary Condition$$ u(0,t)=0, u(1,t)=0 $$ The Difference Equation$$ w[i,j+1] = w[i,j] + \frac{1}{2}\left(\frac{k}{h^2}(w[i+1,j+1]-2w[i,j+1]+w[i-1,j+1])+\frac{k}{h^2}(w[i+1,j]-2w[i,j]+w[i-1,j])\right)$$$$ -rw[i-1,j+1]+(2+2r)w[i,j+1]-rw[i+1,j+1]=rw[i-1,j]+(2-2r)w[i,j]+rw[i+1,j]$$where $r=\frac{k}{h^2}$
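In matrix form (a restatement of the scheme above, matching the arrays assembled in the code below): collecting the interior unknowns $\mathbf{w}^{j}=(w[1,j],\dots,w[N-1,j])^T$ and using the zero boundary values, each time step solves the tridiagonal system$$ A\,\mathbf{w}^{j+1}=B\,\mathbf{w}^{j},\qquad A=\begin{pmatrix}2+2r&-r&&\\-r&2+2r&\ddots&\\&\ddots&\ddots&-r\\&&-r&2+2r\end{pmatrix},\qquad B=\begin{pmatrix}2-2r&r&&\\r&2-2r&\ddots&\\&\ddots&\ddots&r\\&&r&2-2r\end{pmatrix}$$so that $\mathbf{w}^{j+1}=A^{-1}B\,\mathbf{w}^{j}$, which is exactly the update $C=A^{-1}B$ applied in the code.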
###Code
# LIBRARY
# vector manipulation
import numpy as np
# math functions
import math
# THIS IS FOR PLOTTING
%matplotlib inline
import matplotlib.pyplot as plt # side-stepping mpl backend
import warnings
warnings.filterwarnings("ignore")
N=4
Nt=32
h=1/N
ht=1/Nt
time_iteration=100
time=np.arange(0,(time_iteration+.5)*ht,ht)
x=np.arange(0,1.0001,h)
w=np.zeros((N+1,time_iteration+1))
r=ht/(h*h)
A=np.zeros((N-1,N-1))
B=np.zeros((N-1,N-1))
c=np.zeros(N-1)
b=np.zeros(N-1)
b[0]=0
# Initial Condition
for i in range (1,N):
w[i,0]=4*x[i]*x[i]-4*x[i]+1
#w[i,0]=2*x[i]
#if x[i]>0.5:
# w[i,0]=2*(1-x[i])
# Boundary Condition
for k in range (0,time_iteration):
w[0,k]=1
w[N,k]=1
for i in range (0,N-1):
A[i,i]=2+2*r
B[i,i]=2-2*r
for i in range (0,N-2):
A[i+1,i]=-r
A[i,i+1]=-r
B[i+1,i]=r
B[i,i+1]=r
plt.show()
Ainv=np.linalg.inv(A)
C=np.dot(Ainv,B)
for k in range (1,time_iteration+1):
w[1:(N),k]=np.dot(C,w[1:(N),k-1])
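# Note (alternative, not used above): forming the explicit inverse is fine for this small
# system, but for larger N it is usually preferable to solve the tridiagonal system at each
# step instead of inverting A, e.g.
# w[1:(N), k] = np.linalg.solve(A, np.dot(B, w[1:(N), k-1]))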
print(x)
print(w[:,0])
print(w[:,1])
print(A)
print(B)
print(w[:,2])
print(w[:,3])
print(w[:,4])
print(w[:,5])
print(time)
fig = plt.figure(figsize=(8,4))
plt.plot(w)
plt.xlabel('x')
plt.ylabel('w')
fig = plt.figure()
plt.imshow(w.transpose())
plt.xticks(np.arange(len(x)), x)
plt.yticks(np.arange(len(time)), time)
plt.xlabel('x')
plt.ylabel('time')
clb=plt.colorbar()
clb.set_label('w')
plt.show()
###Output
[ 0. 0.25 0.5 0.75 1. ]
[ 1. 0.25 0. 0.25 1. ]
[ 1. 0.10294118 0.11764706 0.10294118 1. ]
[[ 3. -0.5 0. ]
[-0.5 3. -0.5]
[ 0. -0.5 3. ]]
[[ 1. 0.5 0. ]
[ 0.5 1. 0.5]
[ 0. 0.5 1. ]]
[ 1. 0.0700692 0.09688581 0.0700692 1. ]
[ 1. 0.05164869 0.0728679 0.05164869 1. ]
[ 1. 0.0384125 0.05430969 0.0384125 1. ]
[ 1. 0.02859566 0.04043928 0.02859566 1. ]
[ 0. 0.03125 0.0625 0.09375 0.125 0.15625 0.1875 0.21875
0.25 0.28125 0.3125 0.34375 0.375 0.40625 0.4375 0.46875
0.5 0.53125 0.5625 0.59375 0.625 0.65625 0.6875 0.71875
0.75 0.78125 0.8125 0.84375 0.875 0.90625 0.9375 0.96875
1. 1.03125 1.0625 1.09375 1.125 1.15625 1.1875 1.21875
1.25 1.28125 1.3125 1.34375 1.375 1.40625 1.4375 1.46875
1.5 1.53125 1.5625 1.59375 1.625 1.65625 1.6875 1.71875
1.75 1.78125 1.8125 1.84375 1.875 1.90625 1.9375 1.96875
2. 2.03125 2.0625 2.09375 2.125 2.15625 2.1875 2.21875
2.25 2.28125 2.3125 2.34375 2.375 2.40625 2.4375 2.46875
2.5 2.53125 2.5625 2.59375 2.625 2.65625 2.6875 2.71875
2.75 2.78125 2.8125 2.84375 2.875 2.90625 2.9375 2.96875
3. 3.03125 3.0625 3.09375 3.125 ]
Naas/Naas_Emailbuilder_demo.ipynb | ###Markdown
Naas - Emailbuilder demo **Tags:** naas emailbuilder Input Import libraries
###Code
import naas_drivers
import naas
import pandas as pd
###Output
_____no_output_____
###Markdown
Variables
###Code
# List to emails address of the receiver(s)
email_to = [""]
# Email sender : Can only take your email account or [email protected]
email_from = ""
# Email subject
subject = "My Object"
###Output
_____no_output_____
###Markdown
Model Build the email
###Code
table = pd.DataFrame({
"Table Header 1": ["Left element 1", "Left element 2", "Left element 3"],
"Table Header 2": ["Right element 1", "Right element 2", "Right element 3"]
})
link = "https://www.naas.ai/"
img = "https://gblobscdn.gitbook.com/spaces%2F-MJ1rzHSMrn3m7xaPUs_%2Favatar-1602072063433.png?alt=media"
list_bullet = ["First element",
"Second element",
"Third element",
naas_drivers.emailbuilder.link(link, "Fourth element"),
]
footer_icons = [{
"img_src": img,
"href": link
}]
email_content = {
'element': naas_drivers.emailbuilder.title("This is a title"),
'heading': naas_drivers.emailbuilder.heading("This is a heading"),
'subheading': naas_drivers.emailbuilder.subheading("This is a subheading"),
'text': naas_drivers.emailbuilder.text("This is a text"),
'link': naas_drivers.emailbuilder.link(link, "This is a link"),
'button': naas_drivers.emailbuilder.button(link, "This is a button"),
'list': naas_drivers.emailbuilder.list(list_bullet),
'table': naas_drivers.emailbuilder.table(table, header=True, border=True),
'image': naas_drivers.emailbuilder.image(img),
'footer': naas_drivers.emailbuilder.footer_company(networks=footer_icons, company=["Company informations"], legal=["Legal informations"])
}
content = naas_drivers.emailbuilder.generate(display='iframe',
**email_content)
###Output
_____no_output_____
###Markdown
Output Send the email
###Code
naas.notification.send(email_to=email_to,
subject=subject,
html=content,
email_from=email_from)
###Output
_____no_output_____
###Markdown
Naas - Emailbuilder demo **Tags:** naas emailbuilder snippet operations **Author:** [Florent Ravenel](https://www.linkedin.com/in/ACoAABCNSioBW3YZHc2lBHVG0E_TXYWitQkmwog/) Input Import libraries
###Code
import naas_drivers
import naas
import pandas as pd
###Output
_____no_output_____
###Markdown
Variables
###Code
# List to emails address of the receiver(s)
email_to = [""]
# Email sender : Can only take your email account or [email protected]
email_from = ""
# Email subject
subject = "My Object"
###Output
_____no_output_____
###Markdown
Model Build the email
###Code
table = pd.DataFrame({
"Table Header 1": ["Left element 1", "Left element 2", "Left element 3"],
"Table Header 2": ["Right element 1", "Right element 2", "Right element 3"]
})
link = "https://www.naas.ai/"
img = "https://gblobscdn.gitbook.com/spaces%2F-MJ1rzHSMrn3m7xaPUs_%2Favatar-1602072063433.png?alt=media"
list_bullet = ["First element",
"Second element",
"Third element",
naas_drivers.emailbuilder.link(link, "Fourth element"),
]
footer_icons = [{
"img_src": img,
"href": link
}]
email_content = {
'element': naas_drivers.emailbuilder.title("This is a title"),
'heading': naas_drivers.emailbuilder.heading("This is a heading"),
'subheading': naas_drivers.emailbuilder.subheading("This is a subheading"),
'text': naas_drivers.emailbuilder.text("This is a text"),
'link': naas_drivers.emailbuilder.link(link, "This is a link"),
'button': naas_drivers.emailbuilder.button(link, "This is a button"),
'list': naas_drivers.emailbuilder.list(list_bullet),
'table': naas_drivers.emailbuilder.table(table, header=True, border=True),
'image': naas_drivers.emailbuilder.image(img),
'footer': naas_drivers.emailbuilder.footer_company(networks=footer_icons, company=["Company informations"], legal=["Legal informations"])
}
content = naas_drivers.emailbuilder.generate(display='iframe',
**email_content)
###Output
_____no_output_____
###Markdown
Output Send the email
###Code
naas.notification.send(email_to=email_to,
subject=subject,
html=content,
email_from=email_from)
###Output
_____no_output_____
###Markdown
Naas - Emailbuilder demo Input
###Code
# List to emails address of the receiver(s)
email_to = [""]
# Email sender : Can only take your email account or [email protected]
email_from = ""
# Email subject
subject = "My Object"
###Output
_____no_output_____
###Markdown
Model
###Code
import naas_drivers
import naas
import pandas as pd
table = pd.DataFrame({
"Table Header 1": ["Left element 1", "Left element 2", "Left element 3"],
"Table Header 2": ["Right element 1", "Right element 2", "Right element 3"]
})
link = "https://www.naas.ai/"
img = "https://gblobscdn.gitbook.com/spaces%2F-MJ1rzHSMrn3m7xaPUs_%2Favatar-1602072063433.png?alt=media"
list_bullet = ["First element",
"Second element",
"Third element",
naas_drivers.emailbuilder.link(link, "Fourth element"),
]
footer_icons = [{
"img_src": img,
"href": link
}]
email_content = {
'element': naas_drivers.emailbuilder.title("This is a title"),
'heading': naas_drivers.emailbuilder.heading("This is a heading"),
'subheading': naas_drivers.emailbuilder.subheading("This is a subheading"),
'text': naas_drivers.emailbuilder.text("This is a text"),
'link': naas_drivers.emailbuilder.link(link, "This is a link"),
'button': naas_drivers.emailbuilder.button(link, "This is a button"),
'list': naas_drivers.emailbuilder.list(list_bullet),
'table': naas_drivers.emailbuilder.table(table, header=True, border=True),
'image': naas_drivers.emailbuilder.image(img),
'footer': naas_drivers.emailbuilder.footer_company(networks=footer_icons, company=["Company informations"], legal=["Legal informations"])
}
content = naas_drivers.emailbuilder.generate(display='iframe',
**email_content)
###Output
_____no_output_____
###Markdown
Output
###Code
naas.notification.send(email_to=email_to,
subject=subject,
html=content,
email_from=email_from)
###Output
_____no_output_____
###Markdown
Naas - Emailbuilder demo Input
###Code
# List to emails address of the receiver(s)
email_to = [""]
# Email sender : Can only take your email account or [email protected]
email_from = ""
# Email subject
subject = "My Object"
###Output
_____no_output_____
###Markdown
Model
###Code
import naas_drivers
import naas
import pandas as pd
table = pd.DataFrame({
"Table Header 1": ["Left element 1", "Left element 2", "Left element 3"],
"Table Header 2": ["Right element 1", "Right element 2", "Right element 3"]
})
link = "https://www.naas.ai/"
img = "https://gblobscdn.gitbook.com/spaces%2F-MJ1rzHSMrn3m7xaPUs_%2Favatar-1602072063433.png?alt=media"
list_bullet = ["First element",
"Second element",
"Third element",
               naas_drivers.emailbuilder.link(link, "Fourth element"),
]
footer_icons = [{
"img_src": img,
"href": link
}]
email_content = {
'element': naas_drivers.emailbuilder.title("This is a title"),
'heading': naas_drivers.emailbuilder.heading("This is a heading"),
'subheading': naas_drivers.emailbuilder.subheading("This is a subheading"),
'text': naas_drivers.emailbuilder.text("This is a text"),
'link': naas_drivers.emailbuilder.link(link, "This is a link"),
'button': naas_drivers.emailbuilder.button(link, "This is a button"),
'list': naas_drivers.emailbuilder.list(list_bullet),
'table': naas_drivers.emailbuilder.table(table, header=True, border=True),
'image': naas_drivers.emailbuilder.image(img),
'footer': naas_drivers.emailbuilder.footer_company(networks=footer_icons, company=["Company informations"], legal=["Legal informations"])
}
content = naas_drivers.emailbuilder.generate(display='iframe',
**email_content)
###Output
_____no_output_____
###Markdown
Output
###Code
naas.notification.send(email_to,
subject,
content,
email_from)
###Output
_____no_output_____
###Markdown
Naas - Emailbuilder demo **Tags:** naas emailbuilder snippet **Author:** [Florent Ravenel](https://www.linkedin.com/in/ACoAABCNSioBW3YZHc2lBHVG0E_TXYWitQkmwog/) Input Import libraries
###Code
import naas_drivers
import naas
import pandas as pd
###Output
_____no_output_____
###Markdown
Variables
###Code
# List to emails address of the receiver(s)
email_to = [""]
# Email sender : Can only take your email account or [email protected]
email_from = ""
# Email subject
subject = "My Object"
###Output
_____no_output_____
###Markdown
Model Build the email
###Code
table = pd.DataFrame({
"Table Header 1": ["Left element 1", "Left element 2", "Left element 3"],
"Table Header 2": ["Right element 1", "Right element 2", "Right element 3"]
})
link = "https://www.naas.ai/"
img = "https://gblobscdn.gitbook.com/spaces%2F-MJ1rzHSMrn3m7xaPUs_%2Favatar-1602072063433.png?alt=media"
list_bullet = ["First element",
"Second element",
"Third element",
naas_drivers.emailbuilder.link(link, "Fourth element"),
]
footer_icons = [{
"img_src": img,
"href": link
}]
email_content = {
'element': naas_drivers.emailbuilder.title("This is a title"),
'heading': naas_drivers.emailbuilder.heading("This is a heading"),
'subheading': naas_drivers.emailbuilder.subheading("This is a subheading"),
'text': naas_drivers.emailbuilder.text("This is a text"),
'link': naas_drivers.emailbuilder.link(link, "This is a link"),
'button': naas_drivers.emailbuilder.button(link, "This is a button"),
'list': naas_drivers.emailbuilder.list(list_bullet),
'table': naas_drivers.emailbuilder.table(table, header=True, border=True),
'image': naas_drivers.emailbuilder.image(img),
'footer': naas_drivers.emailbuilder.footer_company(networks=footer_icons, company=["Company informations"], legal=["Legal informations"])
}
content = naas_drivers.emailbuilder.generate(display='iframe',
**email_content)
###Output
_____no_output_____
###Markdown
Output Send the email
###Code
naas.notification.send(email_to=email_to,
subject=subject,
html=content,
email_from=email_from)
###Output
_____no_output_____
code/random_forest_iris.ipynb | ###Markdown
Random Forest Classifier - Iris Dataset[](https://github.com/eabarnes1010/course_objective_analysis/tree/main/code)[](https://colab.research.google.com/github/eabarnes1010/course_objective_analysis/blob/main/code/random_forest_iris.ipynb)* Iris example adapted from: https://www.datacamp.com/community/tutorials/random-forests-classifier-python* Further modified by: Aaron Hill and Wei-Ting Hsiao (Dept. of Atmospheric Science, Colorado State University), January 2020* Further adapted by: Prof. Elizabeth Barnes for ATS 655 and ATS 780A7 Spring 2022 at Colorado State University
###Code
try:
import google.colab
IN_COLAB = True
except:
IN_COLAB = False
print('IN_COLAB = ' + str(IN_COLAB))
import sys
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import sklearn
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
from sklearn.inspection import permutation_importance
from sklearn.tree import export_graphviz
import pydot
import matplotlib as mpl
mpl.rcParams["figure.facecolor"] = "white"
mpl.rcParams["figure.dpi"] = 150
print(f"python version = {sys.version}")
print(f"numpy version = {np.__version__}")
print(f"scikit-learn version = {sklearn.__version__}")
# Make the path of your own Google Drive accessible to save a figure
if IN_COLAB:
try:
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
local_path = '/content/drive/My Drive/Colab Notebooks/'
except:
local_path = './'
else:
local_path = '../figures/'
#Load dataset
iris = datasets.load_iris()
print('target/labels: ' + str(iris.target_names))
print(' features: ' + str(iris.feature_names))
# create DataFrame so it looks nice for visualizing the data
data=pd.DataFrame({
'sepal length':iris.data[:,0],
'sepal width':iris.data[:,1],
'petal length':iris.data[:,2],
'petal width':iris.data[:,3],
'species':iris.target
})
data.head()
# Split data into training and testing
X=data[['sepal length', 'sepal width', 'petal length', 'petal width']] # Features
y=data['species'] # Labels
# Split dataset into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42) # 70% training and 30% test
# Create and Train the Random Forest Classifier
#-------------------------------------------------------------------------------------------------
# MODIFY: important tunable parameters for model
number_of_trees = 30 # number of trees to "average" together to create a random forest
tree_depth = 3 # maximum depth allowed for each tree
node_split = 2 # minimum number of training samples needed to split a node
leaf_samples = 1 # minimum number of training samples required to make a leaf node
criterion = 'gini' # 'gini' or 'entropy'
bootstrap = False # whether to perform "bagging=bootstrap aggregating" or not
max_samples = None # number of samples to grab when training each tree IF bootstrap=True, otherwise None
RAND_STATE = 17
#-------------------------------------------------------------------------------------------------
rf=RandomForestClassifier(n_estimators=number_of_trees,
criterion=criterion,
random_state=RAND_STATE,
min_samples_split = node_split,
min_samples_leaf = leaf_samples,
max_depth = tree_depth,
bootstrap = bootstrap,
max_samples = max_samples,
)
# train the model using the training sets
rf.fit(X_train,y_train)
# make predictions on the test set
y_test_pred=rf.predict(X_test)
acc = metrics.accuracy_score(y_test, y_test_pred) # compute the accuracy on the test set
print("testing accuracy: ", np.around(acc*100), '%')
from graphviz import Source
fig_savename = 'RF_iris_tree'
tree = rf[-1]
export_graphviz(tree,
out_file=local_path + '/' + fig_savename+'.dot',
filled=True,
proportion=False,
leaves_parallel=False,
class_names=iris.target_names,
feature_names=iris.feature_names)
Source.from_file(local_path + fig_savename+'.dot')
###Output
_____no_output_____
###Markdown
Feature Importance
###Code
def calc_importances(rf, feature_list):
# Get numerical feature importances
importances = list(rf.feature_importances_)
# List of tuples with variable and importance
feature_importances = [(feature, round(importance, 2)) for feature, importance in zip(feature_list, importances)]
# Sort the feature importances by most important first
feature_importances = sorted(feature_importances, key = lambda x: x[1], reverse = True)
# Print out the feature and importances
print('')
[print('Variable: {:20} Importance: {}'.format(*pair)) for pair in feature_importances]
print('')
return importances
def plot_feat_importances(importances, feature_list):
plt.figure()
# Set the style
plt.style.use('fivethirtyeight')
# list of x locations for plotting
x_values = list(range(len(importances)))
# Make a bar chart
plt.barh(x_values, importances)
# Tick labels for x axis
plt.yticks(x_values, feature_list)
# Axis labels and title
plt.xlabel('Importance'); plt.ylabel('Variable'); plt.title('Variable Importances')
plot_feat_importances(calc_importances(rf, iris.feature_names), iris.feature_names)
###Output
Variable: petal width (cm) Importance: 0.52
Variable: petal length (cm) Importance: 0.4
Variable: sepal length (cm) Importance: 0.07
Variable: sepal width (cm) Importance: 0.01
###Markdown
Permutation Importance
###Code
# Single-pass permutation
permute = permutation_importance(rf, X, y, n_repeats=20,
random_state=RAND_STATE)
# Sort the importances
sorted_idx = permute.importances_mean.argsort()
def plot_perm_importances(permute, sorted_idx, feature_list):
# Sort the feature list based on
new_feature_list = []
for index in sorted_idx:
new_feature_list.append(feature_list[index])
fig, ax = plt.subplots()
ax.boxplot(permute.importances[sorted_idx].T,
vert=False, labels=new_feature_list)
ax.set_title("Permutation Importances")
fig.tight_layout()
plot_perm_importances(permute, sorted_idx, iris.feature_names)
###Output
_____no_output_____
###Markdown
Random Forest Classifier - Iris Dataset[](https://github.com/eabarnes1010/course_ml_ats/tree/main/code)[](https://colab.research.google.com/github/eabarnes1010/course_ml_ats/blob/main/code/random_forest_iris.ipynb)* Iris example adapted from: https://www.datacamp.com/community/tutorials/random-forests-classifier-python* Further modified by: Aaron Hill and Wei-Ting Hsiao (Dept. of Atmospheric Science, Colorado State University), January 2020* Further adapted by: Prof. Elizabeth Barnes for ATS 780A7 Spring 2022 at Colorado State University
###Code
try:
import google.colab
IN_COLAB = True
except:
IN_COLAB = False
print('IN_COLAB = ' + str(IN_COLAB))
import sys
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import sklearn
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
from sklearn.inspection import permutation_importance
from sklearn.tree import export_graphviz
import pydot
print(f"python version = {sys.version}")
print(f"numpy version = {np.__version__}")
print(f"scikit-learn version = {sklearn.__version__}")
# Make the path of your own Google Drive accessible to save a figure
if IN_COLAB:
try:
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
local_path = '/content/drive/My Drive/Colab Notebooks/'
except:
local_path = './'
else:
local_path = '../figures/'
#Load dataset
iris = datasets.load_iris()
print('target/labels: ' + str(iris.target_names))
print(' features: ' + str(iris.feature_names))
# create DataFrame so it looks nice for visualizing the data
data=pd.DataFrame({
'sepal length':iris.data[:,0],
'sepal width':iris.data[:,1],
'petal length':iris.data[:,2],
'petal width':iris.data[:,3],
'species':iris.target
})
data.head()
# Split data into training and testing
X=data[['sepal length', 'sepal width', 'petal length', 'petal width']] # Features
y=data['species'] # Labels
# Split dataset into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42) # 70% training and 30% test
# Create and Train the Random Forest Classifier
#-------------------------------------------------------------------------------------------------
# MODIFY: important tunable parameters for model
number_of_trees = 30 # number of trees to "average" together to create a random forest
tree_depth = 3 # maximum depth allowed for each tree
node_split = 2 # minimum number of training samples needed to split a node
leaf_samples = 1 # minimum number of training samples required to make a leaf node
criterion = 'gini' # 'gini' or 'entropy'
bootstrap = False # whether to perform "bagging=bootstrap aggregating" or not
max_samples = None # number of samples to grab when training each tree IF bootstrap=True, otherwise None
RAND_STATE = 17
#-------------------------------------------------------------------------------------------------
rf=RandomForestClassifier(n_estimators=number_of_trees,
criterion=criterion,
random_state=RAND_STATE,
min_samples_split = node_split,
min_samples_leaf = leaf_samples,
max_depth = tree_depth,
bootstrap = bootstrap,
max_samples = max_samples,
)
# train the model using the training sets
rf.fit(X_train,y_train)
# make predictions on the test set
y_test_pred=rf.predict(X_test)
acc = metrics.accuracy_score(y_test, y_test_pred) # compute the accuracy on the test set
print("testing accuracy: ", np.around(acc*100), '%')
from graphviz import Source
fig_savename = 'RF_iris_tree'
tree = rf[-1]
export_graphviz(tree,
out_file=local_path + '/' + fig_savename+'.dot',
filled=True,
proportion=False,
leaves_parallel=False,
class_names=iris.target_names,
feature_names=iris.feature_names)
Source.from_file(local_path + fig_savename+'.dot')
###Output
_____no_output_____
###Markdown
Feature Importance
###Code
def calc_importances(rf, feature_list):
# Get numerical feature importances
importances = list(rf.feature_importances_)
# List of tuples with variable and importance
feature_importances = [(feature, round(importance, 2)) for feature, importance in zip(feature_list, importances)]
# Sort the feature importances by most important first
feature_importances = sorted(feature_importances, key = lambda x: x[1], reverse = True)
# Print out the feature and importances
print('')
[print('Variable: {:20} Importance: {}'.format(*pair)) for pair in feature_importances]
print('')
return importances
def plot_feat_importances(importances, feature_list):
plt.figure()
# Set the style
plt.style.use('fivethirtyeight')
# list of x locations for plotting
x_values = list(range(len(importances)))
# Make a bar chart
plt.barh(x_values, importances)
# Tick labels for x axis
plt.yticks(x_values, feature_list)
# Axis labels and title
plt.xlabel('Importance'); plt.ylabel('Variable'); plt.title('Variable Importances')
plot_feat_importances(calc_importances(rf, iris.feature_names), iris.feature_names)
###Output
Variable: petal width (cm) Importance: 0.52
Variable: petal length (cm) Importance: 0.4
Variable: sepal length (cm) Importance: 0.07
Variable: sepal width (cm) Importance: 0.01
###Markdown
Permutation Importance
###Code
# Single-pass permutation
permute = permutation_importance(rf, X, y, n_repeats=20,
random_state=RAND_STATE)
# Sort the importances
sorted_idx = permute.importances_mean.argsort()
def plot_perm_importances(permute, sorted_idx, feature_list):
# Sort the feature list based on
new_feature_list = []
for index in sorted_idx:
new_feature_list.append(feature_list[index])
fig, ax = plt.subplots()
ax.boxplot(permute.importances[sorted_idx].T,
vert=False, labels=new_feature_list)
ax.set_title("Permutation Importances")
fig.tight_layout()
plot_perm_importances(permute, sorted_idx, iris.feature_names)
###Output
_____no_output_____ |
BOXER_Make_Bounding_Boxes_Around_Objects.ipynb | ###Markdown
BOXER - Real time automatic creation of bounding boxes and salience maps around detected objects for fixed backgrounds.**Example Results:** - Bounding box creation around a mobile phone  - Bounding box creation and salient object highlighting for a mobile phone  **Features:** - *Quick:* No training of the model is required as it only utilizes the feature maps output by the model. - *Real-Time:* (Check github for real time scripts with OpenCV). Only a single pass through the model is needed for both bounding box and salient map creation. Since the model is single-layered and lightweight, there is no perceivable delay. - *Customizable:* Change the hyperparameter values for better results. - *Save your results:* Save the images with the bounding box and salient map, along with a text file for the annotation dimensions. **STEP 0: Imports and hyperparameters**Run the following cell for all the necessary imports and to initialize the hyperparameters.
###Code
import imutils
import tensorflow.keras as k
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.preprocessing.image import img_to_array
from numpy import expand_dims
import matplotlib.pyplot as plt
import math
import numpy as np
import cv2
from PIL import Image as Img
image_size = (224,224)
kappa, kappa_s = 7, 0
shift = 100
color = (0, 0, 0)
thickness = 2
###Output
_____no_output_____
###Markdown
**STEP 1: Background Capture**Take a picture of the fixed background.Note: Salience map highlighting works well when the background is: - light colored - stable (no moving components like leaves outside a window) - uniform (low levels of darker regions are permissible)
###Code
# BACKGROUND PICTURE
from IPython.display import display, Javascript
from google.colab.output import eval_js
from base64 import b64decode
def take_photo(filename='background.jpg', quality=0.8):
js = Javascript('''
async function takePhoto(quality) {
const div = document.createElement('div');
const capture = document.createElement('button');
capture.textContent = 'Capture';
div.appendChild(capture);
const video = document.createElement('video');
video.style.display = 'block';
const stream = await navigator.mediaDevices.getUserMedia({video: true});
document.body.appendChild(div);
div.appendChild(video);
video.srcObject = stream;
await video.play();
google.colab.output.setIframeHeight(document.documentElement.scrollHeight, true);
await new Promise((resolve) => capture.onclick = resolve);
const canvas = document.createElement('canvas');
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
canvas.getContext('2d').drawImage(video, 0, 0);
stream.getVideoTracks()[0].stop();
div.remove();
return canvas.toDataURL('image/jpeg', quality);
}
''')
display(js)
data = eval_js('takePhoto({})'.format(quality))
binary = b64decode(data.split(',')[1])
with open(filename, 'wb') as f:
f.write(binary)
return filename
from IPython.display import Image
try:
filename = take_photo()
print('Saved to {}'.format(filename))
display(Image(filename))
except Exception as err:
print(str(err))
###Output
_____no_output_____
###Markdown
**STEP 2: Model creation**Using tensorflow's keras module, create a single layered convolutional layer.Notes about model: - SeparableConv2D layer is used as it is quicker but with similar results. - A depth multiplier can be used to increase the number of output channels. - The filter dimension is 1 by 1 to generate a number feature maps, each with a single channel in depth.
###Code
model = k.models.Sequential([k.layers.SeparableConv2D(64, (1, 1), activation='relu', input_shape=(image_size[0], image_size[1], 3), depth_multiplier=3)])
output_layer = 0
outputs = [model.layers[output_layer].output]
box_model = Model(inputs=model.inputs, outputs=outputs)
box_model.summary()
###Output
Model: "functional_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
separable_conv2d_input (Inpu [(None, 224, 224, 3)] 0
_________________________________________________________________
separable_conv2d (SeparableC (None, 224, 224, 64) 649
=================================================================
Total params: 649
Trainable params: 649
Non-trainable params: 0
_________________________________________________________________
###Markdown
**STEP 3: Encode a feature map for the background image**Generate a number of feature maps (64 in this case) by passing the background image through the model. Average the feature maps to get a single feature map.
###Code
background = load_img('/content/background.jpg', target_size=(image_size))
background_img = img_to_array(background)
background_img = expand_dims(background_img, axis=0)
feature_maps = box_model.predict(background_img)
import matplotlib.pyplot as plt
fmap_back_avg = np.zeros(shape=(feature_maps.shape[1], feature_maps.shape[2]))
span = int(math.sqrt(feature_maps.shape[-1]))
for fmap in feature_maps:
i = 1
for _ in range(span):
for _ in range(span):
fmap_back_avg += fmap[:, :, i - 1].squeeze()
i += 1
fmap_back_avg /= (span ** 2)
plt.imshow(fmap_back_avg)
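# Equivalent vectorized form of the loop above (it averages all 64 channels of the
# single background feature map); shown here only as a cross-check:
fmap_back_avg_vec = np.mean(feature_maps[0], axis=-1)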
###Output
_____no_output_____
###Markdown
**STEP 4: Take the input image**
###Code
# INPUT PICTURE
from IPython.display import display, Javascript
from google.colab.output import eval_js
from base64 import b64decode
def take_photo(filename='input.jpg', quality=0.8):
js = Javascript('''
async function takePhoto(quality) {
const div = document.createElement('div');
const capture = document.createElement('button');
capture.textContent = 'Capture';
div.appendChild(capture);
const video = document.createElement('video');
video.style.display = 'block';
const stream = await navigator.mediaDevices.getUserMedia({video: true});
document.body.appendChild(div);
div.appendChild(video);
video.srcObject = stream;
await video.play();
google.colab.output.setIframeHeight(document.documentElement.scrollHeight, true);
await new Promise((resolve) => capture.onclick = resolve);
const canvas = document.createElement('canvas');
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
canvas.getContext('2d').drawImage(video, 0, 0);
stream.getVideoTracks()[0].stop();
div.remove();
return canvas.toDataURL('image/jpeg', quality);
}
''')
display(js)
data = eval_js('takePhoto({})'.format(quality))
binary = b64decode(data.split(',')[1])
with open(filename, 'wb') as f:
f.write(binary)
return filename
from IPython.display import Image
try:
filename = take_photo()
print('Saved to {}'.format(filename))
display(Image(filename))
except Exception as err:
print(str(err))
###Output
_____no_output_____
###Markdown
**STEP 5: Encode and display the results**If results aren't too good, play around with the kappa and kappa_s values
###Code
kappa, kappa_s = 7, 0
inp = load_img('/content/input.jpg', target_size=(image_size))
inp = img_to_array(inp)
input_image = expand_dims(inp, axis=0)
feature_maps = box_model.predict(input_image)
fmap_avg = np.zeros(shape=(feature_maps.shape[1], feature_maps.shape[2]))
span = int(math.sqrt(feature_maps.shape[-1]))
for fmap in feature_maps:
i = 1
for _ in range(span):
for _ in range(span):
fmap_avg += fmap[:, :, i - 1].squeeze()
i += 1
fmap_avg /= (span ** 2)
diff = fmap_back_avg - fmap_avg
sal_diff = fmap_back_avg - fmap_avg
sal_diff[sal_diff <= kappa_s] = 0
sal_diff[sal_diff > kappa_s] = shift
diff[diff <= kappa] = 0
diff[diff > kappa] = shift
startx, endx, y = [], [], []
count = 0
for i in diff:
if max(i) != 0:
y.append(count)
lis = list(i)
startx.append(lis.index(shift))
endx.append(len(lis) - list(reversed(lis)).index(shift) - 1)
count += 1
startx = np.array(startx)
startx = (startx).astype('int')
endx = np.array(endx)
endx = (endx).astype('int')
y = np.array(y)
y = (y).astype('int')
start, end = (0, 0), (0, 0)
if not (len(startx) == 0 or len(endx) == 0 or len(y) == 0):
start = (min(startx), max(min(y), 0))
end = (max(endx), max(y))
inp[:, :, 2] = inp[:, :, 2] + sal_diff
cv2.rectangle(inp, start, end, color, thickness)
out = cv2.resize(inp, (500,500))
display(Img.fromarray(out.astype('uint8'), 'RGB'))
###Output
_____no_output_____
###Markdown
**OPTIONAL: Salience map stabilizer**
###Code
x = fmap_back_avg - fmap_avg
for i in range(223):
for j in range(223-(1+10+1)):
lis = np.array([x[i][k] for k in range(j,j+1)])
lis2 = np.array([x[i][k] for k in range(j+1,j+11)])
lis3 = np.array([x[i][k] for k in range(j+11,j+12)])
if all(lis<0) and list(lis2>0).count(True)<5 and all(lis3<0):
for l in range(j+1,j+11):
if x[i][l] > 0:
x[i][l] = 0-x[i][l]
for i in range(223):
for j in range(223-(1+10+1)):
lis = np.array([x[i][k] for k in range(j,j+1)])
lis2 = np.array([x[i][k] for k in range(j+1,j+11)])
lis3 = np.array([x[i][k] for k in range(j+11,j+12)])
if all(lis>0) and any(lis2<0) and all(lis3>0):
for l in range(j+1,j+11):
if x[i][l] < 0:
x[i][l] = 0-x[i][l]
x[x <= kappa_s] = 0
x[x > kappa_s] = shift
inp[:, :, 2] = inp[:, :, 2] - sal_diff + x
out = cv2.resize(inp, (500,500))
display(Img.fromarray(out.astype('uint8'), 'RGB'))
###Output
_____no_output_____
###Markdown
**Save your results**
###Code
cv2.imwrite('Image.jpg', out)
f = open('annot.txt', 'w+')
f.write(str(start)+str(end))
f.close()
###Output
_____no_output_____ |
Day 3 Assignment.ipynb | ###Markdown
Question 1You all are Pilots, you want to land a plane safely, so altitude required for landing a plane is1000ft, it it is less than tell pilot to land the plane, or it is more than that but less than 5000ft askthe pilot to “come down to 1000ft”, else if it more than 5000ft ask the pilot to “go around and trylater”
###Code
Altitude=int(input("Enter Altitude:- "))
if Altitude <= 1000:
print("Safe to land")
elif Altitude>=1000 and Altitude<=5000:
print("come down to 1000ft")
else:
print("go around and try later")
###Output
Enter Altitude:- 60000
go around and try later
###Markdown
Question 2Using for loop please print all the prime numbers between 1- 200 using FOR LOOP AND RANGE function.
###Code
for number in range(1,200):
if number > 1:
for i in range(2,number):
if (number%i)==0:
break
else:
print("The Prime Number is",number)
###Output
The Prime Number is 2
The Prime Number is 3
The Prime Number is 5
The Prime Number is 7
The Prime Number is 11
The Prime Number is 13
The Prime Number is 17
The Prime Number is 19
The Prime Number is 23
The Prime Number is 29
The Prime Number is 31
The Prime Number is 37
The Prime Number is 41
The Prime Number is 43
The Prime Number is 47
The Prime Number is 53
The Prime Number is 59
The Prime Number is 61
The Prime Number is 67
The Prime Number is 71
The Prime Number is 73
The Prime Number is 79
The Prime Number is 83
The Prime Number is 89
The Prime Number is 97
The Prime Number is 101
The Prime Number is 103
The Prime Number is 107
The Prime Number is 109
The Prime Number is 113
The Prime Number is 127
The Prime Number is 131
The Prime Number is 137
The Prime Number is 139
The Prime Number is 149
The Prime Number is 151
The Prime Number is 157
The Prime Number is 163
The Prime Number is 167
The Prime Number is 173
The Prime Number is 179
The Prime Number is 181
The Prime Number is 191
The Prime Number is 193
The Prime Number is 197
The Prime Number is 199
###Markdown
Question 1You all are Pilots, you want to land a plane safely, so altitude required for landing a plane is1000ft, it it is less than tell pilot to land the plane, or it is more than that but less than 5000ft askthe pilot to “come down to 1000ft”, else if it more than 5000ft ask the pilot to “go around and trylater”
###Code
altitude = input("Enter Altitude:- ")
altitude = int(altitude)
if altitude <= 1000:
print("Plane is safe to land")
elif altitude >1000 and altitude<5000:
print("Bring down to 1000ft")
else:
print("Turn around and try later")
###Output
Enter Altitude:- 1200
Bring down to 1000ft
###Markdown
Question 2Using for loop please print all the prime numbers between 1- 200 using FOR LOOP AND RANGEfunction.
###Code
for i in range (1, 201):
count = 0
for j in range(2, (i//2 + 1)):
if(i % j == 0):
count = count + 1
if (count == 0 and i != 1):
print(i, end = ' ')
###Output
2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97 101 103 107 109 113 127 131 137 139 149 151 157 163 167 173 179 181 191 193 197 199
###Markdown
Day 3 Assignment 1
###Code
Alt = int(input("Please enter the altitude in ft: "))
if Alt <= 1000:
print("Safe to land")
elif Alt > 1000 and Alt <= 5000:
print("Bring down to 1000ft")
else:
print("GO AROUND and TRY LATER")
###Output
Please enter the altitude in ft: 955
Safe to land
###Markdown
Day 3 Assignment 2
###Code
lower = 1
upper = 200
print("Prime numbers between", lower, "and", upper, "are:")
for num in range(lower, upper + 1):
if num > 1:
for i in range(2, num):
if (num % i) == 0:
break
else:
print(num)
###Output
Prime numbers between 1 and 200 are:
2
3
5
7
11
13
17
19
23
29
31
37
41
43
47
53
59
61
67
71
73
79
83
89
97
101
103
107
109
113
127
131
137
139
149
151
157
163
167
173
179
181
191
193
197
199
###Markdown
Question 1
###Code
a=input("Enter current altitude")
b=int(a)
if b<=1000:
print("safe to land")
elif b<4500:
print("Bring down to 1000")
else:
print("Turn Around")
a=input("Enter current altitude: ")
b=int(a)
if b<=1000:
print("safe to land")
elif b<4500:
print("Bring down to 1000")
else:
print("Turn Around")
a=input("Enter current altitude:")
b=int(a)
if b<=1000:
print("safe to land")
elif b < 5000:
print("Bring down to 1000")
else:
print("Turn Around")
###Output
Enter current altitude:6500
Turn Around
###Markdown
Question 2: Print all the prime numbers between 1 and 200.
###Code
min=int(input("Enter the starting number"))
max=int(input("Enter the last number"))
for num in range(min,max+1):
if num>1:
for i in range(2,num):
if(num%i)==0:
break
else:
print(num)
###Output
Enter the starting number1
Enter the last number200
2
3
5
7
11
13
17
19
23
29
31
37
41
43
47
53
59
61
67
71
73
79
83
89
97
101
103
107
109
113
127
131
137
139
149
151
157
163
167
173
179
181
191
193
197
199
###Markdown
Question1
###Code
a=1000
b=int(input())
if(b<=a):
print("safe to land")
elif (a < b < 5000):
print("Bring down to 1000")
else:
print("Turn Around")
###Output
1000
safe to land
###Markdown
Question 2
###Code
lower =1
upper =200
print("Prime numbers between", lower,"and",upper,"are:")
for num in range(lower,upper+ 1):
if num>1:
for i in range(2, num):
if(num%i)==0:
break
else:
print(num)
###Output
Prime numbers between 1 and 200 are:
2
3
5
7
11
13
17
19
23
29
31
37
41
43
47
53
59
61
67
71
73
79
83
89
97
101
103
107
109
113
127
131
137
139
149
151
157
163
167
173
179
181
191
193
197
199
###Markdown
1. Problem Statement Write a program to subtract two complex numbers in Python
###Code
num1 = complex(input("Enter 1st Complex Number: "))
num2 = complex(input("Enter 2nd Complex Number: "))
num3 = num1-num2
print("Difference of two complex numbers:", num3)
###Output
Enter 1st Complex Number: 1+2j
Enter 2nd Complex Number: 2+3j
Difference of two complex numbers: (-1-1j)
###Markdown
2. Problem Statement Write a program to find the fourth root of a number
###Code
num1 = int(input("Enter the number for fourth root:"))
print("Fourth root of",num1,"is :",num1**(1/4))
###Output
Enter the number for fourth root:12
Fourth root of 12 is : 1.8612097182041991
###Markdown
3. Problem Statement Write a program to swap two numbers in Python with the help of a temporary variable
###Code
num1 = int(input("Enter the First Number:"))
num2 = int(input("Enter the Second Number:"))
print("\nBefore Swapping \nFirst Number = ",num1,"\nSecond Number = ",num2)
temp = num1
num1 = num2
num2 = temp
print("\nAfter Swapping \nFirst Number = ",num1,"\nSecond Number = ",num2)
###Output
Enter the First Number:1
Enter the Second Number:2
Before Swapping
First Number = 1
Second Number = 2
After Swapping
First Number = 2
Second Number = 1
###Markdown
4. Problem Statement Write a program to swap two numbers in Python without using a temporary variable
###Code
num1 = int(input("Enter the First Number:"))
num2 = int(input("Enter the Second Number:"))
print("\nBefore Swapping \nFirst Number = ",num1,"\nSecond Number = ",num2)
num1 = num1^num2
num2 = num1^num2
num1 = num1^num2
print("\nAfter Swapping \nFirst Number = ",num1,"\nSecond Number = ",num2)
###Output
Enter the First Number:1
Enter the Second Number:2
Before Swapping
First Number = 1
Second Number = 2
After Swapping
First Number = 2
Second Number = 1
###Markdown
5. Problem Statement Write a program to convert fahrenheit to kelvin and celsius both
###Code
Fahrenheit = float(input("Enter Fahrenheit:"))
Celsius = (Fahrenheit - 32) * 5.0/9.0
print("Celsius value",Celsius)
kelvin = Celsius + 273.15
print("Kelvin value:",kelvin)
###Output
Enter Fahrenheit:98.6
Celsius value 37.0
Kelvin value: 310.15
###Markdown
6. Problem Statement Write a program to demonstrate all the available datatypes in Python. Hint: Use type() function.
###Code
Integer = 24
Float = 45.2
Complex = 1_3j
String = "Hello World"
List = [1,2,2.9,3.6,4,5,6]
Tuples =(1,2,3.5,4,5)
Set = {1,1.9,2,3,2.5}
Dictionary = {1:'Lets',2:'Upgrade'}
print(Integer,'is',type(Integer))
print(Float,'is',type(Float))
print(Complex,'is',type(Complex))
print(String,'is',type(String))
print(List,'is',type(List))
print(Tuples,'is',type(Tuples))
print(Set,'is',type(Set))
print(Dictionary,'is',type(Dictionary))
###Output
24 is <class 'int'>
45.2 is <class 'float'>
13j is <class 'complex'>
Hello World is <class 'str'>
[1, 2, 2.9, 3.6, 4, 5, 6] is <class 'list'>
(1, 2, 3.5, 4, 5) is <class 'tuple'>
{1, 2.5, 2, 3, 1.9} is <class 'set'>
{1: 'Lets', 2: 'Upgrade'} is <class 'dict'>
###Markdown
Question 1:
###Code
altitude = input("Enter the altitude:")
altitude=int(altitude)
if altitude<=1000:
print("Safe to Land")
elif altitude>1000 and altitude<=5000:
print("Bring down to 1000")
else:
print("Turn around and try later")
###Output
Enter the altitude:900
Safe to Land
###Markdown
Question 2:
###Code
for num in range(1,201):
if num>1:
for i in range(2,num):
if num%i==0:
break
else:
print(num)
###Output
2
3
5
7
11
13
17
19
23
29
31
37
41
43
47
53
59
61
67
71
73
79
83
89
97
101
103
107
109
113
127
131
137
139
149
151
157
163
167
173
179
181
191
193
197
199
###Markdown
Question 1: You are all pilots who want to land a plane safely. The altitude required for landing a plane is 1000 ft. If the altitude is less than that, tell the pilot to land the plane; if it is more than that but less than 5000 ft, ask the pilot to “come down to 1000 ft”; and if it is more than 5000 ft, ask the pilot to “go around and try later”.
###Code
userInput = int(input("Enter the altitude of the Plane to land safely:"))
if userInput <= 1000:
print("Safe to Land")
elif userInput >1000 and userInput<=5000:
print("Bring down to 1000")
else:
print("Turn Around")
###Output
Enter the altitude of the Plane to land safely:5001
Turn Around
###Markdown
Question 2: Using a for loop and the range function, print all the prime numbers between 1 and 200.
###Code
for number in range(1,201):
if number > 1:
for i in range(2,number):
if number%i == 0:
break
else:
print("Prime Number:%d"%number)
###Output
Prime Number:2
Prime Number:3
Prime Number:5
Prime Number:7
Prime Number:11
Prime Number:13
Prime Number:17
Prime Number:19
Prime Number:23
Prime Number:29
Prime Number:31
Prime Number:37
Prime Number:41
Prime Number:43
Prime Number:47
Prime Number:53
Prime Number:59
Prime Number:61
Prime Number:67
Prime Number:71
Prime Number:73
Prime Number:79
Prime Number:83
Prime Number:89
Prime Number:97
Prime Number:101
Prime Number:103
Prime Number:107
Prime Number:109
Prime Number:113
Prime Number:127
Prime Number:131
Prime Number:137
Prime Number:139
Prime Number:149
Prime Number:151
Prime Number:157
Prime Number:163
Prime Number:167
Prime Number:173
Prime Number:179
Prime Number:181
Prime Number:191
Prime Number:193
Prime Number:197
Prime Number:199
###Markdown
You are all pilots who want to land a plane safely. The altitude required for landing a plane is 1000 ft. If the altitude is less than that, tell the pilot to land the plane; if it is more than that but less than 5000 ft, ask the pilot to “come down to 1000 ft”; and if it is more than 5000 ft, ask the pilot to “go around and try later”.
###Code
alt=int(input("Enter altitude:"))
if alt<=1000:
print("Safe to land")
elif alt>1000 and alt<5000:
print(" Bring down to 1000")
else:
print(" Turn Around")
###Output
Enter altitude:4000
Bring down to 1000
###Markdown
Using a for loop and the range function, print all the prime numbers between 1 and 200.
###Code
for i in range(2,201):
for j in range(2,i):
if i%j==0:
break
else:
print(i)
###Output
2
3
5
7
11
13
17
19
23
29
31
37
41
43
47
53
59
61
67
71
73
79
83
89
97
101
103
107
109
113
127
131
137
139
149
151
157
163
167
173
179
181
191
193
197
199
|
examples/simulations/percolation/A_ordinary_percolation.ipynb | ###Markdown
Ordinary Percolation OpenPNM contains several percolation algorithms which are central to the multiphase models employed by pore networks. The essential idea is to identify pathways for fluid flow through the network using the entry capillary pressure as a threshold for passage between connected pores. The capillary pressure can either be associated with the pores themselves, known as ``site percolation``, or with the connecting throats, known as ``bond percolation``, or a mixture of both. OpenPNM provides several models for calculating the entry pressure for a given pore or throat and it generally depends on the size of the pore or throat and the wettability to a particular phase, characterised by the contact angle. If a pathway through the network connects pores into clusters that contain both an inlet and an outlet then it is deemed to be ``percolating``. In this example we will demonstrate ``Ordinary Percolation``, which is the fastest and simplest algorithm to run. The number of steps involved in the algorithm is equal to the number of points that are specified in the run method. This can either be an integer, in which case the minimum and maximum capillary entry pressures in the network are used as limits and the integer value is used to create that number of intervals between the limits, or an array of specified pressures can be supplied. The algorithm progresses incrementally from low pressure to high. At each step, clusters of connected pores are found with entry pressures below the current threshold and those that are not already invaded and connected to an inlet are set to be invaded at this pressure. Therefore, the process is quasistatic and represents the steady state saturation that would be achieved if the inlet pressure were to be held at that threshold. First, do our imports.
###Code
import openpnm as op
%config InlineBackend.figure_formats = ['svg']
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(10)
from ipywidgets import interact, IntSlider
%matplotlib inline
mpl.rcParams["image.interpolation"] = "None"
ws = op.Workspace()
ws.settings["loglevel"] = 40
###Output
_____no_output_____
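###Markdown
Before building the OpenPNM objects, here is a minimal, purely conceptual sketch of the quasistatic thresholding idea described above. This is not OpenPNM code: the five-pore chain, its throat entry pressures and the inlet/outlet choice are invented for illustration only.
```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

conns = np.array([[0, 1], [1, 2], [2, 3], [3, 4]])  # throat connectivity (pore pairs)
Pc_entry = np.array([1e3, 5e3, 2e3, 4e3])           # hypothetical throat entry pressures [Pa]
inlet, outlet = 0, 4

for Pc in [1.5e3, 3e3, 6e3]:                        # increasing applied pressure thresholds
    open_throats = conns[Pc_entry <= Pc]            # throats invaded at this threshold
    adj = coo_matrix((np.ones(len(open_throats)),
                      (open_throats[:, 0], open_throats[:, 1])), shape=(5, 5))
    _, labels = connected_components(adj, directed=False)
    print(f"Pc = {Pc:.0f} Pa -> percolating: {labels[inlet] == labels[outlet]}")
```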
###Markdown
Create a 2D Cubic network with a standard PSD, define the phase as Water, and use Standard physics, which implements the Washburn capillary pressure relation for throat entry pressure.
###Code
N = 100
net = op.network.Cubic(shape=[N, N, 1], spacing=2.5e-5)
geom = op.geometry.SpheresAndCylinders(network=net, pores=net.Ps, throats=net.Ts)
water = op.phases.Water(network=net)
phys = op.physics.Standard(network=net, phase=water, geometry=geom)
###Output
_____no_output_____
###Markdown
We can check the model by looking at the model dict on the phys object
###Code
phys.models['throat.entry_pressure']
###Output
_____no_output_____
###Markdown
Now set up and run the algorithm, choosing the left and right sides of the network for inlets and outlets respectively. Because we did not set up the network with zero-volume boundary pores, a small warning is given: the starting saturation for the algorithm is not zero. However, this is fine, and because the network is quite large the starting saturation is actually quite close to zero.
###Code
alg = op.algorithms.OrdinaryPercolation(network=net, phase=water)
alg.settings._update({'pore_volume': 'pore.volume',
'throat_volume': 'throat.volume'})
alg.set_inlets(pores=net.pores('left'))
alg.set_outlets(pores=net.pores('right'))
alg.run(points=1000)
alg.plot_intrusion_curve()
plt.show()
###Output
_____no_output_____
###Markdown
The algorithm completes very quickly and the invading phase saturation can be plotted versus the applied boundary pressure.
###Code
data = alg.get_intrusion_data()
mask = np.logical_and(np.asarray(data.Snwp) > 0.0 , np.asarray(data.Snwp) < 1.0)
mask = np.argwhere(mask).flatten()
pressures = np.asarray(data.Pcap)[mask]
###Output
_____no_output_____
###Markdown
As the network is 2D and cubic we can easily plot the invading phase configuration at the different invasion steps
###Code
def plot_saturation(step):
arg = mask[step]
Pc = np.ceil(data.Pcap[arg])
sat = np.around(data.Snwp[arg], 3)
is_perc = alg.is_percolating(Pc)
pmask = alg['pore.invasion_pressure'] <= Pc
im = pmask.reshape([N, N])
fig, ax = plt.subplots(figsize=[5, 5])
ax.imshow(im, cmap='Blues');
title = f"Capillary pressure: {Pc:.0f}, saturation: {sat:.2f}, percolating: {is_perc}"
ax.set_title(title)
plt.show()
perc_thresh = alg.get_percolation_threshold()
thresh_step = np.argwhere(np.asarray(pressures) == perc_thresh)
slider = IntSlider(min=0, max=len(mask)-1, step=1, value=thresh_step)
interact(plot_saturation, step=slider);
###Output
_____no_output_____ |
notebooks/T07G_Gradient_Descent_Optimization.ipynb | ###Markdown
Smoothing with exponentially weighted averages
###Code
n = 50
x = np.arange(n) * np.pi
y = np.cos(x) * np.exp(x/100) - 10*np.exp(-0.01*x)
###Output
_____no_output_____
###Markdown
Exponentially weighted average: The exponentially weighted average adds a fraction $\beta$ of the current value to a leaky running sum of past values. Effectively, the contribution from the $t-n$th value is scaled by$$\beta^n(1 - \beta)$$For example, here are the contributions to the current value after 5 iterations (iteration 5 is the current iteration): | iteration | contribution || --- | --- || 1 | $\beta^4(1 - \beta)$ || 2 | $\beta^3(1 - \beta)$ || 3 | $\beta^2(1 - \beta)$ || 4 | $\beta^1(1 - \beta)$ || 5 | $(1 - \beta)$ | Since $\beta \lt 1$, the contribution decreases exponentially with the passage of time. Effectively, this acts as a smoother for a function.
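As a small arithmetic check of the table above (illustrative only, and separate from the functions defined next), the weights $\beta^n(1 - \beta)$ plus the weight $\beta^t$ left on the initial value of zero sum to one:
```python
# Illustrative check of the contribution table above (beta = 0.9, t = 5 iterations).
import numpy as np
beta, t = 0.9, 5
weights = np.array([beta**n * (1 - beta) for n in range(t)])  # n = 0 is the current value
print(weights)                  # matches the table read from iteration 5 back to iteration 1
print(weights.sum() + beta**t)  # 1.0 (up to floating-point rounding)
```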
###Code
def ewa(y, beta):
"""Exponentially weighted average."""
n = len(y)
zs = np.zeros(n)
z = 0
for i in range(n):
z = beta*z + (1 - beta)*y[i]
zs[i] = z
return zs
###Output
_____no_output_____
###Markdown
Exponentially weighted average with bias correction: Since the EWA starts from 0, there is an initial bias. This can be corrected by scaling with $$\frac{1}{1 - \beta^t}$$ where $t$ is the iteration number.
###Code
def ewabc(y, beta):
"""Exponentially weighted average with bias correction."""
n = len(y)
zs = np.zeros(n)
z = 0
for i in range(n):
z = beta*z + (1 - beta)*y[i]
zc = z/(1 - beta**(i+1))
zs[i] = zc
return zs
beta = 0.9
plt.plot(x, y, 'o-')
plt.plot(x, ewa(y, beta), c='red', label='EWA')
plt.plot(x, ewabc(y, beta), c='orange', label='EWA with bias correction')
plt.legend()
pass
###Output
_____no_output_____
###Markdown
Momentum in 1D: Momentum comes from physics, where the contribution of the gradient is to the velocity, not the position. Hence we create an accessory variable $v$ and increment it with the gradient. The position is then updated with the velocity in place of the gradient. The analogy is that we can think of the parameter $x$ as a particle in an energy well with potential energy $U = mgh$ where $h$ is given by our objective function $f$. The force generated is a function of the rate of change of potential energy $F \propto \nabla U \propto \nabla f$, and we use $F = ma$ to get that the acceleration $a \propto \nabla f$. Finally, we integrate $a$ over time to get the velocity $v$ and integrate $v$ to get the displacement $x$. Note that we need to damp the velocity, otherwise the particle would just oscillate forever. We use a version of the update that simply treats the velocity as an exponentially weighted average, popularized by Andrew Ng in his Coursera course. This is the same as the momentum scheme motivated by physics with some rescaling of constants.
###Code
def f(x):
return x**2
def grad(x):
return 2*x
def gd(x, grad, alpha, max_iter=10):
xs = np.zeros(1 + max_iter)
xs[0] = x
for i in range(max_iter):
x = x - alpha * grad(x)
xs[i+1] = x
return xs
def gd_momentum(x, grad, alpha, beta=0.9, max_iter=10):
xs = np.zeros(1 + max_iter)
xs[0] = x
v = 0
for i in range(max_iter):
v = beta*v + (1-beta)*grad(x)
        vc = v/(1 - beta**(i+1))
x = x - alpha * vc
xs[i+1] = x
return xs
###Output
_____no_output_____
###Markdown
Gradient descent with moderate step size
###Code
alpha = 0.1
x0 = 1
xs = gd(x0, grad, alpha)
xp = np.linspace(-1.2, 1.2, 100)
plt.plot(xp, f(xp))
plt.plot(xs, f(xs), 'o-', c='red')
for i, (x, y) in enumerate(zip(xs, f(xs)), 1):
plt.text(x, y+0.2, i,
bbox=dict(facecolor='yellow', alpha=0.5), fontsize=14)
pass
###Output
_____no_output_____
###Markdown
Gradient descent with large step size: When the step size is too large, gradient descent can oscillate and even diverge.
###Code
alpha = 0.95
xs = gd(1, grad, alpha)
xp = np.linspace(-1.2, 1.2, 100)
plt.plot(xp, f(xp))
plt.plot(xs, f(xs), 'o-', c='red')
for i, (x, y) in enumerate(zip(xs, f(xs)), 1):
plt.text(x*1.2, y, i,
bbox=dict(facecolor='yellow', alpha=0.5), fontsize=14)
pass
###Output
_____no_output_____
###Markdown
Gradient descent with momentum: Momentum results in cancellation of gradient changes in opposite directions, and hence damps out oscillations while amplifying consistent changes in the same direction. This is perhaps clearer in the 2D example below.
###Code
alpha = 0.95
xs = gd_momentum(1, grad, alpha, beta=0.9)
xp = np.linspace(-1.2, 1.2, 100)
plt.plot(xp, f(xp))
plt.plot(xs, f(xs), 'o-', c='red')
for i, (x, y) in enumerate(zip(xs, f(xs)), 1):
plt.text(x, y+0.2, i,
bbox=dict(facecolor='yellow', alpha=0.5), fontsize=14)
pass
###Output
_____no_output_____
###Markdown
Momentum and RMSprop in 2D
###Code
def f2(x):
return x[0]**2 + 100*x[1]**2
def grad2(x):
return np.array([2*x[0], 200*x[1]])
x = np.linspace(-1.2, 1.2, 100)
y = np.linspace(-1.2, 1.2, 100)
X, Y = np.meshgrid(x, y)
levels = [0.1,1,2,4,9, 16, 25, 36, 49, 64, 81, 100]
Z = x**2 + 100*Y**2
c = plt.contour(X, Y, Z, levels)
pass
def gd2(x, grad, alpha, max_iter=10):
xs = np.zeros((1 + max_iter, x.shape[0]))
xs[0,:] = x
for i in range(max_iter):
x = x - alpha * grad(x)
xs[i+1,:] = x
return xs
def gd2_momentum(x, grad, alpha, beta=0.9, max_iter=10):
xs = np.zeros((1 + max_iter, x.shape[0]))
xs[0, :] = x
v = 0
for i in range(max_iter):
v = beta*v + (1-beta)*grad(x)
        vc = v/(1 - beta**(i+1))
x = x - alpha * vc
xs[i+1, :] = x
return xs
###Output
_____no_output_____
###Markdown
Gradient descent with large step size: We get severe oscillations.
###Code
alpha = 0.01
x0 = np.array([-1,-1])
xs = gd2(x0, grad2, alpha, max_iter=75)
x = np.linspace(-1.2, 1.2, 100)
y = np.linspace(-1.2, 1.2, 100)
X, Y = np.meshgrid(x, y)
levels = [0.1,1,2,4,9, 16, 25, 36, 49, 64, 81, 100]
Z = x**2 + 100*Y**2
c = plt.contour(X, Y, Z, levels)
plt.plot(xs[:, 0], xs[:, 1], 'o-', c='red')
plt.title('Vanilla gradient descent')
pass
###Output
_____no_output_____
###Markdown
Gradient descent with momentum: The damping effect is clear.
###Code
alpha = 0.01
x0 = np.array([-1,-1])
xs = gd2_momentum(x0, grad2, alpha, beta=0.9, max_iter=75)
x = np.linspace(-1.2, 1.2, 100)
y = np.linspace(-1.2, 1.2, 100)
X, Y = np.meshgrid(x, y)
levels = [0.1,1,2,4,9, 16, 25, 36, 49, 64, 81, 100]
Z = x**2 + 100*Y**2
c = plt.contour(X, Y, Z, levels)
plt.plot(xs[:, 0], xs[:, 1], 'o-', c='red')
plt.title('Gradient descent with momentum')
pass
###Output
_____no_output_____
###Markdown
Gradient descent with RMSprop: RMSprop scales the learning rate in each direction by the square root of the exponentially weighted sum of squared gradients. Near a saddle or any plateau, there are directions where the gradient is very small - RMSprop encourages larger steps in those directions, allowing faster escape.
###Code
def gd2_rmsprop(x, grad, alpha, beta=0.9, eps=1e-8, max_iter=10):
xs = np.zeros((1 + max_iter, x.shape[0]))
xs[0, :] = x
v = 0
for i in range(max_iter):
v = beta*v + (1-beta)*grad(x)**2
x = x - alpha * grad(x) / (eps + np.sqrt(v))
xs[i+1, :] = x
return xs
alpha = 0.1
x0 = np.array([-1,-1])
xs = gd2_rmsprop(x0, grad2, alpha, beta=0.9, max_iter=10)
x = np.linspace(-1.2, 1.2, 100)
y = np.linspace(-1.2, 1.2, 100)
X, Y = np.meshgrid(x, y)
levels = [0.1,1,2,4,9, 16, 25, 36, 49, 64, 81, 100]
Z = x**2 + 100*Y**2
c = plt.contour(X, Y, Z, levels)
plt.plot(xs[:, 0], xs[:, 1], 'o-', c='red')
plt.title('Gradient descent with RMSprop')
pass
###Output
_____no_output_____
###Markdown
ADAM: ADAM (Adaptive Moment Estimation) combines the ideas of momentum, RMSprop and bias correction. It is probably the most popular gradient descent method in current deep learning practice.
###Code
def gd2_adam(x, grad, alpha, beta1=0.9, beta2=0.999, eps=1e-8, max_iter=10):
xs = np.zeros((1 + max_iter, x.shape[0]))
xs[0, :] = x
m = 0
v = 0
for i in range(max_iter):
m = beta1*m + (1-beta1)*grad(x)
v = beta2*v + (1-beta2)*grad(x)**2
        mc = m/(1 - beta1**(i+1))
        vc = v/(1 - beta2**(i+1))
        x = x - alpha * mc / (eps + np.sqrt(vc))
xs[i+1, :] = x
return xs
alpha = 0.1
x0 = np.array([-1,-1])
xs = gd2_adam(x0, grad2, alpha, beta1=0.9, beta2=0.9, max_iter=10)
x = np.linspace(-1.2, 1.2, 100)
y = np.linspace(-1.2, 1.2, 100)
X, Y = np.meshgrid(x, y)
levels = [0.1,1,2,4,9, 16, 25, 36, 49, 64, 81, 100]
Z = x**2 + 100*Y**2
c = plt.contour(X, Y, Z, levels)
plt.plot(xs[:, 0], xs[:, 1], 'o-', c='red')
plt.title('Gradient descent with ADAM')
pass
###Output
_____no_output_____
###Markdown
Implementing a custom optimization routine for `scipy.optimize`: Gradient descent is not one of the methods available in `scipy.optimize`. However, we can implement our own version by following the API of the `minimize` function.
###Code
import scipy.optimize as opt
import scipy.linalg as la
def custmin(fun, x0, args=(), maxfev=None, alpha=0.0002,
maxiter=100000, tol=1e-10, callback=None, **options):
"""Implements simple gradient descent for the Rosen function."""
bestx = x0
bestf = fun(x0)
funcalls = 1
niter = 0
improved = True
stop = False
while improved and not stop and niter < maxiter:
niter += 1
# the next 2 lines are gradient descent
step = alpha * rosen_der(bestx)
bestx = bestx - step
bestf = fun(bestx)
funcalls += 1
if la.norm(step) < tol:
improved = False
if callback is not None:
callback(bestx)
if maxfev is not None and funcalls >= maxfev:
stop = True
break
return opt.OptimizeResult(fun=bestf, x=bestx, nit=niter,
nfev=funcalls, success=(niter > 1))
def reporter(p):
"""Reporter function to capture intermediate states of optimization."""
global ps
ps.append(p)
###Output
_____no_output_____
###Markdown
Test on Rosenbrock banana function We will use the [Rosenbrock "banana" function](http://en.wikipedia.org/wiki/Rosenbrock_function) to illustrate unconstrained multivariate optimization. In 2D, this is$$f(x, y) = b(y - x^2)^2 + (a - x)^2$$The function has a global minimum at (1,1) and the standard expression takes $a = 1$ and $b = 100$. Conditioning of the optimization problem: With these values for $a$ and $b$, the problem is ill-conditioned. As we shall see, one of the factors affecting the ease of optimization is the condition number of the curvature (Hessian). When the condition number is high, the gradient may not point in the direction of the minimum, and simple gradient descent methods may be inefficient since they may be forced to take many sharp turns. For the 2D version, we have$$f(x) = 100(y - x^2)^2 + (1 - x)^2$$and can calculate the Hessian, evaluated at the minimum $(1,1)$, to be $$\begin{bmatrix}802 & -400 \\-400 & 200\end{bmatrix}$$
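As a quick check of the matrix quoted above (illustrative only; it assumes `sympy` is installed, which is not otherwise used in this notebook), the Hessian of the 2D Rosenbrock function at $(1,1)$ can be computed symbolically:
```python
# Symbolic check of the Hessian at (1, 1); sympy is assumed to be available.
import sympy as sp
xs, ys = sp.symbols("x y")
f = 100 * (ys - xs**2) ** 2 + (1 - xs) ** 2
print(sp.hessian(f, [xs, ys]).subs({xs: 1, ys: 1}))  # Matrix([[802, -400], [-400, 200]])
```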
###Code
H = np.array([
[802, -400],
[-400, 200]
])
np.linalg.cond(H)
U, s, Vt = np.linalg.svd(H)
s[0]/s[1]
###Output
_____no_output_____
###Markdown
Function to minimize
###Code
def rosen(x):
"""Generalized n-dimensional version of the Rosenbrock function"""
return sum(100*(x[1:]-x[:-1]**2.0)**2.0 +(1-x[:-1])**2.0)
def rosen_der(x):
"""Derivative of generalized Rosen function."""
xm = x[1:-1]
xm_m1 = x[:-2]
xm_p1 = x[2:]
der = np.zeros_like(x)
der[1:-1] = 200*(xm-xm_m1**2) - 400*(xm_p1 - xm**2)*xm - 2*(1-xm)
der[0] = -400*x[0]*(x[1]-x[0]**2) - 2*(1-x[0])
der[-1] = 200*(x[-1]-x[-2]**2)
return der
###Output
_____no_output_____
###Markdown
Why is the condition number so large?
###Code
x = np.linspace(-5, 5, 100)
y = np.linspace(-5, 5, 100)
X, Y = np.meshgrid(x, y)
Z = rosen(np.vstack([X.ravel(), Y.ravel()])).reshape((100,100))
# Note: the global minimum is at (1,1) in a tiny contour island
plt.contour(X, Y, Z, np.arange(10)**5, cmap='jet')
plt.text(1, 1, 'x', va='center', ha='center', color='red', fontsize=20)
pass
###Output
_____no_output_____
###Markdown
Zooming in to the global minimum at (1,1)
###Code
x = np.linspace(0, 2, 100)
y = np.linspace(0, 2, 100)
X, Y = np.meshgrid(x, y)
Z = rosen(np.vstack([X.ravel(), Y.ravel()])).reshape((100,100))
plt.contour(X, Y, Z, [rosen(np.array([k, k])) for k in np.linspace(1, 1.5, 10)], cmap='jet')
plt.text(1, 1, 'x', va='center', ha='center', color='red', fontsize=20)
pass
###Output
_____no_output_____
###Markdown
We will use our custom gradient descent to minimize the banana function. Helpful hint: one of the most common causes of optimization failure is that the gradient or Hessian function is specified incorrectly. You can check for this using `check_grad`, which compares the analytical gradient with one calculated using finite differences.
###Code
from scipy.optimize import check_grad
for x in np.random.uniform(-2,2,(10,2)):
print(x, check_grad(rosen, rosen_der, x))
# Initial starting position
x0 = np.array([4,-4.1])
ps = [x0]
opt.minimize(rosen, x0, method=custmin, callback=reporter)
x = np.linspace(-5, 5, 100)
y = np.linspace(-5, 5, 100)
X, Y = np.meshgrid(x, y)
Z = rosen(np.vstack([X.ravel(), Y.ravel()])).reshape((100,100))
ps = np.array(ps)
plt.figure(figsize=(12,4))
plt.subplot(121)
plt.contour(X, Y, Z, np.arange(10)**5, cmap='jet')
plt.plot(ps[:, 0], ps[:, 1], '-ro')
plt.subplot(122)
plt.semilogy(range(len(ps)), rosen(ps.T))
pass
###Output
_____no_output_____
###Markdown
Comparison with standard algorithms: Note that all these methods take far fewer function iterations and function evaluations to find the minimum compared with vanilla gradient descent. Many of these are based on estimating the Newton direction. Recall Newton's method for finding roots of a univariate function$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}$$When we are looking for a minimum, we are looking for the roots of the *derivative* $f'(x)$, so$$x_{k+1} = x_k - \frac{f'(x_k)}{f''(x_k)}$$Newton's method can also be seen as a Taylor series approximation$$f(x+h) = f(x) + h f'(x) + \frac{h^2}{2}f''(x)$$At the function minimum, the derivative is 0, so\begin{align}\frac{f(x+h) - f(x)}{h} &= f'(x) + \frac{h}{2}f''(x) \\0 &= f'(x) + \frac{h}{2}f''(x) \end{align}and letting $\Delta x = \frac{h}{2}$, we get that the Newton step is$$\Delta x = - \frac{f'(x)}{f''(x)}$$The multivariate analog replaces $f'$ with the Jacobian and $f''$ with the Hessian, so the Newton step is$$\Delta x = -H^{-1}(x) \nabla f(x)$$Slightly more rigorously, we can optimize the quadratic multivariate Taylor expansion $$f(x + p) = f(x) + p^T\nabla f(x) + \frac{1}{2}p^TH(x)p$$Differentiating with respect to the direction vector $p$ and setting to zero, we get$$H(x)p = -\nabla f(x)$$giving$$p = -H(x)^{-1}\nabla f(x)$$
###Code
from scipy.optimize import rosen, rosen_der, rosen_hess
###Output
_____no_output_____
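###Markdown
As a quick, hedged illustration of the Newton direction $p = -H(x)^{-1}\nabla f(x)$ derived above, a single Newton step from the starting point used elsewhere in this notebook already reduces the Rosenbrock function dramatically (in practice, Newton-type methods add safeguards such as line searches):
```python
# One Newton step for the 2D Rosenbrock function, using the scipy.optimize helpers.
import numpy as np
from scipy.optimize import rosen, rosen_der, rosen_hess

x = np.array([4.0, -4.1])
p = -np.linalg.solve(rosen_hess(x), rosen_der(x))  # Newton direction/step
print(rosen(x), rosen(x + p))                      # f drops from about 40410 to about 9
```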
###Markdown
Nelder-Mead: There are some optimization algorithms not based on the Newton method, but on other heuristic search strategies that do not require any derivatives, only function evaluations. One well-known example is the Nelder-Mead simplex algorithm.
###Code
ps = [x0]
opt.minimize(rosen, x0, method='nelder-mead', callback=reporter)
ps = np.array(ps)
plt.figure(figsize=(12,4))
plt.subplot(121)
plt.contour(X, Y, Z, np.arange(10)**5, cmap='jet')
plt.plot(ps[:, 0], ps[:, 1], '-ro')
plt.subplot(122)
plt.semilogy(range(len(ps)), rosen(ps.T));
###Output
_____no_output_____
###Markdown
BFGS: As calculating the Hessian is computationally expensive, sometimes first-order methods that only use the first derivatives are preferred. Quasi-Newton methods use functions of the first derivatives to approximate the inverse Hessian. A well-known example of the quasi-Newton class of algorithms is BFGS, named after the initials of its creators. As usual, the first derivatives can either be provided via the `jac=` argument or approximated by finite difference methods.
###Code
ps = [x0]
opt.minimize(rosen, x0, method='BFGS', jac=rosen_der, callback=reporter)
ps = np.array(ps)
plt.figure(figsize=(12,4))
plt.subplot(121)
plt.contour(X, Y, Z, np.arange(10)**5, cmap='jet')
plt.plot(ps[:, 0], ps[:, 1], '-ro')
plt.subplot(122)
plt.semilogy(range(len(ps)), rosen(ps.T))
pass
###Output
_____no_output_____
###Markdown
Newton-CG: Second order methods solve for $H^{-1}$ and so require calculation of the Hessian (either provided or approximated using finite differences). For efficiency reasons, the Hessian is not directly inverted, but solved for using a variety of methods such as conjugate gradient. An example of a second order method in the `optimize` package is `Newton-CG`.
###Code
ps = [x0]
opt.minimize(rosen, x0, method='Newton-CG', jac=rosen_der, hess=rosen_hess, callback=reporter)
ps = np.array(ps)
plt.figure(figsize=(12,4))
plt.subplot(121)
plt.contour(X, Y, Z, np.arange(10)**5, cmap='jet')
plt.plot(ps[:, 0], ps[:, 1], '-ro')
plt.subplot(122)
plt.semilogy(range(len(ps)), rosen(ps.T))
pass
###Output
_____no_output_____
###Markdown
Gradient Descent Optimizations: Mini-batch and stochastic gradient descent are widely used in deep learning, where the large number of parameters and limited memory make the use of more sophisticated optimization methods impractical. Many methods have been proposed to accelerate gradient descent in this context, and here we sketch the ideas behind some of the most popular algorithms.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____ |
_notebooks/2021-01-18-mil_sac_data_prep.ipynb | ###Markdown
"Data Cleaning & Pipelines"> "It's not the sexiest part of data science but it is probably the most important"- toc: true- branch: master- badges: true- comments: true- categories: [data cleaning, data preparation, pandas.pipe, NBA]- image: https://media.giphy.com/media/Qvpxb0bju1rEp9Nipy/giphy.gif The Jist: Data Cleaning is critical before developing a modelThe data exploration post showed how to use knowledge about a dataset to interpret information. Since we know how the 2017-2019 seasons went for the Milwuakee Bucks and Sacramento Kings we can now plan out our machine learning problem. The machine learning model will attempt to predict the outcome of an NBA game before it actually occurs. We can start with using a logistic regression model to get a probabilistic output but we can look into other classification models after we give this one a go. This article outlines the most imperative portion of a machine learning project, outlining the problem and preparing the data. Part 1: Data Exploration This post is a continuation of the data exploration post where we explored the 2017-2019 seasons for the Milwuakee Bucks and the Sacramento Kings. Feel free to hop out and pop back in if you want to see the data described and explored: [Part 1 post: Data Exploration with NBA Data](https://dpendleton22.github.io/valuebyerror/data%20exploration/box%20plots/histograms/nba/2020/01/14/nba-analysis-post.html) 
###Code
#hide
import os
from pathlib import Path
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Set all the necessary paths for the data The data was provided by https://www.basketball-reference.com/. They are a great source for anyone interested in sports analytics as an initial introduction. I can go into details later within the project to note the importance of detail in sports data. Using the pathlib library from the Python standard library, it's straightforward to get all the data file paths set. Setting a base folder name is a good method to simply call each dataset path by its name. Another method to get each dataset path would be to use the glob library to search the dataset folder for files with csv extensions, as sketched below.
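A quick sketch of that glob alternative (assuming the notebook runs from the directory that contains the mil_sac_data folder; it isn't used in the rest of the post):
```python
# Sketch of the glob alternative mentioned above.
import glob
csv_files = sorted(glob.glob("mil_sac_data/*.csv"))  # every CSV in the data folder
```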
###Code
#collapse-hide
DATA_FOLDER = Path(os.getcwd(), 'mil_sac_data')
sac_2017_szn = Path(DATA_FOLDER, 'sac_2017_2018_szn.csv')
sac_2018_szn = Path(DATA_FOLDER, 'sac_2018_2019_szn.csv')
mil_2017_szn = Path(DATA_FOLDER, 'mil_2017_2018_szn.csv')
mil_2018_szn = Path(DATA_FOLDER, 'mil_2018_2019_szn.csv')
###Output
_____no_output_____
###Markdown
Let's review one of the datasets to determine how they all need to be cleaned
###Code
sac_2017_df = pd.read_csv(sac_2017_szn, header=[0,1])
###Output
_____no_output_____
###Markdown
Hold up, why are you setting the header argument? More often than not, calling pd.read_csv("filename") with no additional arguments will read in a dataframe as expected. In this instance, BasketballReference provides two header rows in their csv, so we need to let pandas know in order to process the dataset. Pandas' read_csv() function has over 20 arguments that can be set depending on how the data is parsed and organized in the original file. So if your data is a little funky, the function may still be able to handle it. Pandas read_csv() documentation: [pandas.read_csv()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html)
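A couple of those other arguments, purely for illustration (the values here are arbitrary and not needed for this dataset):
```python
# Illustrative only: a few more read_csv knobs combined with the two-row header.
peek = pd.read_csv(
    sac_2017_szn,
    header=[0, 1],         # two header rows, as above
    nrows=5,               # read only the first five games
    na_values=["--", ""],  # extra strings to treat as missing
)
peek.shape
```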
###Code
#collapse-show
sac_2017_df.iloc[0:5, 0:15]
###Output
_____no_output_____
###Markdown
> Tip: Always view the dimensions of your data before analyzing it
###Code
print (f"This dataset is {len(sac_2017_df)} in length and contains {len(sac_2017_df.columns)} columns")
###Output
This dataset is 82 in length and contains 41 columns
###Markdown
Using df.describe() is an easy and useful way to briefly view the distribution of the dataset across all the columns. This dataset has 82 rows, which makes sense because there are 82 games in a regular season, and 41 columns.
###Code
#collapse-hide
sac_2017_df.iloc[0:5, 0:15].describe()
###Output
_____no_output_____
###Markdown
Merge multi-index headers and remove unwanted tags. Instead of indexing columns with this notation, `sac_2017_df[('Unnamed: 0_level_0', 'Rk')]`, we need to merge the header columns to allow for this type of indexing: `sac_2017_df['Rk']`. Let's do a quick magic wave of the hand and merge these headers together. Before:
###Code
sac_2017_df.columns[5:15]
merged_columns = sac_2017_df.columns.map('.'.join)
###Output
_____no_output_____
###Markdown
 After:
###Code
#collapse-hide
merged_columns[5:15]
###Output
_____no_output_____
###Markdown
> Note: Lets break that piece of code above down for a sec: sac_2017_df.columns.map('.'.join) is calling the [str.join()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.join.html) function where the str is '.' for each column with the [.map()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html) function Now with the columns merged, we can keep the prefixed descriptions such as Team and Opponent so we know whose stats we're viewing but prefixes like 'Unnamed: 0_level_0' are no use to us. We can use regular expressions to remove the unneeded text in some of our column names
###Code
sac_2017_df.columns = merged_columns.str.replace(r"Unnamed:\ [0-9]_level_[0-9].", '', regex=True)
sac_2017_df.iloc[0:5, 0:15]
###Output
_____no_output_____
###Markdown
There is still an 'Unnmaed: 3_level_1' tag after the regex processing which represents if the team of interest was playing home or away. We won't even be using this column as is so we can just process our new column and drop 'Unnamed: 3_level_1' after.The existing column consists of discreet values 'NaN' or @ indication if the team was playing at home or away for this instance. We can simply check if the row value is NaN using the .isnull() function in pandas and set those values as a new column
###Code
sac_2017_df['playing_home'] = sac_2017_df['Unnamed: 3_level_1'].isnull()
###Output
_____no_output_____
###Markdown
Now that we have our column we can simply drop the existing "Unnamed: 3_level_1" column because "playing_home" represents the same thing now but with true and false values
###Code
sac_2017_df.drop(columns=['Unnamed: 3_level_1'], inplace=True)
sac_2017_df.iloc[0:5, 0:15]
###Output
_____no_output_____
###Markdown
In order to prepare this data for a logistic regression model, we will also need to convert the non-numeric columns we plan to use into numerical values. Specifically, we convert the column of interest, "W/L", to a numeric representation
###Code
sac_2017_df['dub'] = sac_2017_df['W/L'] == 'W'
###Output
_____no_output_____
###Markdown
True values in this new column mean that the team of interest got the dub, or the Wu as Mastah Killah would say WuTang ATLUnited
###Code
sac_2017_df.iloc[0:5, -10:]
###Output
_____no_output_____
###Markdown
Might as well make a pipeline. We have established, at least, our first pass at preparing the dataset. Since we will have to prepare the other dataframes in a similar way, we can streamline this by creating a data pipeline. This pipeline will take each original dataframe in and run the same preprocessing steps, which ensures everything goes through the same steps. Pipelines are not required, but they will help you stay organized. To make a pipeline, we'll need to turn the previous steps into a function that each dataframe can be passed through.
###Code
def data_pipeline(df):
test = df.columns.map('.'.join).str.strip('.')
df.columns = test.str.replace(r"Unnamed:\ [0-9]_level_[0-9].", '', regex=True)
df['playing_home'] = df['Unnamed: 3_level_1'].isnull()
df.drop(columns=['Unnamed: 3_level_1'], inplace=True)
df['dub'] = df['W/L'] == 'W'
df.drop(columns=['W/L'], inplace=True)
return df
###Output
_____no_output_____
###Markdown
Running the pipeline. We can consolidate the duplicated preprocessing lines into a single function and process all similar datasets with it. The code below reads in each dataset and immediately uses the pandas .pipe() function, passing in the preprocessing function. Though we don't use it here, the [pandas.DataFrame.pipe()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pipe.html?highlight=pipepandas.DataFrame.pipe) function also allows positional and keyword arguments to be passed in along with the function to run, as sketched below.
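A tiny, hypothetical sketch of that capability (drop_low_scores and its threshold parameter are invented for illustration and are not part of the pipeline above):
```python
# Hypothetical example: .pipe() forwards extra positional and keyword arguments.
import pandas as pd

def drop_low_scores(df, column, threshold=0):
    return df[df[column] > threshold]

demo = pd.DataFrame({"Tm": [105, 92, 118], "Opp": [99, 101, 110]})
demo.pipe(drop_low_scores, "Tm", threshold=100)  # same as drop_low_scores(demo, "Tm", threshold=100)
```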
###Code
sac_2017_df = pd.read_csv(sac_2017_szn, header=[0, 1]).pipe(data_pipeline)
sac_2018_df = pd.read_csv(sac_2018_szn, header=[0, 1]).pipe(data_pipeline)
mil_2017_df = pd.read_csv(mil_2017_szn, header=[0, 1]).pipe(data_pipeline)
mil_2018_df = pd.read_csv(mil_2018_szn, header=[0, 1]).pipe(data_pipeline)
###Output
_____no_output_____ |
examples/features.ipynb | ###Markdown
Recipipe Features - Snippets: a collection of simple examples showing the strength of Recipipe.
###Code
import numpy as np
import pandas as pd
import recipipe as r
###Output
_____no_output_____
###Markdown
Named output columns: After applying a transformer, output columns have descriptive names. In this example the output columns are "color=blue" and "color=red" instead of "color_0" and "color_1".
###Code
df = pd.DataFrame({"color": ["red", "blue"]})
df
r.onehot().fit_transform(df)
###Output
_____no_output_____
###Markdown
Selecting by dtype
###Code
df = pd.DataFrame({"color": ["red", "blue"], "value": [1, 2]})
df
r.onehot(dtype=object).fit_transform(df)
###Output
_____no_output_____
###Markdown
All numbers dtype
###Code
df = pd.DataFrame({"color": ["red", "blue"], "value_int": [1, 2], "value_float": [0.1, 0.2]})
df
r.scale(dtype=np.number).fit_transform(df)
###Output
_____no_output_____
###Markdown
Exclude by dtype
###Code
df = pd.DataFrame({"color": ["red", "blue"], "value_int": [1, 2], "value_float": [0.1, 0.2]})
df
r.scale(dtype=dict(exclude=object)).fit_transform(df)
###Output
_____no_output_____
###Markdown
Select using fnmatch: You can use fnmatch patterns in any Recipipe transformer.
###Code
df = pd.DataFrame({"feature1": [1, 2], "feature2": [3, 4], "id": ["a", "b"]})
df
r.select("feature*").fit_transform(df)
###Output
_____no_output_____
###Markdown
Missing indicator
###Code
df = pd.DataFrame({"feature": [1, np.nan, 2, np.nan]})
df
r.indicator().fit_transform(df)
###Output
_____no_output_____
###Markdown
Extract: text match indicator in several columns
###Code
df = pd.DataFrame(dict(c=["tone", "one", "none", "lone", "all", "al"]))
df
r.extract(pattern=["one", "ll"], indicator=True, col_format="CONTAINS({column},{value})").fit_transform(df)
###Output
_____no_output_____
###Markdown
Extract: date
###Code
df = pd.DataFrame(dict(date=["2012-02", "2013-03"]))
df
r.recipipe([
r.extract(pattern=["(\d*)-"], col_format="year", keep_original=True),
r.extract("date", pattern=["-(\d*)"], col_format="month"),
r.astype(dtypes=int),
]).fit_transform(df)
###Output
_____no_output_____
###Markdown
Apply transformer by group
###Code
df = pd.DataFrame(dict(name=["a", "a", "a", "b", "b"], value=[0, 1, 2, 0, 1]))
df
r.groupby("name", r.minmax("value")).fit_transform(df)
###Output
_____no_output_____
###Markdown
Concat columns
###Code
df = pd.DataFrame(dict(year=[2020, 2020], month=[1, 2]))
df
r.concat(separator="-").fit_transform(df)
###Output
_____no_output_____
###Markdown
Sum columns
###Code
df = pd.DataFrame(dict(points_match_1=[1, 2, 3], points_match_2=[3, 4, 5], match_id=["a", "b", "c"]))
df
r.sum("points_match_*", col_format="points").fit_transform(df)
###Output
_____no_output_____
###Markdown
Type casting
###Code
df = pd.DataFrame(dict(year=["2012", "2013"]))
df
r.astype(dtypes="int").fit_transform(df)
###Output
_____no_output_____
###Markdown
DistArray: Distributed Arrays for Python========================================[docs.enthought.com/distarray](http://docs.enthought.com/distarray) Setup-----Much of this notebook requires an `IPython.parallel` cluster to be running.Outside the notebook, run```dacluster start -n4```
###Code
# some utility imports
from __future__ import print_function
from pprint import pprint
from matplotlib import pyplot as plt
# main imports
import numpy
import distarray
# reduce precision on printed array values
numpy.set_printoptions(precision=2)
# display figures inline
%matplotlib inline
###Output
_____no_output_____
###Markdown
Software Versions
###Code
print("numpy", numpy.__version__)
import matplotlib
print("matplotlib", matplotlib.__version__)
import h5py
print("h5py", h5py.__version__)
print("distarray", distarray.__version__)
###Output
numpy 1.9.3
matplotlib 1.4.3
h5py 2.5.0
distarray 0.6.0-dev
###Markdown
Set a RandomState: set a `RandomState` so random numpy arrays don't change between runs.
###Code
from numpy.random import RandomState
prng = RandomState(1234567890)
###Output
_____no_output_____
###Markdown
NumPy Arrays------------DistArray is built on NumPy and provides a NumPy-array-like interface. First, let's generate a NumPy array and examine some of its attributes.
###Code
# a 4-row 5-column NumPy array with random contents
nparr = prng.rand(4, 5)
nparr
# NumPy array attributes
print("type:", type(nparr))
print("dtype:", nparr.dtype)
print("ndim:", nparr.ndim)
print("shape:", nparr.shape)
print("itemsize:", nparr.itemsize)
print("nbytes:", nparr.nbytes)
###Output
type: <type 'numpy.ndarray'>
dtype: float64
ndim: 2
shape: (4, 5)
itemsize: 8
nbytes: 160
###Markdown
DistArrays----------We'll make our first `DistArray` out of the NumPy array created above.
###Code
# First we need a `Context` object. More on this later.
# For now, think of this object like the `NumPy` module.
# `Context`s manage the worker engines for us.
from distarray.globalapi import Context
context = Context()
# Make a DistArray from a NumPy array.
# This will push sections of the original NumPy array out
# to the engines.
darr = context.fromarray(nparr)
darr
# Print the array section stored on each engine
for i, a in enumerate(darr.get_localarrays()):
print(i, a)
# DistArrays have similar attributes to NumPy arrays,
print("type:", type(darr))
print("dtype:", darr.dtype)
print("ndim:", darr.ndim)
print("shape:", darr.shape)
print("itemsize:", darr.itemsize)
print("nbytes:", darr.nbytes)
# and some additional attributes.
print("targets:", darr.targets)
print("context:", darr.context)
print("distribution:", darr.distribution)
###Output
targets: [0, 1, 2, 3]
context: <distarray.globalapi.context.IPythonContext object at 0x109482450>
distribution: <distarray.globalapi.maps.Distribution object at 0x10958ba50>
###Markdown
Universal Functions (ufuncs)----------------------------
###Code
# NumPy provides `ufuncs`, or Universal Functions, that operate
# elementwise over NumPy arrays.
numpy.sin(nparr)
# DistArray provides ufuncs as well, for `DistArray`s.
import distarray.globalapi as da
da.sin(darr)
# `toarray` makes a NumPy array out of a DistArray, pulling all of the
# pieces back to the client. We do this to display the contents of the
# DistArray.
da.sin(darr).toarray()
# A NumPy binary ufunc.
nparr + nparr
# The equivalent DistArray ufunc.
# Notice that a new DistArray is created without
# pulling data back to the client.
darr + darr
# Contents of the resulting DistArray.
(darr + darr).toarray()
###Output
_____no_output_____
###Markdown
Reductions----------Functions like `sum`, `mean`, `min`, and `max` are known as *reductions*, since they take an array and produce a smaller array or a scalar. In NumPy and DistArray, some of these functions can be applied over a specific ``axis``.
###Code
# NumPy sum
print("sum:", nparr.sum())
print("sum over an axis:", nparr.sum(axis=1))
# DistArray sum
print("sum:", darr.sum(), darr.sum().toarray())
print("sum over an axis:", darr.sum(axis=1), darr.sum(axis=1).toarray())
###Output
sum: <DistArray(shape=(), targets=[0])> 10.6840911511
sum over an axis: <DistArray(shape=(4,), targets=[0, 1, 2, 3])> [ 3.45 3.12 2.49 1.62]
###Markdown
Indexing and Slicing--------------------DistArrays support standard NumPy Indexing and distributed slicing, including slices with a step. Slicing is currently only supported for Block (and undistributed) DistArrays.
###Code
# Our example array, as a reminder:
darr.toarray()
# The shapes of the local sections of our DistArray
darr.localshapes()
# Return the value of a single element
darr[0, 2]
# Take a column slice
darr_view = darr[:, 3] # all rows, third column
print(darr_view)
print(darr_view.toarray())
# Slices return a new DistArray that is a view on the
# original, just like in NumPy.
# Changes in the view change the original array.
darr_view[3] = -0.99
print("view:")
print(darr_view.toarray())
print("original:")
print(darr.toarray())
# A more complex slice, with negative indices and a step.
print(darr[:, 2::2])
print(darr[:-1, 2::2].toarray())
# Incomplete indexing
# Grab the first row
darr[0]
###Output
_____no_output_____
###Markdown
Distributions-------------Above, when we created a DistArray out of a NumPy array, we didn't specify *how* the elements should be distributed among our engines. `Distribution`s give you control over this, if you want it. In other words, `Distribution`s control which processes own which (global) indices.
###Code
# Let's look at the `Distribution` object that was created for us
# automatically by `fromarray`.
distribution = darr.distribution
# This is a 2D distribution: its 0th dimension is Block-distributed,
# and it's 1st dimension isn't distributed.
pprint(distribution.maps)
# Plot this Distribution, color-coding which process each global index
# belongs to.
from distarray.plotting import plot_array_distribution
process_coords = [(0, 0), (1, 0), (2, 0), (3, 0)]
plot_array_distribution(darr, process_coords, cell_label=False, legend=True)
# Check out which sections of this array's 0th dimension are on
# each process.
distribution.maps[0].bounds
###Output
_____no_output_____
###Markdown
The Distribution above was created for us by `fromarray`, but DistArray lets us specify more complex distributions. Here, we specify that the 0th dimension has a Block distribution ('b') and the 1st dimension has a Cyclic distribution. DistArray supports Block, Cyclic, Block-Cyclic, Unstructured, and No-distribution dimensions. See the [ScaLAPACK Documentation](http://netlib.org/scalapack/slug/node75.html) for more information about Distribution types.
###Code
from distarray.globalapi import Distribution
distribution = Distribution(context, shape=(64, 64), dist=('b', 'c'))
a = context.zeros(distribution, dtype='int32')
plot_array_distribution(a, process_coords, cell_label=False, legend=True)
###Output
_____no_output_____
###Markdown
Redistribution--------------Since `DistArray`s are distributed, the equivalent to NumPy's `reshape` (`distribute_as`) can be a more complex and costly operation. For convenience, you can supply either a `shape` or a full `Distribution` object. Only Block distributions (and No-dist) are currently redistributable.
###Code
darr
darr.toarray()
# simple reshaping
reshaped = darr.distribute_as((10, 2))
reshaped
reshaped.toarray()
# A more complex redistribution,
# changing shape, dist, and targets
dist = Distribution(context, shape=(5, 4),
dist=('b', 'b'), targets=(1, 3))
darr.distribute_as(dist)
###Output
_____no_output_____
###Markdown
Contexts--------Context objects manage the setup of and communication to the worker processes for DistArray objects. They also act as the namespace to which DistArray creation functions are attached.
###Code
print("targets:", context.targets)
print("comm:", context.comm)
context.zeros((5, 3))
context.ones((20, 20))
###Output
_____no_output_____
###Markdown
Parallel IO-----------DistArray has support for reading NumPy `.npy` files in parallel, for reading *and* writing `.dnpy` files in parallel (our own flat-file format), and reading and writing HDF5 files in parallel (if you have a parallel build of `h5py`).
###Code
# load .npy files in parallel
numpy.save("/tmp/outfile.npy", nparr)
distribution = Distribution(context, nparr.shape)
new_darr = context.load_npy("/tmp/outfile.npy", distribution)
new_darr
# save to .dnpy (a built-in flat-file format based on .npy)
context.save_dnpy("/tmp/outfile", darr)
# load from .dnpy
context.load_dnpy("/tmp/outfile")
# save DistArrays to .hdf5 files in parallel
context.save_hdf5("/tmp/outfile.hdf5", darr, mode='w')
# load DistArrays from .hdf5 files in parallel (using h5py)
context.load_hdf5("/tmp/outfile.hdf5", distribution)
###Output
_____no_output_____
###Markdown
Context.apply-------------Global view, local control. The `apply` method on a `Context` allows you to write functions that are applied *locally* (that is, on the engines) to each section of a DistArray. This allows you to push your computation close to your data, avoiding communication round-trips and possibly speeding up your computations.
###Code
def get_local_random():
"""Function to be applied locally."""
import numpy
return numpy.random.randint(10)
context.apply(get_local_random)
def get_local_var(darr):
"""Another local computation."""
return darr.ndarray.var()
context.apply(get_local_var, args=(darr.key,))
###Output
_____no_output_____
###Markdown
Context.register----------------`Context.register` is similar to `Context.apply`, but it allows you to *register* your function with a `Context` up front, and then call it repeatedly, with a nice syntax.
###Code
def local_demean(la):
"""Return the local array with the mean removed."""
return la.ndarray - la.ndarray.mean()
context.register(local_demean)
context.local_demean(darr)
###Output
_____no_output_____
###Markdown
MPI-only Execution------------------------- Instead of using an IPython client (which uses ZeroMQ to communicate to the engines), you can run your DistArray code in MPI-only mode (using an extra MPI process for the client). This can be more performant.
###Code
# an example script to run in MPI-only mode
%cd julia_set
!python benchmark_julia.py -h
# Compile kernel.pyx
!python setup.py build_ext --inplace
# Run the benchmarking script with 5 MPI processes:
# 4 worker processes and 1 client process
!mpiexec -np 5 python benchmark_julia.py --kernel=cython -r1 1024
###Output
running build_ext
(n/n_runs: time) ('Start', 'End', 'Dist', 'Resolution', 'c', 'Engines', 'Iters')
(1/17: 0.548s) (1443997400.237683, 1443997400.785785, 'numpy', 1024, '(-0.045+0.45j)', 1, [32763832L])
(2/17: 0.546s) (1443997400.803836, 1443997401.3497, 'b-n', 1024, '(-0.045+0.45j)', 1, [32763832L])
(3/17: 0.544s) (1443997401.374217, 1443997401.917805, 'c-n', 1024, '(-0.045+0.45j)', 1, [32763832L])
(4/17: 0.572s) (1443997401.942615, 1443997402.514608, 'b-b', 1024, '(-0.045+0.45j)', 1, [32763832L])
(5/17: 0.556s) (1443997402.536106, 1443997403.092295, 'c-c', 1024, '(-0.045+0.45j)', 1, [32763832L])
(6/17: 0.304s) (1443997403.125129, 1443997403.428932, 'b-n', 1024, '(-0.045+0.45j)', 2, [16345977L, 16417855L])
(7/17: 0.300s) (1443997403.454983, 1443997403.754604, 'c-n', 1024, '(-0.045+0.45j)', 2, [16382826L, 16381006L])
(8/17: 0.289s) (1443997403.766794, 1443997404.05549, 'b-b', 1024, '(-0.045+0.45j)', 2, [16354361L, 16409471L])
(9/17: 0.279s) (1443997404.070021, 1443997404.349003, 'c-c', 1024, '(-0.045+0.45j)', 2, [16384932L, 16378900L])
(10/17: 0.352s) (1443997404.371453, 1443997404.723861, 'b-n', 1024, '(-0.045+0.45j)', 3, [6243924L, 20326746L, 6193162L])
(11/17: 0.181s) (1443997404.742758, 1443997404.924105, 'c-n', 1024, '(-0.045+0.45j)', 3, [10922548L, 10921645L, 10919639L])
(12/17: 0.313s) (1443997404.969199, 1443997405.282438, 'b-b', 1024, '(-0.045+0.45j)', 3, [6779920L, 19264315L, 6719597L])
(13/17: 0.182s) (1443997405.296457, 1443997405.4787, 'c-c', 1024, '(-0.045+0.45j)', 3, [10921355L, 10920452L, 10922025L])
(14/17: 0.236s) (1443997405.555126, 1443997405.790802, 'b-n', 1024, '(-0.045+0.45j)', 4, [2725843L, 13620134L, 13659620L, 2758235L])
(15/17: 0.145s) (1443997405.810818, 1443997405.955335, 'c-n', 1024, '(-0.045+0.45j)', 4, [8190582L, 8190503L, 8192244L, 8190503L])
(16/17: 0.191s) (1443997405.975453, 1443997406.166614, 'b-b', 1024, '(-0.045+0.45j)', 4, [5859333L, 10486644L, 10495028L, 5922827L])
(17/17: 0.162s) (1443997406.179624, 1443997406.341143, 'c-c', 1024, '(-0.045+0.45j)', 4, [8193330L, 8189496L, 8191602L, 8189404L])
###Markdown
Distributed Array Protocol--------------------------Already have a library with its own distributed arrays? Use the Distributed Array Protocol to work with DistArray.The Distributed Array Protocol (DAP) is a process-local protocol that allows two subscribers, called the "producer" and the "consumer" or the "exporter" and the "importer", to communicate the essential data and metadata necessary to share a distributed-memory array between them. This allows two independently developed components to access, modify, and update a distributed array without copying. The protocol formalizes the metadata and buffers involved in the transfer, allowing several distributed array projects to collaborate, facilitating interoperability. By not copying the underlying array data, the protocol allows for efficient sharing of array data.http://distributed-array-protocol.readthedocs.org/en/rel-0.9.0/
###Code
def return_protocol_structure(la):
return la.__distarray__()
context.apply(return_protocol_structure, (darr.key,))
###Output
_____no_output_____
###Markdown
Features==========*TableDataExtractor* uses a variety of algorithms to represent a table in standardized format. They work independently of the input format in which the table was provided. Thus, *TableDataExtractor* works equally as good for `.csv` files, as for `.html` files. Standardized Table----------------------------The main feature of *TableDataExtractor* is the standardization of the input table. All algorithms and features presented herein have the goal to create a higher quality standardized table. This can subsequenlty be used for automated parsing, and automated retrieval of information from the table.The standardized table (*category table*) can be output as a list as `table.category_table` or simply printed with `print(table)`.Table example from Embley et. al (2016):
###Code
from tabledataextractor import Table
file = '../examples/tables/table_example_footnotes.csv'
table = Table(file)
table.print_raw_table()
print(table)
###Output
1 Development
Country Million dollar Million dollar Million dollar Percentage of GNI Percentage of GNI
2007 2010* 2011* a. 2007 2011
First table
Australia 3735 4580 4936 0.95 1
Greece 2669 3826 4799 0.32 0.35
New Zealand 320 342 429 0.27 0.28
OECD/DAC c 104206 128465 133526 0.27 0.31
c (unreliable)
* world bank
a.
+--------+---------------------------+-------------------------------------------+
| Data | Row Categories | Column Categories |
+--------+---------------------------+-------------------------------------------+
| 3735 | ['Australia'] | ['Million dollar', '2007'] |
| 4580 | ['Australia'] | ['Million dollar', '2010 world bank '] |
| 4936 | ['Australia'] | ['Million dollar', '2011 world bank '] |
| 0.95 | ['Australia'] | ['Percentage of GNI', '2007'] |
| 1 | ['Australia'] | ['Percentage of GNI', '2011'] |
| 2669 | ['Greece'] | ['Million dollar', '2007'] |
| 3826 | ['Greece'] | ['Million dollar', '2010 world bank '] |
| 4799 | ['Greece'] | ['Million dollar', '2011 world bank '] |
| 0.32 | ['Greece'] | ['Percentage of GNI', '2007'] |
| 0.35 | ['Greece'] | ['Percentage of GNI', '2011'] |
| 320 | ['New Zealand'] | ['Million dollar', '2007'] |
| 342 | ['New Zealand'] | ['Million dollar', '2010 world bank '] |
| 429 | ['New Zealand'] | ['Million dollar', '2011 world bank '] |
| 0.27 | ['New Zealand'] | ['Percentage of GNI', '2007'] |
| 0.28 | ['New Zealand'] | ['Percentage of GNI', '2011'] |
| 104206 | ['OECD/DAC (unreliable)'] | ['Million dollar', '2007'] |
| 128465 | ['OECD/DAC (unreliable)'] | ['Million dollar', '2010 world bank '] |
| 133526 | ['OECD/DAC (unreliable)'] | ['Million dollar', '2011 world bank '] |
| 0.27 | ['OECD/DAC (unreliable)'] | ['Percentage of GNI', '2007'] |
| 0.31 | ['OECD/DAC (unreliable)'] | ['Percentage of GNI', '2011'] |
+--------+---------------------------+-------------------------------------------+
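###Markdown
The same standardized data is available programmatically via `table.category_table`. A minimal sketch, assuming each entry unpacks into the three columns shown above (data value, row categories, column categories):
###Code
# Inspect the first few rows of the category table
for data, row_categories, column_categories in table.category_table[:3]:
    print(data, row_categories, column_categories)
###Output
_____no_output_____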
###Markdown
Nested Headers and Cell Labelling-------------------------------------------------The *data region* of an input table is isolated, taking complex row/column header structures into account and preserving the information about which header categories a particular data point belongs to. The table cells are labelled, according to their role in the table, as *Data*, *Row Header*, *Column Header*, *Stub Header*, *Title*, *Footnote*, *Footnote Text*, and *Note* cells.
###Code
from tabledataextractor.output.print import print_table
table.print_raw_table()
print_table(table.labels)
###Output
1 Development
Country Million dollar Million dollar Million dollar Percentage of GNI Percentage of GNI
2007 2010* 2011* a. 2007 2011
First table
Australia 3735 4580 4936 0.95 1
Greece 2669 3826 4799 0.32 0.35
New Zealand 320 342 429 0.27 0.28
OECD/DAC c 104206 128465 133526 0.27 0.31
c (unreliable)
* world bank
a.
TableTitle TableTitle TableTitle TableTitle TableTitle TableTitle
StubHeader ColHeader ColHeader ColHeader ColHeader ColHeader
StubHeader ColHeader ColHeader & FNref ColHeader & FNref & FNref ColHeader ColHeader
Note / / / / /
RowHeader Data Data Data Data Data
RowHeader Data Data Data Data Data
RowHeader Data Data Data Data Data
RowHeader & FNref Data Data Data Data Data
FNprefix FNtext / / / /
FNprefix & FNtext / / / / /
FNprefix / / / / /
###Markdown
Prefixing of headers-----------------------------In many tables the headers are non-conclusive, meaning that they include duplicate elements that are usually highlighted in bold or italics. Due to the highlighting, the structure of the table can still be understood by the reader. However, since *TableDataExtractor* doesn't take any graphical features into account, but only considers the raw content of cells in tabular format, a *prefixing* step needs to be performed in some cases to find the header region correctly. Since the main algorithm used to find the data region, the MIPS algorithm (*Minimum Indexing Point Search*), relies on duplicate entries in the header regions, the prefixing step is done in an iterative fashion. First, the headers are found and only afterwards is the prefixing performed. The results before and after prefixing are then compared to decide whether to accept it. Two examples of prefixing are shown below, for the column and row header, respectively (examples from Embley et al. 2016). Prefixing can be turned off by setting the `use_prefixing = False` keyword argument upon creation of the `Table` instance.
###Code
file = '../examples/tables/table_example8.csv'
table = Table(file)
table.print()
file = '../examples/tables/table_example9.csv'
table = Table(file)
table.print()
###Output
Year 2003 2004
Short messages/thousands 1647218 2193498
Change % 24.3 33.2
Other messages 347 439
Multimedia messages/thousands 2314 7386
Change % 219.2
Year 2003 2004
Short messages/thousands 1647218 2193498
Short messages/thousands Change % 24.3 33.2
Other messages 347 439
Multimedia messages/thousands 2314 7386
Multimedia messages/thousands Change % 219.2
StubHeader StubHeader ColHeader ColHeader
RowHeader RowHeader Data Data
RowHeader RowHeader Data Data
RowHeader RowHeader Data Data
RowHeader RowHeader Data Data
RowHeader RowHeader Data Data
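###Markdown
As noted above, prefixing can be disabled at construction time. A minimal sketch, reusing the file loaded in the previous cell:
###Code
# Compare against the interpretation without the prefixing step
table_no_prefix = Table(file, use_prefixing=False)
table_no_prefix.print()
###Output
_____no_output_____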
###Markdown
Spanning cells----------------------Spanning cells are commonly encountered in tables. This information is easy to retrieve if the table is provided in `.html` format. However, if the table is provided as a `.csv` file or a Python list, the content of spanning cells needs to be duplicated into each one of the spanning cells. *TableDataExtractor* does that automatically. The duplication of spanning cells can be turned off by setting `use_spanning_cells = False` at creation of the `Table` instance. Table example from Embley et al. (2016):
###Code
file = '../examples/tables/te_04.csv'
table = Table(file)
table.print()
###Output
Pupils in comprehensive schools
Year School Pupils Grade 1 Leaving certificates
Pre-primary Grades Additional Total
6 Jan 9 Jul
1990 4869 2189 389410 197719 592920 67427 61054
1991 4861 2181 389411 197711 3601 592921 67421
Pupils in comprehensive schools
Year School Pupils Pupils Pupils Pupils Pupils Grade 1 Leaving certificates
Year School Pre-primary Grades Grades Additional Total Grade 1 Leaving certificates
Year School Pre-primary 6 Jan 9 Jul Additional Total Grade 1 Leaving certificates
1990 4869 2189 389410 197719 592920 67427 61054
1991 4861 2181 389411 197711 3601 592921 67421
TableTitle TableTitle TableTitle TableTitle TableTitle TableTitle TableTitle TableTitle TableTitle
StubHeader ColHeader ColHeader ColHeader ColHeader ColHeader ColHeader ColHeader ColHeader
StubHeader ColHeader ColHeader ColHeader ColHeader ColHeader ColHeader ColHeader ColHeader
StubHeader ColHeader ColHeader ColHeader ColHeader ColHeader ColHeader ColHeader ColHeader
RowHeader Data Data Data Data Data Data Data Data
RowHeader Data Data Data Data Data Data Data Data
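###Markdown
As mentioned above, the duplication of spanning-cell content can be switched off when the `Table` is created. A minimal sketch with the same file:
###Code
# Keep spanning cells as they appear in the raw input
table_no_spanning = Table(file, use_spanning_cells=False)
table_no_spanning.print()
###Output
_____no_output_____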
###Markdown
Subtables---------------If multiple tables are nested within a single input table, and if they have compatible header structures, *TableDataExtractor* will automatically process them. `table.subtables` will contain a list of those subtables, where each entry is an instance of the *TableDataExtractor* `Table` class.
###Code
file = '../examples/tables/te_06.csv'
table = Table(file)
table.print_raw_table()
table.subtables[0].print_raw_table()
table.subtables[1].print_raw_table()
table.subtables[2].print_raw_table()
###Output
Material Tc A Material Tc A Material Tc
Bi6Tl3 6.5 x TiN 1.4 y TiO2 1.1
Sb2Tl7 5.5 y TiC 1.1 x TiO3 1.2
Na2Pb5 7.2 z TaC 9.2 x TiO4 1.3
Hg5Tl7 3.8 x NbC 10.1 a TiO5 1.4
Au2Bi 1.84 x ZrB 2.82 x TiO6 1.5
CuS 1.6 x TaSi 4.2 x TiO7 1.6
VN 1.3 x PbS 4.1 x TiO8 1.7
WC 2.8 x Pb-As alloy 8.4 x TiO9 1.8
W2C 2.05 x Pb-Sn-Bi 8.5 x TiO10 1.9
MoC 7.7 x Pb-As-Bi 9.0 x TiO11 1.10
Mo2C 2.4 x Pb-Bi-Sb 8.9 x TiO12 1.11
Material Tc A
Bi6Tl3 6.5 x
Sb2Tl7 5.5 y
Na2Pb5 7.2 z
Hg5Tl7 3.8 x
Au2Bi 1.84 x
CuS 1.6 x
VN 1.3 x
WC 2.8 x
W2C 2.05 x
MoC 7.7 x
Mo2C 2.4 x
Material Tc A
TiN 1.4 y
TiC 1.1 x
TaC 9.2 x
NbC 10.1 a
ZrB 2.82 x
TaSi 4.2 x
PbS 4.1 x
Pb-As alloy 8.4 x
Pb-Sn-Bi 8.5 x
Pb-As-Bi 9.0 x
Pb-Bi-Sb 8.9 x
Material Tc
TiO2 1.1
TiO3 1.2
TiO4 1.3
TiO5 1.4
TiO6 1.5
TiO7 1.6
TiO8 1.7
TiO9 1.8
TiO10 1.9
TiO11 1.10
TiO12 1.11
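###Markdown
Since each subtable is itself a `Table` instance, the usual interface applies to every element of `table.subtables`. A minimal sketch:
###Code
# Each subtable exposes the same standardized category table
print(len(table.subtables))
for subtable in table.subtables:
    print(len(subtable.category_table))
###Output
_____no_output_____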
###Markdown
Footnotes-------------------*TableDataExtractor* handles footnotes by copying the footnote text into the appropriate cells where the footnotes have been referenced. This is a useful feature for automatic parsing of the *category table*. The copying of the footnote text can be prevented by using the `use_footnotes = False` keyword argument on `Table` creation. Each footnote is a `TableDataExtractor.Footnote` object that contains all the footnote-relevant information. It can be inspected with `print(table.footnotes[0])`. Table example from Embley et al. (2016):
###Code
file = '../examples/tables/table_example_footnotes.csv'
table = Table(file)
table.print()
print(table.footnotes[0])
print(table.footnotes[1])
print(table.footnotes[2])
###Output
1 Development
Country Million dollar Million dollar Million dollar Percentage of GNI Percentage of GNI
2007 2010* 2011* a. 2007 2011
First table
Australia 3735 4580 4936 0.95 1
Greece 2669 3826 4799 0.32 0.35
New Zealand 320 342 429 0.27 0.28
OECD/DAC c 104206 128465 133526 0.27 0.31
c (unreliable)
* world bank
a.
1 Development
Country Million dollar Million dollar Million dollar Percentage of GNI Percentage of GNI
Country 2007 2010 world bank 2011 world bank 2007 2011
First table
Australia 3735 4580 4936 0.95 1
Greece 2669 3826 4799 0.32 0.35
New Zealand 320 342 429 0.27 0.28
OECD/DAC (unreliable) 104206 128465 133526 0.27 0.31
c (unreliable)
* world bank
a.
TableTitle TableTitle TableTitle TableTitle TableTitle TableTitle
StubHeader ColHeader ColHeader ColHeader ColHeader ColHeader
StubHeader ColHeader ColHeader & FNref ColHeader & FNref & FNref ColHeader ColHeader
Note / / / / /
RowHeader Data Data Data Data Data
RowHeader Data Data Data Data Data
RowHeader Data Data Data Data Data
RowHeader & FNref Data Data Data Data Data
FNprefix FNtext / / / /
FNprefix & FNtext / / / / /
FNprefix / / / / /
Prefix: 'c' Text: '(unreliable)' Ref. Cells: [(7, 0)] References: ['OECD/DAC c']
Prefix: '*' Text: 'world bank' Ref. Cells: [(2, 2), (2, 3)] References: ['2010*', '2011* a.']
Prefix: 'a.' Text: '' Ref. Cells: [(2, 3)] References: ['2011 world bank a.']
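###Markdown
As described above, copying of the footnote text into the referenced cells can be prevented on `Table` creation. A minimal sketch with the same file:
###Code
# Keep the raw footnote references instead of resolving them
table_raw_footnotes = Table(file, use_footnotes=False)
table_raw_footnotes.print()
###Output
_____no_output_____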
|
content/icebox/See the Matrix.ipynb | ###Markdown
- title: See the Matrix- summary: Everything is numbers- date: 2019-02-21- image: /static/images/tensor.jpeg- status: draft This post begins a series on how artificial neural networks work, from scratch, with code (in Python). I will assume only that you don't vomit at the sight of an unfamiliar equation, and that you have some high school math and a little programming experience. We'll approach neural networks in small steps. This time, we're talking about how _everything can be represented as numbers_. Computers see differently
###Code
from PIL import Image # Pillow, a Python Image Library fork
import numpy as np # NumPy lets us do vector math fast
import requests # for downloading kittens
from io import BytesIO # for processing kittens
###Output
_____no_output_____
###Markdown
Here is one way to represent a kitten:
###Code
url = 'https://computable.ai/static/images/kitten.jpg'
response = requests.get(url)
kitten = Image.open(BytesIO(response.content))
kitten
###Output
_____no_output_____
###Markdown
The computer sees it differently, though. The computer sees a nested array of numbers, like this: TensorsA nested array of numbers like that is called a _tensor_. What we're looking at here is- an array of each of the _rows_ of pixels in the image,- where each one of those rows is actually another array, this time an array of all of the _pixels_ in that row, - and each pixel is also actually another array containing three numbers between 0 and 255, Red, Green, and Blue. (So R, G, and B in the drawing would actually be numbers representing _how_ red, _how much_ green, _what intensity_ of blue, for a given pixel).A tensor is just a boxy (no inconsistent array lengths) nested array of numbers, and it represents the kitten just as well as the picture that you can admire. You may have heard the term _matrix_. Well, a matrix is just a *two*-dimensional tensor. Tensors can have as many dimensions as you like. How the computer sees the kittenBack to the kitten. Here is the tensor representation of the same kitten we saw earlier:
###Code
tensor = np.array(kitten)
tensor
###Output
_____no_output_____
###Markdown
NumPy saves screen space by leaving out bits of it (hence the ...s), but the structure is the same as in the drawing. You're still seeing an array of arrays of arrays of numbers, only there are more of them in the kitten tensor than in the drawing because there are more pixels in the kitten image. How many more?
###Code
tensor.shape
###Output
_____no_output_____
###Markdown
The outermost array has 213 items in it. Each of those items is an array with 320 items in it. Each of _those_ items is an array with 3 items in it. This makes sense because the kitten image is 213 pixels high by 320 pixels wide, and each pixel is composed of three colors. Inspecting the tensorWould you say there's a lot of red in that lower-right pixel of the kitten image? Let's see:
###Code
tensor[212,319]
###Output
_____no_output_____
###Markdown
233 out of 255 is a pretty high intensity of red. More red than in the upper-left pixel, which looks kinda brownish to me:
###Code
tensor[0,0]
###Output
_____no_output_____
###Markdown
I would expect almost no red from that nearly-black dark patch of the background, half way down on the far left:
###Code
tensor[106,0]
###Output
_____no_output_____
###Markdown
You get the idea. Going back and forth for convenienceWe can play around with the tensor representation, and then turn it back into an image again. For instance, let's slice out 100 rows from the middle of the image and display that.
###Code
Image.fromarray(tensor[100:200])
###Output
_____no_output_____
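###Markdown
The same indexing works along the other axes too. For example, a sketch (with arbitrarily chosen bounds) that keeps every row but only a middle band of columns:
###Code
# Keep all rows, slice out columns 100-199, and display the result
Image.fromarray(tensor[:, 100:200])
###Output
_____no_output_____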
###Markdown
Now let's try zeroing out the red color channel for a little box around kitty's face, then crop out the middle column of pixels, and display the result.
###Code
tensor[60:160,100:200,0] = 0 # remove red in range of rows and columns
Image.fromarray(tensor[:,50:240]) # crop whole image and display
###Output
_____no_output_____ |
1. Airbnb data first look.ipynb | ###Markdown
Airbnb - Boston and Seattle Data First look In this article I would like to analyze the Airbnb data of Boston and Seattle to answer the questions below:1. Price comparison between Seattle and Boston2. Price trend over time3. Relation between price and various features of the property4. Prediction of price for a new listing In this notebook, we will examine the data set and understand its features.
###Code
# Import all necessary packages
import numpy as np
import pandas as pd
#Increasing the display rows to see more records for better understanding of data
pd.set_option('display.max_rows', 500)
# Load Seattle Airbnb data
seattle_calendar = pd.read_csv('Seattle\Calendar.csv')
seattle_listings = pd.read_csv('Seattle\listings.csv')
# Load Boston Airbnb data
boston_calendar = pd.read_csv('Boston\Calendar.csv')
boston_listings = pd.read_csv('Boston\listings.csv')
# Analyse the data
boston_calendar.head()
boston_listings.head()
boston_calendar.shape
boston_listings.shape
# Compare the data types of Seattle and Boston data
print(seattle_calendar.dtypes == boston_calendar.dtypes)
# The Boston listings data set has three more columns than Seattle's, so we can drop them
boston_listings.drop(columns = ['access', 'interaction', 'house_rules'], axis = 0, inplace = True)
print(seattle_listings.dtypes == boston_listings.dtypes)
# In the above results we can see that the formats of 5 fields differ; let's dig in
boston_listings['host_listings_count'].value_counts()
seattle_listings['host_listings_count'].value_counts()
boston_listings['license'].value_counts()
seattle_listings['license'].value_counts()
seattle_calendar.head()
seattle_listings.head()
###Output
_____no_output_____ |
7 Syllabification, prosody, phonetics.ipynb | ###Markdown
Latin syllables
###Code
# See http://docs.cltk.org/en/latest/latin.html#syllabifier
from cltk.stem.latin.syllabifier import Syllabifier
cato_agri_praef = "Est interdum praestare mercaturis rem quaerere, nisi tam periculosum sit, et item foenerari, si tam honestum. Maiores nostri sic habuerunt et ita in legibus posiverunt: furem dupli condemnari, foeneratorem quadrupli. Quanto peiorem civem existimarint foeneratorem quam furem, hinc licet existimare. Et virum bonum quom laudabant, ita laudabant: bonum agricolam bonumque colonum; amplissime laudari existimabatur qui ita laudabatur. Mercatorem autem strenuum studiosumque rei quaerendae existimo, verum, ut supra dixi, periculosum et calamitosum. At ex agricolis et viri fortissimi et milites strenuissimi gignuntur, maximeque pius quaestus stabilissimusque consequitur minimeque invidiosus, minimeque male cogitantes sunt qui in eo studio occupati sunt. Nunc, ut ad rem redeam, quod promisi institutum principium hoc erit."
from cltk.tokenize.word import WordTokenizer
word_tokenizer = WordTokenizer('latin')
cato_cltk_word_tokens = word_tokenizer.tokenize(cato_agri_praef.lower())
cato_cltk_word_tokens_no_punt = [token for token in cato_cltk_word_tokens if token not in ['.', ',', ':', ';']]
# Now you can see the word "-que"
print(cato_cltk_word_tokens_no_punt)
syllabifier = Syllabifier()
for word in cato_cltk_word_tokens_no_punt:
syllables = syllabifier.syllabify(word)
print(word, syllables)
###Output
est ['est']
interdum ['in', 'ter', 'dum']
praestare ['praes', 'ta', 're']
mercaturis ['mer', 'ca', 'tu', 'ris']
rem ['rem']
quaerere ['quae', 're', 're']
nisi ['ni', 'si']
tam ['tam']
periculosum ['pe', 'ri', 'cu', 'lo', 'sum']
sit ['sit']
et ['et']
item ['i', 'tem']
foenerari ['foe', 'ne', 'ra', 'ri']
si ['si']
tam ['tam']
honestum ['ho', 'nes', 'tum']
maiores ['ma', 'io', 'res']
nostri ['nos', 'tri']
sic ['sic']
habuerunt ['ha', 'bu', 'e', 'runt']
et ['et']
ita ['i', 'ta']
in ['in']
legibus ['le', 'gi', 'bus']
posiverunt ['po', 'si', 've', 'runt']
furem ['fu', 'rem']
dupli ['du', 'pli']
condemnari ['con', 'dem', 'na', 'ri']
foeneratorem ['foe', 'ne', 'ra', 'to', 'rem']
quadrupli ['qua', 'dru', 'pli']
quanto ['quan', 'to']
peiorem ['peio', 'rem']
civem ['ci', 'vem']
existimarint ['ex', 'is', 'ti', 'ma', 'rint']
foeneratorem ['foe', 'ne', 'ra', 'to', 'rem']
quam ['quam']
furem ['fu', 'rem']
hinc ['hinc']
licet ['li', 'cet']
existimare ['ex', 'is', 'ti', 'ma', 're']
et ['et']
virum ['vi', 'rum']
bonum ['bo', 'num']
quom ['quom']
laudabant ['lau', 'da', 'bant']
ita ['i', 'ta']
laudabant ['lau', 'da', 'bant']
bonum ['bo', 'num']
agricolam ['a', 'gri', 'co', 'lam']
bonum ['bo', 'num']
-que ['-que']
colonum ['co', 'lo', 'num']
amplissime ['am', 'plis', 'si', 'me']
laudari ['lau', 'da', 'ri']
existimabatur ['ex', 'is', 'ti', 'ma', 'ba', 'tur']
qui ['qui']
ita ['i', 'ta']
laudabatur ['lau', 'da', 'ba', 'tur']
mercatorem ['mer', 'ca', 'to', 'rem']
autem ['au', 'tem']
strenuum ['stre', 'nu', 'um']
studiosum ['stu', 'di', 'o', 'sum']
-que ['-que']
rei ['rei']
quaerendae ['quae', 'ren', 'dae']
existimo ['ex', 'is', 'ti', 'mo']
verum ['ve', 'rum']
ut ['ut']
supra ['su', 'pra']
dixi ['di', 'xi']
periculosum ['pe', 'ri', 'cu', 'lo', 'sum']
et ['et']
calamitosum ['ca', 'la', 'mi', 'to', 'sum']
at ['at']
ex ['ex']
agricolis ['a', 'gri', 'co', 'lis']
et ['et']
viri ['vi', 'ri']
fortissimi ['for', 'tis', 'si', 'mi']
et ['et']
milites ['mi', 'li', 'tes']
strenuissimi ['stre', 'nu', 'is', 'si', 'mi']
gignuntur ['gig', 'nun', 'tur']
maxime ['ma', 'xi', 'me']
-que ['-que']
pius ['pi', 'us']
quaestus ['quaes', 'tus']
stabilissimus ['sta', 'bi', 'lis', 'si', 'mus']
-que ['-que']
consequitur ['con', 'se', 'qui', 'tur']
minime ['mi', 'ni', 'me']
-que ['-que']
invidiosus ['in', 'vi', 'di', 'o', 'sus']
minime ['mi', 'ni', 'me']
-que ['-que']
male ['ma', 'le']
cogitantes ['co', 'gi', 'tan', 'tes']
sunt ['sunt']
qui ['qui']
in ['in']
eo ['e', 'o']
studio ['stu', 'di', 'o']
occupati ['oc', 'cu', 'pa', 'ti']
sunt ['sunt']
nunc ['nunc']
ut ['ut']
ad ['ad']
rem ['rem']
redeam ['re', 'de', 'am']
quod ['quod']
promisi ['pro', 'mi', 'si']
institutum ['in', 'sti', 'tu', 'tum']
principium ['prin', 'ci', 'pi', 'um']
hoc ['hoc']
erit ['e', 'rit']
###Markdown
Latin prosodyThis is a two-step process: first find the long vowels, then scan the actual meter.
###Code
# Use the macronizer
# See http://docs.cltk.org/en/latest/latin.html#macronizer
from cltk.prosody.latin.macronizer import Macronizer
macronizer = Macronizer('tag_ngram_123_backoff')
text = 'Quo usque tandem, O Catilina, abutere nostra patientia?'
scanned_text = macronizer.macronize_text(text)
# Use the scanner
# See http://docs.cltk.org/en/latest/latin.html#prosody-scanning
from cltk.prosody.latin.scanner import Scansion
scanner = Scansion()
prose_text = macronizer.macronize_tags(scanned_text)
print(prose_text)
###Output
[('quō', None, 'quō'), ('usque', 'd--------', 'usque'), ('tandem', 'd--------', 'tandem'), (',', 'u--------', ','), ('ō', None, 'ō'), ('catilīnā', None, 'catilīnā'), (',', 'u--------', ','), ('abūtēre', None, 'abūtēre'), ('nostrā', None, 'nostrā'), ('patientia', 'n-s---fn-', 'patientia'), ('?', None, '?')]
###Markdown
Greek scansion
###Code
from cltk.prosody.greek.scanner import Scansion
scanner = Scansion()
scanner.scan_text('νέος μὲν καὶ ἄπειρος, δικῶν ἔγωγε ἔτι. μὲν καὶ ἄπειρος.')
###Output
_____no_output_____
###Markdown
Syllables
###Code
# http://docs.cltk.org/en/latest/latin.html#syllabifier
from cltk.stem.latin.syllabifier import Syllabifier
cato_agri_praef = "Est interdum praestare mercaturis rem quaerere, nisi tam periculosum sit, et item foenerari, si tam honestum. Maiores nostri sic habuerunt et ita in legibus posiverunt: furem dupli condemnari, foeneratorem quadrupli. Quanto peiorem civem existimarint foeneratorem quam furem, hinc licet existimare. Et virum bonum quom laudabant, ita laudabant: bonum agricolam bonumque colonum; amplissime laudari existimabatur qui ita laudabatur. Mercatorem autem strenuum studiosumque rei quaerendae existimo, verum, ut supra dixi, periculosum et calamitosum. At ex agricolis et viri fortissimi et milites strenuissimi gignuntur, maximeque pius quaestus stabilissimusque consequitur minimeque invidiosus, minimeque male cogitantes sunt qui in eo studio occupati sunt. Nunc, ut ad rem redeam, quod promisi institutum principium hoc erit."
from cltk.tokenize.word import WordTokenizer
word_tokenizer = WordTokenizer('latin')
cato_cltk_word_tokens = word_tokenizer.tokenize(cato_agri_praef.lower())
cato_cltk_word_tokens_no_punt = [token for token in cato_cltk_word_tokens if token not in ['.', ',', ':', ';']]
# now you can see the word '-que'
print(cato_cltk_word_tokens_no_punt)
syllabifier = Syllabifier()
for word in cato_cltk_word_tokens_no_punt:
syllables = syllabifier.syllabify(word)
print(word, syllables)
###Output
est ['est']
interdum ['in', 'ter', 'dum']
praestare ['praes', 'ta', 're']
mercaturis ['mer', 'ca', 'tu', 'ris']
rem ['rem']
quaerere ['quae', 're', 're']
nisi ['ni', 'si']
tam ['tam']
periculosum ['pe', 'ri', 'cu', 'lo', 'sum']
sit ['sit']
et ['et']
item ['i', 'tem']
foenerari ['foe', 'ne', 'ra', 'ri']
si ['si']
tam ['tam']
honestum ['ho', 'nes', 'tum']
maiores ['ma', 'io', 'res']
nostri ['nos', 'tri']
sic ['sic']
habuerunt ['ha', 'bu', 'e', 'runt']
et ['et']
ita ['i', 'ta']
in ['in']
legibus ['le', 'gi', 'bus']
posiverunt ['po', 'si', 've', 'runt']
furem ['fu', 'rem']
dupli ['du', 'pli']
condemnari ['con', 'dem', 'na', 'ri']
foeneratorem ['foe', 'ne', 'ra', 'to', 'rem']
quadrupli ['qua', 'dru', 'pli']
quanto ['quan', 'to']
peiorem ['peio', 'rem']
civem ['ci', 'vem']
existimarint ['ex', 'is', 'ti', 'ma', 'rint']
foeneratorem ['foe', 'ne', 'ra', 'to', 'rem']
quam ['quam']
furem ['fu', 'rem']
hinc ['hinc']
licet ['li', 'cet']
existimare ['ex', 'is', 'ti', 'ma', 're']
et ['et']
virum ['vi', 'rum']
bonum ['bo', 'num']
quom ['quom']
laudabant ['lau', 'da', 'bant']
ita ['i', 'ta']
laudabant ['lau', 'da', 'bant']
bonum ['bo', 'num']
agricolam ['a', 'gri', 'co', 'lam']
bonum ['bo', 'num']
-que ['-que']
colonum ['co', 'lo', 'num']
amplissime ['am', 'plis', 'si', 'me']
laudari ['lau', 'da', 'ri']
existimabatur ['ex', 'is', 'ti', 'ma', 'ba', 'tur']
qui ['qui']
ita ['i', 'ta']
laudabatur ['lau', 'da', 'ba', 'tur']
mercatorem ['mer', 'ca', 'to', 'rem']
autem ['au', 'tem']
strenuum ['stre', 'nu', 'um']
studiosum ['stu', 'di', 'o', 'sum']
-que ['-que']
rei ['rei']
quaerendae ['quae', 'ren', 'dae']
existimo ['ex', 'is', 'ti', 'mo']
verum ['ve', 'rum']
ut ['ut']
supra ['su', 'pra']
dixi ['di', 'xi']
periculosum ['pe', 'ri', 'cu', 'lo', 'sum']
et ['et']
calamitosum ['ca', 'la', 'mi', 'to', 'sum']
at ['at']
ex ['ex']
agricolis ['a', 'gri', 'co', 'lis']
et ['et']
viri ['vi', 'ri']
fortissimi ['for', 'tis', 'si', 'mi']
et ['et']
milites ['mi', 'li', 'tes']
strenuissimi ['stre', 'nu', 'is', 'si', 'mi']
gignuntur ['gig', 'nun', 'tur']
maxime ['ma', 'xi', 'me']
-que ['-que']
pius ['pi', 'us']
quaestus ['quaes', 'tus']
stabilissimus ['sta', 'bi', 'lis', 'si', 'mus']
-que ['-que']
consequitur ['con', 'se', 'qui', 'tur']
minime ['mi', 'ni', 'me']
-que ['-que']
invidiosus ['in', 'vi', 'di', 'o', 'sus']
minime ['mi', 'ni', 'me']
-que ['-que']
male ['ma', 'le']
cogitantes ['co', 'gi', 'tan', 'tes']
sunt ['sunt']
qui ['qui']
in ['in']
eo ['e', 'o']
studio ['stu', 'di', 'o']
occupati ['oc', 'cu', 'pa', 'ti']
sunt ['sunt']
nunc ['nunc']
ut ['ut']
ad ['ad']
rem ['rem']
redeam ['re', 'de', 'am']
quod ['quod']
promisi ['pro', 'mi', 'si']
institutum ['in', 'sti', 'tu', 'tum']
principium ['prin', 'ci', 'pi', 'um']
hoc ['hoc']
erit ['e', 'rit']
###Markdown
ProsodyTakes two steps: first find long vowels, then scan actual meter
###Code
# macronizer
# http://docs.cltk.org/en/latest/latin.html#macronizer
from cltk.prosody.latin.macronizer import Macronizer
macronizer = Macronizer('tag_ngram_123_backoff')
text = 'Quo usque tandem, O Catilina, abutere nostra patientia?'
scanned_text = macronizer.macronize_text(text)
# scanner
# http://docs.cltk.org/en/latest/latin.html#prosody-scanning
from cltk.prosody.latin.scanner import Scansion
scanner = Scansion()
prose_text = macronizer.macronize_tags(scanned_text)
print(prose_text)
###Output
[('quō', None, 'quō'), ('usque', 'd--------', 'usque'), ('tandem', 'd--------', 'tandem'), (',', 'u--------', ','), ('ō', None, 'ō'), ('catilīnā', None, 'catilīnā'), (',', 'u--------', ','), ('abūtēre', None, 'abūtēre'), ('nostrā', None, 'nostrā'), ('patientia', 'n-s---fn-', 'patientia'), ('?', None, '?')]
###Markdown
Syllables
###Code
# See http://docs.cltk.org/en/latest/latin.html#syllabifier
from cltk.stem.latin.syllabifier import Syllabifier
cato_agri_praef = "Est interdum praestare mercaturis rem quaerere, nisi tam periculosum sit, et item foenerari, si tam honestum. Maiores nostri sic habuerunt et ita in legibus posiverunt: furem dupli condemnari, foeneratorem quadrupli. Quanto peiorem civem existimarint foeneratorem quam furem, hinc licet existimare. Et virum bonum quom laudabant, ita laudabant: bonum agricolam bonumque colonum; amplissime laudari existimabatur qui ita laudabatur. Mercatorem autem strenuum studiosumque rei quaerendae existimo, verum, ut supra dixi, periculosum et calamitosum. At ex agricolis et viri fortissimi et milites strenuissimi gignuntur, maximeque pius quaestus stabilissimusque consequitur minimeque invidiosus, minimeque male cogitantes sunt qui in eo studio occupati sunt. Nunc, ut ad rem redeam, quod promisi institutum principium hoc erit."
from cltk.tokenize.word import WordTokenizer
word_tokenizer = WordTokenizer('latin')
cato_cltk_word_tokens = word_tokenizer.tokenize(cato_agri_praef.lower())
cato_cltk_word_tokens_no_punt = [token for token in cato_cltk_word_tokens if token not in ['.', ',', ':', ';']]
# Now you can see the word "-que"
print(cato_cltk_word_tokens_no_punt)
syllabifier = Syllabifier()
for word in cato_cltk_word_tokens_no_punt:
syllables = syllabifier.syllabify(word)
print(word, syllables)
###Output
est ['est']
interdum ['in', 'ter', 'dum']
praestare ['praes', 'ta', 're']
mercaturis ['mer', 'ca', 'tu', 'ris']
rem ['rem']
quaerere ['quae', 're', 're']
nisi ['ni', 'si']
tam ['tam']
periculosum ['pe', 'ri', 'cu', 'lo', 'sum']
sit ['sit']
et ['et']
item ['i', 'tem']
foenerari ['foe', 'ne', 'ra', 'ri']
si ['si']
tam ['tam']
honestum ['ho', 'nes', 'tum']
maiores ['ma', 'io', 'res']
nostri ['nos', 'tri']
sic ['sic']
habuerunt ['ha', 'bu', 'e', 'runt']
et ['et']
ita ['i', 'ta']
in ['in']
legibus ['le', 'gi', 'bus']
posiverunt ['po', 'si', 've', 'runt']
furem ['fu', 'rem']
dupli ['du', 'pli']
condemnari ['con', 'dem', 'na', 'ri']
foeneratorem ['foe', 'ne', 'ra', 'to', 'rem']
quadrupli ['qua', 'dru', 'pli']
quanto ['quan', 'to']
peiorem ['peio', 'rem']
civem ['ci', 'vem']
existimarint ['ex', 'is', 'ti', 'ma', 'rint']
foeneratorem ['foe', 'ne', 'ra', 'to', 'rem']
quam ['quam']
furem ['fu', 'rem']
hinc ['hinc']
licet ['li', 'cet']
existimare ['ex', 'is', 'ti', 'ma', 're']
et ['et']
virum ['vi', 'rum']
bonum ['bo', 'num']
quom ['quom']
laudabant ['lau', 'da', 'bant']
ita ['i', 'ta']
laudabant ['lau', 'da', 'bant']
bonum ['bo', 'num']
agricolam ['a', 'gri', 'co', 'lam']
bonum ['bo', 'num']
-que ['-que']
colonum ['co', 'lo', 'num']
amplissime ['am', 'plis', 'si', 'me']
laudari ['lau', 'da', 'ri']
existimabatur ['ex', 'is', 'ti', 'ma', 'ba', 'tur']
qui ['qui']
ita ['i', 'ta']
laudabatur ['lau', 'da', 'ba', 'tur']
mercatorem ['mer', 'ca', 'to', 'rem']
autem ['au', 'tem']
strenuum ['stre', 'nu', 'um']
studiosum ['stu', 'di', 'o', 'sum']
-que ['-que']
rei ['rei']
quaerendae ['quae', 'ren', 'dae']
existimo ['ex', 'is', 'ti', 'mo']
verum ['ve', 'rum']
ut ['ut']
supra ['su', 'pra']
dixi ['di', 'xi']
periculosum ['pe', 'ri', 'cu', 'lo', 'sum']
et ['et']
calamitosum ['ca', 'la', 'mi', 'to', 'sum']
at ['at']
ex ['ex']
agricolis ['a', 'gri', 'co', 'lis']
et ['et']
viri ['vi', 'ri']
fortissimi ['for', 'tis', 'si', 'mi']
et ['et']
milites ['mi', 'li', 'tes']
strenuissimi ['stre', 'nu', 'is', 'si', 'mi']
gignuntur ['gig', 'nun', 'tur']
maxime ['ma', 'xi', 'me']
-que ['-que']
pius ['pi', 'us']
quaestus ['quaes', 'tus']
stabilissimus ['sta', 'bi', 'lis', 'si', 'mus']
-que ['-que']
consequitur ['con', 'se', 'qui', 'tur']
minime ['mi', 'ni', 'me']
-que ['-que']
invidiosus ['in', 'vi', 'di', 'o', 'sus']
minime ['mi', 'ni', 'me']
-que ['-que']
male ['ma', 'le']
cogitantes ['co', 'gi', 'tan', 'tes']
sunt ['sunt']
qui ['qui']
in ['in']
eo ['e', 'o']
studio ['stu', 'di', 'o']
occupati ['oc', 'cu', 'pa', 'ti']
sunt ['sunt']
nunc ['nunc']
ut ['ut']
ad ['ad']
rem ['rem']
redeam ['re', 'de', 'am']
quod ['quod']
promisi ['pro', 'mi', 'si']
institutum ['in', 'sti', 'tu', 'tum']
principium ['prin', 'ci', 'pi', 'um']
hoc ['hoc']
erit ['e', 'rit']
###Markdown
ProsodyThis is a two-step process: first find the long vowels, then scan the actual meter.
###Code
# Use the macronizer
# See http://docs.cltk.org/en/latest/latin.html#macronizer
from cltk.prosody.latin.macronizer import Macronizer
macronizer = Macronizer('tag_ngram_123_backoff')
text = 'Quo usque tandem, O Catilina, abutere nostra patientia?'
scanned_text = macronizer.macronize_text(text)
# Use the scanner
# See http://docs.cltk.org/en/latest/latin.html#prosody-scanning
from cltk.prosody.latin.scanner import Scansion
scanner = Scansion()
prose_text = macronizer.macronize_tags(scanned_text)
print(prose_text)
###Output
[('quō', None, 'quō'), ('usque', 'd--------', 'usque'), ('tandem', 'd--------', 'tandem'), (',', 'u--------', ','), ('ō', None, 'ō'), ('catilīnā', None, 'catilīnā'), (',', 'u--------', ','), ('abūtēre', None, 'abūtēre'), ('nostrā', None, 'nostrā'), ('patientia', 'n-s---fn-', 'patientia'), ('?', None, '?')]
|
ipynb/BDDs.ipynb | ###Markdown
A binary decision diagram is a directed acyclic graph used to represent a Boolean function.They were originally introduced by Lee, and later by Akers.In 1986, Randal Bryant introduced the reduced, ordered BDD (ROBDD).Let's take a look at some basic BDDs.
###Code
# Zero and One
%dotobjs pyeda.boolalg.bdd.BDDZERO, pyeda.boolalg.bdd.BDDONE
# Complement and Variable
%dotobjs ~a, a
###Output
_____no_output_____
###Markdown
A BDD is a full tree of Shannon cofactor expansions of the input variables from top (first variable) to bottom (last variable). This is what it would look like if you do not merge isomorphic sub-trees.
###Code
%%dot graph {
a [label=a,shape=circle]
b0 [label=b,shape=circle]
b1 [label=b,shape=circle]
c00 [label=c,shape=circle]
c01 [label=c,shape=circle]
c10 [label=c,shape=circle]
c11 [label=c,shape=circle]
zero000 [label=0,shape=box]
one001 [label=1,shape=box]
one010 [label=1,shape=box]
one011 [label=1,shape=box]
one100 [label=1,shape=box]
one101 [label=1,shape=box]
one110 [label=1,shape=box]
one111 [label=1,shape=box]
a -- b0 [label=0]
a -- b1 [label=1]
b0 -- c00 [label=0]
b0 -- c01 [label=1]
b1 -- c10 [label=0]
b1 -- c11 [label=1]
c00 -- zero000 [label=0]
c00 -- one001 [label=1]
c01 -- one010 [label=0]
c01 -- one011 [label=1]
c10 -- one100 [label=0]
c10 -- one101 [label=1]
c11 -- one110 [label=0]
c11 -- one111 [label=1]
}
###Output
_____no_output_____
###Markdown
Join isomorphic `1` nodes:
###Code
%%dot graph {
a [label=a,shape=circle]
b0 [label=b,shape=circle]
b1 [label=b,shape=circle]
c00 [label=c,shape=circle]
c01 [label=c,shape=circle]
c10 [label=c,shape=circle]
c11 [label=c,shape=circle]
zero [label=0,shape=box]
one [label=1,shape=box]
a -- b0 [label=0,style=dashed]
a -- b1 [label=1]
b0 -- c00 [label=0,style=dashed]
b0 -- c01 [label=1]
b1 -- c10 [label=0,style=dashed]
b1 -- c11 [label=1]
c00 -- zero [label=0,style=dashed]
c00 -- one [label=1]
c01 -- one [label=0,style=dashed]
c01 -- one [label=1]
c10 -- one [label=0,style=dashed]
c10 -- one [label=1]
c11 -- one [label=0,style=dashed]
c11 -- one [label=1]
}
###Output
_____no_output_____
###Markdown
Join isomorphic `c` nodes:
###Code
%%dot graph {
a [label=a,shape=circle]
b0 [label=b,shape=circle]
b1 [label=b,shape=circle]
c00 [label=c,shape=circle]
zero [label=0,shape=box]
one [label=1,shape=box]
a -- b0 [label=0,style=dashed]
a -- b1 [label=1]
b0 -- c00 [label=0,style=dashed]
b0 -- one [label=1]
b1 -- one [label=0,style=dashed]
b1 -- one [label=1]
c00 -- zero [label=0,style=dashed]
c00 -- one [label=1]
}
###Output
_____no_output_____
###Markdown
Join isomorphic `b` nodes:
###Code
%%dot graph {
a [label=a,shape=circle]
b0 [label=b,shape=circle]
c00 [label=c,shape=circle]
zero [label=0,shape=box]
one [label=1,shape=box]
a -- b0 [label=0,style=dashed]
a -- one [label=1]
b0 -- c00 [label=0,style=dashed]
b0 -- one [label=1]
c00 -- zero [label=0,style=dashed]
c00 -- one [label=1]
}
###Output
_____no_output_____
###Markdown
Some examples:
###Code
%dotobj a | b | c
%dotobj a & b & c
%dotobj a ^ b ^ c
# Equal-3
%dotobj ~a & ~b & ~c | a & b & c
# Majority-3
%dotobj a & b | a & c | b & c
# OneHot-3
%dotobj (~a | ~b) & (~a | ~c) & (~b | ~c) & (a | b | c)
###Output
_____no_output_____
###Markdown
BDDs are a *canonical* form. Given an identical ordering of input variables, equivalent functions will always produce identical BDDs. This makes testing for SAT and UNSAT trivial. A function is SAT if its BDD is not $0$, and it is UNSAT if its BDD is $0$ (likewise, it is a tautology exactly when its BDD is $1$).
###Code
# A full minterm cover is unity.
~a & ~b | ~a & b | a & ~b | a & b
# a full maxterm cover is empty
(~a | ~b) & (~a | b) & (a | ~b) & (a | b)
###Output
_____no_output_____
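###Markdown
Because every unsatisfiable function reduces to the same zero node, a satisfiability check boils down to an identity comparison. A small sketch using only the operations shown above:
###Code
f = a & ~a              # an obviously unsatisfiable function
g = (~a | ~b) & a & b   # another one
f is g                  # True: both collapse to the zero BDD
###Output
_____no_output_____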
###Markdown
Formal equivalence checking is also trivial.You can test whether two BDDs are equivalent by using the `equivalent` method,or the Python `is` operator.
###Code
F1 = a ^ b ^ c
F2 = ~a & ~b & c | ~a & b & ~c | a & ~b & ~c | a & b & c
F1.equivalent(F2)
F1 is F2
###Output
_____no_output_____
###Markdown
The downside of BDDs is memory usage.The size of some functions is heavily dependent on the ordering of the input variables,but determining an optimal ordering is known to be a hard problem.Certain functions,no matter how cleverly you order their input variables,will result in an exponentially-sized graph.One example is multiplication.
###Code
X = bddvars('x', 6)
%dotobj X[0] & X[1] | X[2] & X[3] | X[4] & X[5]
%dotobj X[0] & X[3] | X[1] & X[4] | X[2] & X[5]
###Output
_____no_output_____ |
examples/ex6-seir.ipynb | ###Markdown
The SEIR modelThe SEIR model of epidemiology partitions the population into four compartments: susceptibles, S, who can catch the disease; exposed, E, who have caught the infection but are not yet infectious; infectives, I, who have already caught the disease and infect susceptibles; and recovered individuals, R. Since the disease is assumed not to be fatal, the sum $N=S+E+I+R$ remains constant. The rate at which the susceptibles get infected is $$\lambda(t) = \frac{\beta I}{N}$$where the parameter $\beta$ is the probability of infection on contact. The infected individuals recover from the disease at a rate $\gamma$. Then, the ordinary differential equations of the SEIR model are\begin{align}\dot S &= -\lambda(t)S \\\dot E &= \lambda(t)S - \gamma_E E \\\dot I &= \gamma_E E - \gamma I \\\dot R &= \gamma I \end{align}Here $1/\gamma_E$ can be interpreted as the average incubation period. This example integrates the above equations to obtain what is called the **epidemic curve**: a plot of the number of susceptibles and infectives as a function of time.
###Code
%%capture
## compile PyRoss for this notebook
import os
owd = os.getcwd()
os.chdir('../')
%run setup.py install
os.chdir(owd)
%matplotlib inline
import numpy as np
import pyross
import matplotlib.pyplot as plt
#from matplotlib import rc; rc('text', usetex=True)
M = 1 # the SIR model has no age structure
Ni = 1000*np.ones(M) # so there is only one age group
N = np.sum(Ni) # and the total population is the size of this age group
alpha = 0 # fraction of asymptomatic infectives
beta = 0.2 # infection rate
gIa = 0.1 # recovery rate of asymptomatic infectives
gIs = 0.1 # recovery rate of symptomatic infectives
gE = 0.04 # recovery rate of E
fsa = 1 # the self-isolation parameter
E0 = np.array([0])
Ia0 = np.array([0]) # the SIR model has only one kind of infective
Is0 = np.array([1]) # we take these to be symptomatic
R0 = np.array([0]) # and assume there are no recovered individuals initially
S0 = N-(Ia0+Is0+R0+E0) # so that the initial susceptibles are obtained from S + Ia + Is + R = N
# there is no contact structure
def contactMatrix(t):
return np.identity(M)
# duration of simulation and data file
Tf = 500; Nt=160;
# instantiate model
parameters = {'alpha':alpha, 'beta':beta, 'gIa':gIa, 'gIs':gIs, 'gE':gE, 'fsa':fsa}
model = pyross.models.SEIR(parameters, M, Ni)
# simulate model
data=model.simulate(S0, E0, Ia0, Is0, contactMatrix, Tf, Nt)
# plot the data and obtain the epidemic curve
S = data['X'][:,0].flatten()
E = data['X'][:,1].flatten()
Ia = data['X'][:,2].flatten()
Is = data['X'][:,3].flatten()
t = data['t']
fig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k')
plt.rcParams.update({'font.size': 22})
plt.fill_between(t, 0, S/N, color="#348ABD", alpha=0.3)
plt.plot(t, S/N, '-', color="#348ABD", label='$S$', lw=4)
plt.plot(t, E/N, '-', color="green", label='$E$', lw=4)
plt.fill_between(t, 0, E/N, color='green', alpha=0.3)
plt.fill_between(t, 0, Is/N, color='#A60628', alpha=0.3)
plt.plot(t, Is/N, '-', color='#A60628', label='$I$', lw=4)
R=N-S-Ia-Is; plt.fill_between(t, 0, R/N, color="dimgrey", alpha=0.3)
plt.plot(t, R/N, '-', color="dimgrey", label='$R$', lw=4)
plt.legend(fontsize=26); plt.grid()
plt.autoscale(enable=True, axis='x', tight=True)
###Output
_____no_output_____ |
hw5/[Bosung_Yang]5_week.ipynb | ###Markdown
Preprocessing (1): Convert the RGB (or BGR) image to HSV. Since H (Hue) in HSV carries the color information, discard the data when the hue falls within the yellow or green range.
###Code
read_img = cv2.imread('/drive/My Drive/DASH/HW5/data/PerspectiveImages/강원11바19.jpg')
hsv = cv2.cvtColor(read_img,cv2.COLOR_BGR2HSV)
cv2_imshow(read_img)
cv2_imshow(hsv)
h,s,v = cv2.split(hsv)
cv2_imshow(h)
h = cv2.inRange(h,22,42)
cv2_imshow(h)
print(h.mean())
yellow_region = cv2.bitwise_and(hsv,hsv,mask=h)
yellow_region = cv2.cvtColor(yellow_region,cv2.COLOR_HSV2BGR)
cv2_imshow(yellow_region)
read_img = cv2.imread('/drive/My Drive/DASH/HW5/data/PerspectiveImages/01두6167.jpg')
cv2_imshow(read_img)
hsv = cv2.cvtColor(read_img,cv2.COLOR_BGR2HSV)
cv2_imshow(read_img)
cv2_imshow(hsv)
h,s,v = cv2.split(hsv)
cv2_imshow(h)
h = cv2.inRange(h,22,42)
yellow_region = cv2.bitwise_and(hsv,hsv,mask=h)
yellow_region = cv2.cvtColor(yellow_region,cv2.COLOR_HSV2BGR)
cv2_imshow(yellow_region)
read_img = cv2.imread('/drive/My Drive/DASH/HW5/data/PerspectiveImages/서울36마47.jpg')
hsv = cv2.cvtColor(read_img,cv2.COLOR_BGR2HSV)
cv2_imshow(read_img)
cv2_imshow(hsv)
h,s,v = cv2.split(hsv)
cv2_imshow(h)
h = cv2.inRange(h,45,75)
cv2_imshow(h)
yellow_region = cv2.bitwise_and(hsv,hsv,mask=h)
yellow_region = cv2.cvtColor(yellow_region,cv2.COLOR_HSV2BGR)
cv2_imshow(yellow_region)
for file in images:
read_img = cv2.imread(file)
hsv = cv2.cvtColor(read_img,cv2.COLOR_BGR2HSV)
h,s,v = cv2.split(hsv)
is_yellow = (cv2.inRange(h,22,42).mean() > 50)
is_green = (cv2.inRange(h,45,75).mean() > 50)
if is_yellow or is_green:
continue
file_name = file.split('/')[7]
cv2.imwrite('/drive/My Drive/DASH/HW5/data/preprocessed_data/'+file_name,read_img)
###Output
_____no_output_____
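###Markdown
The filtering condition above can be wrapped in a small helper so the same hue test is reusable elsewhere. A minimal sketch; the hue ranges and the mean threshold of 50 are simply the values used in the loop above:
###Code
def is_colored_plate(bgr_img, hue_ranges=((22, 42), (45, 75)), threshold=50):
    """Return True if the mean of any in-range hue mask exceeds the threshold."""
    hue = cv2.split(cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV))[0]
    return any(cv2.inRange(hue, lo, hi).mean() > threshold for lo, hi in hue_ranges)
###Output
_____no_output_____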
###Markdown
Preprocessing (2): Contour extraction with OpenCV. 1. Convert the image to grayscale. 2. Binarize the image against a threshold. 3. Extract the contour coordinates and, based on those coordinates, save each character to a file.
###Code
read_img = cv2.imread('/drive/My Drive/DASH/HW5/data/preprocessed_data/97소2474.jpg')
cv2_imshow(read_img)
read_gray = cv2.cvtColor(read_img,cv2.COLOR_BGR2GRAY)
cv2_imshow(read_gray)
thresh = cv2.threshold(read_gray,127,255,cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
cv2_imshow(thresh)
contours, _ = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
(x,y,w,h) = cv2.boundingRect(contour)
print(x,y,w,h)
cv2_imshow(read_img[y:y+h,x:x+w])
images = glob.glob(os.path.join('/drive/My Drive/DASH/HW5/data/preprocessed_data/','*'))
import random
target_folder = '/drive/My Drive/DASH/HW5/data/korean_train/'
def saveimage(folder_name,image_arr):
path = os.path.join(target_folder,folder_name)
if not os.path.exists(path):
os.makedirs(path)
p = os.path.join(path,str(random.randint(1,10000))+'.jpg')
cv2.imwrite(p,image_arr)
for file in images:
file_name = file.split('/')[7]
image = cv2.imread(file)
gray_scale = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray_scale,0,255,cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
contours, _ = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
(x,y,w,h) = cv2.boundingRect(contour)
if x >= 15 and x <=25:
img = image[10:60,x:x+20]
folder_name = file_name[0]
#saveimage(folder_name,img)
elif x >=35 and x<=45:
img = image[10:60,x:x+20]
folder_name = file_name[1]
#saveimage(folder_name,img)
elif x >=55 and x <=65:
img = image[10:60,x:x+20]
folder_name = file_name[2]+file_name[3]
saveimage(folder_name,img)
elif x >=95 and x <=105:
img = image[10:60,x:x+20]
folder_name = file_name[4]
#saveimage(folder_name,img)
elif x >= 114 and x <= 124:
img = image[10:60,x:x+20]
folder_name = file_name[5]
#saveimage(folder_name,img)
elif x >= 130 and x <= 140:
img = image[10:60,x:x+20]
folder_name = file_name[6]
#saveimage(folder_name,img)
elif x >= 155 and x <= 165:
img = image[10:60,x:x+20]
folder_name = file_name[7]
#saveimage(folder_name,img)
###Output
_____no_output_____
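###Markdown
One detail worth noting: the loop above selects characters through hard-coded x ranges because the plate layout is fixed. If the layout varied, a common alternative is to sort the bounding boxes left-to-right before cropping. A sketch (illustrative only, using the variables left over from the last loop iteration):
###Code
# Sort character bounding boxes by their x coordinate, then crop in order
boxes = sorted((cv2.boundingRect(c) for c in contours), key=lambda box: box[0])
for x, y, w, h in boxes:
    crop = image[y:y + h, x:x + w]
###Output
_____no_output_____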
###Markdown
Modeling. 1. Use a simple CNN model. 2. Train separate models for Korean characters and digits. 3. Extract the character positions from the input license plate and predict each character. 4. Combine the per-character predictions to predict the plate number.
###Code
train_datagen = keras.preprocessing.image.ImageDataGenerator(rescale=1./255)
training_set = train_datagen.flow_from_directory('/drive/My Drive/DASH/HW5/data/number_train_3/',target_size=(50,20),class_mode = 'categorical',color_mode='rgb')
training_set_k = train_datagen.flow_from_directory('/drive/My Drive/DASH/HW5/data/korean_train/',target_size=(50,20),class_mode = 'categorical',color_mode='rgb')
model = keras.Sequential()
model.add(keras.layers.Conv2D(32,(5,5),input_shape=(50,20,3),activation = 'relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2,2),strides=(2,2)))
model.add(keras.layers.Conv2D(32,(5,5),activation = 'relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2,2),strides=(2,2)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(500,activation = 'relu'))
model.add(keras.layers.Dense(10,activation = 'softmax'))
model.compile(optimizer='adam',loss = 'categorical_crossentropy',metrics = ['accuracy'])
model.fit(training_set,epochs=30)
model_k = keras.Sequential()
model_k.add(keras.layers.Conv2D(32,(5,5),input_shape=(50,20,3),activation = 'relu'))
model_k.add(keras.layers.MaxPooling2D(pool_size=(2,2),strides=(2,2)))
model_k.add(keras.layers.Conv2D(32,(5,5),activation = 'relu'))
model_k.add(keras.layers.MaxPooling2D(pool_size=(2,2),strides=(2,2)))
model_k.add(keras.layers.Flatten())
model_k.add(keras.layers.Dense(500,activation = 'relu'))
model_k.add(keras.layers.Dense(34,activation = 'softmax'))
model_k.compile(optimizer='adam',loss = 'categorical_crossentropy',metrics = ['accuracy'])
model_k.fit(training_set_k,epochs=20)
d = training_set.class_indices
dk = training_set_k.class_indices
dict_class = {v:k for k,v in d.items()}
dict_class_k = {v:k for k,v in dk.items()}
files = glob.glob(os.path.join('/drive/My Drive/DASH/HW5/data/test/','*'))
for file in files:
image = cv2.imread(file)
gray_scale = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray_scale,0,255,cv2.THRESH_BINARY_INV|cv2.THRESH_OTSU)[1]
contours, _ = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
(x,y,w,h) = cv2.boundingRect(contour)
if x >= 15 and x <=25:
img = image[10:60,x:x+20]
img = np.expand_dims(img,axis=0)
pred = model.predict_classes(img)
letter1 = dict_class[pred[0]]
elif x >=35 and x<=45:
img = image[10:60,x:x+20]
img = np.expand_dims(img,axis=0)
pred = model.predict_classes(img)
letter2 = dict_class[pred[0]]
elif x >=55 and x <=65:
img = image[10:60,x:x+20]
img = np.expand_dims(img,axis=0)
pred = model_k.predict_classes(img)
letter3 = dict_class_k[pred[0]]
elif x >=100 and x <=110:
img = image[10:60,x:x+20]
img = np.expand_dims(img,axis=0)
pred = model.predict_classes(img)
letter4 = dict_class[pred[0]]
elif x >= 120 and x <= 130:
img = image[10:60,x:x+20]
img = np.expand_dims(img,axis=0)
pred = model.predict_classes(img)
letter5 = dict_class[pred[0]]
elif x >= 140 and x <= 150:
img = image[10:60,x:x+20]
img = np.expand_dims(img,axis=0)
pred = model.predict_classes(img)
letter6 = dict_class[pred[0]]
elif x >= 160 and x <= 170:
img = image[10:60,x:x+20]
img = np.expand_dims(img,axis=0)
pred = model.predict_classes(img)
letter7 = dict_class[pred[0]]
predict_letter = letter1+letter2+letter3+letter4+letter5+letter6+letter7
cv2_imshow(image)
print(predict_letter)
###Output
_____no_output_____ |
Lectures/Lecture1_Datahub/Intro_to_Career_Exploration.ipynb | ###Markdown
Lecture 1: Introduction to Datahub and Jupyter 20 Feb 2021 Table Of Contents* [Introduction](section1)* [What will we learn?](section2)* [Homework and Submissions](section3) Hosted by and maintained by the [Student Association for Applied Statistics (SAAS)](https://saas.berkeley.edu).  IntroductionHello! Welcome to Career Exploration Fall 2020!This is just an introductory notebook for practice working with datahub and discussing the semester schedule. Datahub is a fantastic resource as it allows us to utilize python and common packages without needing to install a bunch of stuff and having that break.Run the code chunk below by clicking on it and pressing `shift enter` (or `shift return`) on mac. These are all common packages we will import throughout the semesters.
###Code
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Ordinarily you would need to install python as well as all the packages and then sometimes stuff doesn't work. This usually causes many problems with differing versions and installation issues. Datahub bypasses all of these by providing a environment you can use online! Steps to download from Slack and unzip on Datahub. 1. Make sure you are in the slack workspace, navigate to the **career-exploration-spring2020** channel 2. Download the LectureX.zip file1. Open datahub at http://datahub.berkeley.edu/ and log in with your berkeley account2. Click upload at the top right3. Upload LectureX.zip (X represents the lecture number, for example Lecture1.zip)4. Select 'new' at the top right of the datahub screen, and select terminal from the drop down5. Enter "unzip LectureX.zip" * `unzip LectureX.zip`6. Open the LectureX folder and open the ipynb file inside the LectureX folderOur main source of file sharing will be uploading to slack. Remember to upload the entire zip file to Datahub and unzip. What will we learn?This semester will go over many topics on a relatively high level. We begin with introducing jupyter notebooks (what you are reading from right now!) and use these to teach most of our lectures. Jupyter notebooks are incredibly useful as they allow you to run separate chunks of code at a time, without having to run the entire program at once.We aim to go through the following topics for the semester.DateLecture2/20L1 Logistics and Datahub2/27L2 Python3/6L3 Numpy/Pandas + Visualizations3/13L4 Data Cleaning and Exploratory Data Analysis3/20L5 Intro to Linear Algebra and Linear Regression4/3L6 Intro to Machine Learning4/10L7 Bias Variance, Regularization4/17L8 Decision Trees, Random Forest, Boosting4/24L9 Neural Networks5/1L10 Advanced TopicsAs you can see, the semester is packed full of various concepts, from statistical ideas such as bias and variance to machine learning concepts like neural networks and decision trees.The semester is structured so that you will be able to accumulate foundational skills, learn more advanced concepts, and apply them to a final Kaggle competition. The course material is being written by our lovely Education committee! You will get to meet them over the course of the semester as we are rotating lecturers.This schedule is quite ambitious and fast paced as it aims to cover a very large amount of material. **Please let us know if you ever have feedback, have questions, or you are just looking for some more help! We are all happy to help out. You can always reach us over slack.****This material is hard!**We also hold many workshops and socials over the semester! We hope that you are all able to come participate and have a great time! Project Checkpoint Submissions This semester we are going to split up the Final Project into several checkpoints as opposed to having weekly homework assignments. This helps create a fun and low stress way of staying on top of the material! In the cells below, write your name, major, a fun fact about yourself, a short game, and a quick survey. Make sure to hit Save (File > Save and Checkpoint) or Ctrl/Command-S after you've finished writing. **Name**: **Major**: **Fun Fact**: **In the cell below, write a number 1 to 100 inclusive.** 0 Run the cell below to make sure everything runs fine.
###Code
func = lambda x: 4*x+2
samples = 100
data_range = [0, 1]
x = np.random.uniform(data_range[0], data_range[1], (samples, 1))
y = func(x) + np.random.normal(scale=3, size=(samples, 1))
model = LinearRegression().fit(x, y)
predictions = model.predict(np.array(data_range).reshape(-1, 1))
fig, ax = plt.subplots(figsize=(12, 8))
plt.scatter(x, y)
plt.plot(data_range, list(map(func, data_range)), label="Truth")
plt.plot(data_range, predictions, label="Prediction")
plt.xlabel("X")
plt.ylabel("Y")
plt.title("Linear regression")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Lecture 1: Introduction to Datahub and Jupyter 3 Oct 2020 Table Of Contents* [Introduction](section1)* [What will we learn?](section2)* [Homework and Submissions](section3) Hosted by and maintained by the [Student Association for Applied Statistics (SAAS)](https://saas.berkeley.edu).  IntroductionHello! Welcome to Career Exploration Fall 2020!This is just an introductory notebook for practice working with datahub and discussing the semester schedule. Datahub is a fantastic resource as it allows us to utilize python and common packages without needing to install a bunch of stuff and having that break.Run the code chunk below by clicking on it and pressing `shift enter` (or `shift return`) on mac. These are all common packages we will import throughout the semesters.
###Code
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Ordinarily you would need to install python as well as all the packages and then sometimes stuff doesn't work. This usually causes many problems with differing versions and installation issues. Datahub bypasses all of these by providing a environment you can use online! Steps to download from Slack and unzip on Datahub. 1. Make sure you are in the slack workspace, navigate to the **career-exploration-spring2020** channel 2. Download the LectureX.zip file1. Open datahub at http://datahub.berkeley.edu/ and log in with your berkeley account2. Click upload at the top right3. Upload LectureX.zip (X represents the lecture number, for example Lecture1.zip)4. Select 'new' at the top right of the datahub screen, and select terminal from the drop down5. Enter "unzip LectureX.zip" * `unzip LectureX.zip`6. Open the LectureX folder and open the ipynb file inside the LectureX folderOur main source of file sharing will be uploading to slack. Remember to upload the entire zip file to Datahub and unzip. What will we learn?This semester will go over many topics on a relatively high level. We begin with introducing jupyter notebooks (what you are reading from right now!) and use these to teach most of our lectures. Jupyter notebooks are incredibly useful as they allow you to run separate chunks of code at a time, without having to run the entire program at once.We aim to go through the following topics for the semester.DateLecture10/3L1 Logistics and Datahub10/10L2 Python10/17L3 Numpy/Pandas; L4 Visualizations10/24L5 Intro to Linear Algebra and Linear Regression10/31L6 Intro to Probability11/7L7 Intro to Machine Learning11/14L8 Bias Variance &Regularization11/21L9 Kaggle 1 Data Cleaning and Exploratory Data Analysis11/28THANKSGIVING12/5L10 Kaggle 2 Decision Trees; Random Forest; BoostingAs you can see, the semester is packed full of various concepts, from statistical ideas such as bias and variance to machine learning concepts like neural networks and decision trees.The semester is structured so that you will be able to accumulate foundational skills, learn more advanced concepts, and apply them to a final Kaggle competition. The course material is being written by our lovely Education committee! You will get to meet them over the course of the semester as we are rotating lecturers.This schedule is quite ambitious and fast paced as it aims to cover a very large amount of material. **Please let us know if you ever have feedback, have questions, or you are just looking for some more help! We are all happy to help out. You can always reach us over slack.****This material is hard!**We also hold many workshops and socials over the semester! We hope that you are all able to come participate and have a great time! Project Checkpoint Submissions This semester we are going to split up the Intermediate and Final Projects into weekly checkpoints as opposed to having weekly homework assignments. This helps create a fun and low stress way of staying on top of the material! We'll also be tracking your progress along the way! At the bottom of every Jupyter Notebook you'll work with this semester, there will be a cell like the one below. When you run it (shift-enter), it will submit what you've written in specific cells in the notebook (could be exercises, responses, polls) so we can give you feedback on what you've learned so far! In the cells below, write your name, major, a fun fact about yourself, a short game, and a quick survey. 
Make sure to hit Save (File > Save and Checkpoint) or Ctrl/Command-S after you've finished writing. **Make sure you do not delete the cells for submission or you will need to redownload from slack!** **Name**: Ronnie **Major**: Computer Science **Fun Fact**: I MISS SHENGKEE :( **In the cell below, write a number 1 to 100 inclusive.** 0 **Save your notebook and then submit by pressing shift enter on the cell below!**
###Code
from submit import create_and_submit
create_and_submit()
###Output
_____no_output_____
###Markdown
Run the cell below to make sure everything runs fine.
###Code
func = lambda x: 4*x+2
samples = 100
data_range = [0, 1]
x = np.random.uniform(data_range[0], data_range[1], (samples, 1))
y = func(x) + np.random.normal(scale=3, size=(samples, 1))
model = LinearRegression().fit(x, y)
predictions = model.predict(np.array(data_range).reshape(-1, 1))
fig, ax = plt.subplots(figsize=(12, 8))
plt.scatter(x, y)
plt.plot(data_range, list(map(func, data_range)), label="Truth")
plt.plot(data_range, predictions, label="Prediction")
plt.xlabel("X")
plt.ylabel("Y")
plt.title("Linear regression")
plt.legend()
plt.show()
###Output
_____no_output_____ |
exercises/.ipynb_checkpoints/Hypothesis Testing-checkpoint.ipynb | ###Markdown
**Hypothesis Testing*****Miguel Ángel Vélez Guerra*** **Table of contents**
###Code
%%javascript
// Script to generate table of contents
$.getScript('../resources/table_of_contents.js')
###Output
_____no_output_____
###Markdown
Imports
###Code
#-------Importing from other folder------#
import sys
sys.path.insert(0, "../resources/")
import mstats as ms
#-----------Miguel's statistics----------#
import scipy.stats as ss
import numpy as np
###Output
_____no_output_____
###Markdown
1. Two-tailed hypothesis tests for the population mean with large samples. Suppose a bottler wants to test the hypothesis that the population mean is 16 ounces and selects a 5% significance level, so the hypothesized value is μ = 16. The bottler selects a sample of n = 50 bottles with a mean of 16.357 ounces and a standard deviation of 0.866 ounces.
###Code
mu_embotellador = 16 # Null-hypothesis value of the population mean
x__embotellador = 16.357 # Sample mean
s_embotellador = 0.866 # Sample standard deviation
n_embotellador = 50 # Sample size
alpha_embotellador = 0.05 # Significance level
###Output
_____no_output_____
###Markdown
**Step 1**: State the hypotheses **Ho:** μ = 16 **Ha:** μ ≠ 16 **Step 2**: Significance level
###Code
alpha_embotellador
###Output
_____no_output_____
###Markdown
**Step 3**: Critical values
###Code
crit_embotellador = ms.hypothesis.crit_val_norm(alpha_embotellador, 'two') # critical values
crit_embotellador
###Output
_____no_output_____
###Markdown
**Step 4**: Test statistic (Z)
###Code
z_embotellador = ms.generals.get_z(x__embotellador, mu_embotellador, s_embotellador, n=n_embotellador)
z_embotellador
###Output
_____no_output_____
###Markdown
**Step 5**: Decision
###Code
ms.graph.hypothesis(ss.norm, z_embotellador, alpha_embotellador, "two")
ms.hypothesis.reject_h0(crit_embotellador, z_embotellador, 'two')
###Output
_____no_output_____
###Markdown
**The null hypothesis IS rejected**, since the test statistic *2.91497830119627* falls beyond the critical values *-1.959963984540054, 1.959963984540054*. **Step 6**: Conclusion At the *5%* significance level we can state that the average weight of the bottles is **different** from 16 ounces.

2. One-tailed hypothesis tests for the population mean with large samples

In a briefing for a corporate office, the manager of the Embassy Suites hotel in Atlanta reported that the average number of rooms rented per night is at least 212, that is, μ > 212. One of the operating officers believes this figure may be somewhat inflated. A sample of 150 nights yields a mean of 201.3 rooms and a standard deviation of 45.5 rooms. If the results suggest the manager has "inflated" his report, he will be severely reprimanded. Using a 1% significance level, what is the manager's fate?
###Code
mu_habitaciones = 212 # null-hypothesis value of the population mean
x__habitaciones = 201.3 # sample mean
s_habitaciones = 45.5 # sample standard deviation
n_habitaciones = 150 # sample size
alpha_habitaciones = 0.01 # significance level
###Output
_____no_output_____
###Markdown
**Step 1**: State the hypotheses **Ho:** μ = 212 **Ha:** μ < 212 **Step 2**: Significance level
###Code
alpha_habitaciones
###Output
_____no_output_____
###Markdown
**Step 3**: Critical values
###Code
crit_habitaciones = ms.hypothesis.crit_val_norm(alpha_habitaciones, 'left')
crit_habitaciones
###Output
_____no_output_____
###Markdown
**Step 4**: Test statistic (Z)
###Code
z_habitaciones = ms.generals.get_z(x__habitaciones, mu_habitaciones, s_habitaciones, n=n_habitaciones)
z_habitaciones
###Output
_____no_output_____
###Markdown
**Step 5**: Decision
###Code
ms.hypothesis.reject_h0(crit_habitaciones, z_habitaciones, 'left')
###Output
_____no_output_____
###Markdown
**The null hypothesis IS rejected**, since the test statistic *-2.8801692579977995* is smaller than the critical value *-2.3263478740408408*. **Step 6**: Conclusion At the *1%* significance level we can state that the average number of rooms rented per night is **less** than 212, so we conclude that the manager will be severely reprimanded for "inflating" his report.

3. p-value for a one-tailed test

Chuck Cash is the head of personnel at a company. From a brief review of the employee records, Chuck believes the employees hold an average of more than 31000 USD in their pension accounts. Sampling 100 employees, Chuck finds a mean of 31366 with s = 1894. Suppose Chuck wants to compute the p-value associated with this right-tailed test.
###Code
mu_empleados = 31000 # null-hypothesis value of the population mean
n_empleados = 100 # sample size
x__empleados = 31366 # sample mean
s_empleados = 1894 # sample standard deviation
z_empleados = ms.generals.get_z(x__empleados, mu_empleados, s_empleados, n=n_empleados)
z_empleados
p_empleados = ms.hypothesis.get_p(z_empleados, 'right')
p_empleados
###Output
_____no_output_____
###Markdown
**Answer:** The smallest significance level at which Chuck can claim that the employees hold an average of **more than** 31000 USD in their pension accounts is **2.66%**.

4. p-value for a two-tailed test

Chuck Cash also suspects that the employees invest an average of 100 USD per month in the company's stock option plan. Sampling 100 employees, Chuck finds a mean of 106.81 USD with a standard deviation of 36.60 USD. He now wants to determine the p-value associated with this hypothesis test.
###Code
mu_acciones = 100 # null-hypothesis value of the population mean
n_acciones = 100 # sample size
x__acciones = 106.81 # sample mean
s_acciones = 36.6 # sample standard deviation
z_acciones = ms.generals.get_z(x__acciones, mu_acciones, s_acciones, n=n_acciones)
z_acciones
p_acciones = ms.hypothesis.get_p(z_acciones, 'two')
p_acciones
###Output
_____no_output_____
###Markdown
**Answer:** The smallest significance level Chuck could use to claim that the employees invest a monthly average **different** from 100 USD in the company's stock option plan is **6.27%**.

5. Two-tailed hypothesis tests for the population mean with small samples

Students in a statistics class at State University question the claim that McDonald's puts 0.25 pounds of beef in its "Quarter Pounder" hamburgers. Some students argue that more is actually used, while others insist it is less. To test the advertising claim that the average weight is 0.25 pounds, each student buys a Quarter Pounder and brings it to class, where it is weighed on a scale supplied by the instructor. The sample results are a mean of 0.22 pounds and a standard deviation of 0.09. If there are 25 students in the class, what conclusion would they reach at a 5% significance level?
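For this small-sample exercise, the t statistic and critical values can likewise be checked from the summary statistics alone; a sketch with `scipy.stats.t` (the notebook itself uses the `ms` helpers):

```python
import math
import scipy.stats as ss

x_bar, mu, s, n, alpha = 0.22, 0.25, 0.09, 25, 0.05
df = n - 1

t_stat = (x_bar - mu) / (s / math.sqrt(n))   # ~ -1.667
t_crit = ss.t.ppf(1 - alpha / 2, df)         # ~ 2.064 with df = 24
p_value = 2 * ss.t.sf(abs(t_stat), df)

print(t_stat, (-t_crit, t_crit), p_value)    # fail to reject H0 since |t| < t_crit
```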
###Code
mu_mcd = 0.25 # null-hypothesis value of the population mean
x__mcd = 0.22 # sample mean
s_mcd = 0.09 # sample standard deviation
n_mcd = 25 # sample size
alpha_mcd = 0.05 # significance level
###Output
_____no_output_____
###Markdown
**Step 1**: State the hypotheses **Ho:** μ = 0.25 **Ha:** μ ≠ 0.25 **Step 2**: Significance level
###Code
alpha_mcd
###Output
_____no_output_____
###Markdown
**Step 3**: Critical values
###Code
df_mcd = n_mcd - 1
crit_mcd = ms.hypothesis.crit_val_t(df_mcd, alpha_mcd, "two")
crit_mcd
###Output
_____no_output_____
###Markdown
**Step 4**: Test statistic (T)
###Code
t_mcd = ms.generals.get_t(x__mcd, mu_mcd, s_mcd, n_mcd)
t_mcd
###Output
_____no_output_____
###Markdown
**Step 5**: Decision
###Code
ms.hypothesis.reject_h0(crit_mcd, t_mcd, 'two')
###Output
_____no_output_____
###Markdown
**The null hypothesis is NOT rejected**, since the test statistic *-1.6666666666666667* does not fall beyond the critical values *-2.0638985616280205, 2.0638985616280205*. **Step 6**: Conclusion Given the *5%* significance level, we conclude that there is not enough evidence to deny that McDonald's puts **exactly** 0.25 pounds of beef in its "Quarter Pounder" hamburgers.

6. One-tailed hypothesis tests for the population mean with small samples

The American Kennel Club (AKC) reported in its publication for American dog owners (April 1997) that one-year-old cocker spaniels should weigh "a little over 40 pounds if they have received proper nutrition". To test the hypothesis, Hill's, a producer of diet dog food, weighs 15 one-year-old cockers and finds a mean of 41.17 pounds, with s = 4.71 pounds. Use α = 1%.
###Code
mu_perros = 40 # null-hypothesis value of the population mean
n_perros = 15 # sample size
x__perros = 41.17 # sample mean
s_perros = 4.71 # sample standard deviation
alpha_perros = 0.01 # significance level
###Output
_____no_output_____
###Markdown
**Step 1**: State the hypotheses **Ho:** μ = 40 **Ha:** μ > 40 **Step 2**: Significance level
###Code
alpha_perros
###Output
_____no_output_____
###Markdown
**Step 3**: Critical values
###Code
df_perros = n_perros - 1
crit_perros = ms.hypothesis.crit_val_t(df_perros, alpha_perros, 'right')
crit_perros
###Output
_____no_output_____
###Markdown
**Step 4**: Test statistic (T)
###Code
t_perros = ms.generals.get_t(x__perros, mu_perros, s_perros, n_perros)
t_perros
###Output
_____no_output_____
###Markdown
**Step 5**: Decision
###Code
ms.hypothesis.reject_h0(crit_perros, t_perros, 'right')
###Output
_____no_output_____
###Markdown
**The null hypothesis is NOT rejected**, since the test statistic *0.9620786656184043* is not larger than the critical value *2.624494067560231*. **Step 6**: Conclusion At the *1%* significance level, we can state that the average weight of one-year-old cocker spaniels is **less than or equal to** 40 pounds.

7. Two-tailed hypothesis tests for the population proportion

As director of marketing operations for a large retail chain, you believe that 60% of the firm's customers are college graduates. You intend to set an important pricing policy based on this proportion. A sample of 800 customers reveals that 492 have college degrees. At the 5% level, what can you conclude about the proportion of all customers who are college graduates?
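The one-sample proportion z statistic for this exercise can be reproduced directly; a sketch (the notebook obtains the same value below through its `ms.generals.get_z_prop` helper):

```python
import math
import scipy.stats as ss

pi0, n, alpha = 0.6, 800, 0.05
p_hat = 492 / 800

z = (p_hat - pi0) / math.sqrt(pi0 * (1 - pi0) / n)   # ~0.866
z_crit = ss.norm.ppf(1 - alpha / 2)                  # ~1.96

print(z, (-z_crit, z_crit))                          # fail to reject H0
```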
###Code
pi_graduados = 0.6 # null-hypothesis value of the population proportion
n_graduados = 800 # sample size
p_graduados = 492/800 # sample proportion
alpha_graduados = 0.05 # significance level
###Output
_____no_output_____
###Markdown
**Step 1**: State the hypotheses **Ho:** π = 0.6 **Ha:** π ≠ 0.6 **Step 2**: Significance level
###Code
alpha_graduados
###Output
_____no_output_____
###Markdown
**Step 3**: Critical values
###Code
crit_graduados = ms.hypothesis.crit_val_norm(alpha_graduados, 'two')
crit_graduados
###Output
_____no_output_____
###Markdown
**Step 4**: Test statistic (Z)
###Code
z_graduados = ms.generals.get_z_prop(p_graduados, pi_graduados, n_graduados)
z_graduados
###Output
_____no_output_____
###Markdown
**Step 5**: Decision
###Code
ms.hypothesis.reject_h0(crit_graduados, z_graduados, 'two')
###Output
_____no_output_____
###Markdown
**The null hypothesis is NOT rejected**, since the test statistic *0.8660254037844394* does not fall beyond the critical values *-1.959963984540054, 1.959963984540054*. **Step 6**: Conclusion At the *5%* significance level, we conclude that the proportion of all customers who are college graduates is **not different** from 0.6.

8. One-tailed hypothesis tests for the population proportion

The CEO of a large manufacturing firm must ensure that at least 75% of the employees have completed an advanced training course. Of 1200 randomly selected employees, 875 have done so. The CEO records this attendance to test the hypothesis and compute the p-value. At a 5% significance level, what conclusions do you include in your report?
###Code
pi_curso = 0.75 # null-hypothesis value of the population proportion
n_curso = 1200 # sample size
p_curso = 875/1200 # sample proportion
alpha_curso = 0.05 # significance level
###Output
_____no_output_____
###Markdown
**Step 1**: State the hypotheses **Ho:** π = 0.75 **Ha:** π < 0.75 **Step 2**: Significance level
###Code
alpha_curso
###Output
_____no_output_____
###Markdown
**Step 3**: Critical values
###Code
crit_curso = ms.hypothesis.crit_val_norm(alpha_curso, 'left')
crit_curso
###Output
_____no_output_____
###Markdown
**Step 4**: Test statistic (Z)
###Code
z_curso = ms.generals.get_z_prop(p_curso, pi_curso, n_curso)
z_curso
###Output
_____no_output_____
###Markdown
**Step 5**: Decision
###Code
ms.hypothesis.reject_h0(crit_curso, z_curso, 'left')
###Output
_____no_output_____
###Markdown
**The null hypothesis IS rejected**, since the test statistic *-1.6666666666666696* is smaller than the critical value *-1.6448536269514722*. **Step 6**: Conclusion At the *5%* significance level, we can state that **fewer** than *75%* of the company's employees have completed the advanced training course, so the CEO should take measures to increase the proportion of employees with the advanced training course. **p-value**
###Code
pvalue_curso = ms.hypothesis.get_p(z_curso, 'left')
pvalue_curso
###Output
_____no_output_____
###Markdown
**Answer:** The smallest significance level the CEO could use and still fail to reject the claim that **at least** *75%* of the employees have taken the training course is *4.77%*. Since the chosen significance level of *5%* is larger than this p-value, the claim that at least 75% of the employees have taken the training course can be **rejected**.

9. Hypothesis tests for the difference between two populations with large samples

Weaver Ridge Golf Course wants to see whether the average time men need to play 18 holes is different from that of women. Times are measured for 50 rounds by men and 45 by women, giving: Men: X̄ = 3.5 hours, S = 0.9 hours. Women: X̄ = 4.9 hours, S = 1.5 hours. Use a 5% significance level.
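A standalone sketch of the two-sample z statistic used in the golf-course exercise (the notebook computes it through `ms.generals.get_z_2p`):

```python
import math
import scipy.stats as ss

x1, s1, n1 = 3.5, 0.9, 50    # men
x2, s2, n2 = 4.9, 1.5, 45    # women
alpha = 0.05

z = (x1 - x2) / math.sqrt(s1**2 / n1 + s2**2 / n2)   # ~ -5.44
z_crit = ss.norm.ppf(1 - alpha / 2)

print(z, (-z_crit, z_crit))                          # reject H0
```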
###Code
n_h = 50 # sample size 1
x_h = 3.5 # sample 1 mean
s_h = 0.9 # sample 1 standard deviation
n_m = 45 # sample size 2
x_m = 4.9 # sample 2 mean
s_m = 1.5 # sample 2 standard deviation
alpha_golf = 0.05 # significance level
###Output
_____no_output_____
###Markdown
**Step 1**: State the hypotheses **Ho:** μh = μm **Ha:** μh ≠ μm **Step 2**: Significance level
###Code
alpha_golf
###Output
_____no_output_____
###Markdown
**Step 3**: Critical values
###Code
crit_golf = ms.hypothesis.crit_val_norm(alpha_golf, 'two')
crit_golf
###Output
_____no_output_____
###Markdown
**Step 4**: Test statistic (Z)
###Code
z_golf = ms.generals.get_z_2p(x_h, x_m, s_h, s_m, n_h, n_m)
z_golf
###Output
_____no_output_____
###Markdown
**Step 5**: Decision
###Code
ms.hypothesis.reject_h0(crit_golf, z_golf, 'two')
###Output
_____no_output_____
###Markdown
**The null hypothesis IS rejected**, since the test statistic *-5.4412545203553035* falls beyond the critical values *-1.959963984540054, 1.959963984540054*. **Step 6**: Conclusion At the *5%* significance level, we can state that the average time men take to play the 18 holes is **different** from that of women. Also, since the rejection happened in the left tail, we can conclude that women take **longer** on average to play the 18 holes than men ***(μ_m > μ_h)***.

10. Hypothesis tests for the difference between two populations with small samples and equal variances

**Exercise 9.2** Wage negotiations between your company and the union of its workers are about to break down. There is considerable disagreement over the average wage level of workers at the Atlanta plant and the Virginia plant. Wages were set by the old labor agreement three years ago and are based strictly on seniority. Because wages are tightly controlled by the labor contract, the variation in wages is assumed to be the same at both plants and wages are assumed to be normally distributed. However, it is felt that there is a difference between the average wage levels because of the different seniority patterns at the two plants. The labor negotiator representing management wants you to build a 98% confidence interval to estimate the difference between the average wage levels. If there is a difference in the means, adjustments must be made to bring the lower wages up to the level of the higher ones. Given the following data, what adjustments are required, if any?

----- Returning to exercise 9.2, a 98% interval estimate of the difference in average wages was *-5.09 < μ1 - μ2 < 9.15*. Suppose that, instead of a confidence-interval estimate, we had wanted to run a hypothesis test that the population means were equal.
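scipy can reproduce this pooled-variance t test from the summary statistics alone; a sketch, assuming (as the comments below state) that the std values are sample standard deviations:

```python
from scipy.stats import ttest_ind_from_stats

# Atlanta vs. Virginia plants: means, sample standard deviations, sample sizes
t_stat, p_value = ttest_ind_from_stats(
    mean1=17.53, std1=92.10**0.5, nobs1=23,
    mean2=15.50, std2=87.19**0.5, nobs2=19,
    equal_var=True,              # pooled variance, df = n1 + n2 - 2
)
print(t_stat, p_value)           # compare p_value against alpha = 0.02
```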
###Code
n_a = 23 # sample size 1
x_a = 17.53 # sample 1 mean
s_a = 92.10**0.5 # sample 1 standard deviation
n_v = 19 # sample size 2
x_v = 15.5 # sample 2 mean
s_v = 87.19**0.5 # sample 2 standard deviation
alpha_plantas = 0.02 # significance level
var_plantas = True # equal population variances?
###Output
_____no_output_____
###Markdown
**Step 1**: State the hypotheses **Ho:** μh = μm **Ha:** μh ≠ μm **Step 2**: Significance level
###Code
alpha_plantas
###Output
_____no_output_____
###Markdown
**Step 3**: Critical values
###Code
df_plantas = n_a + n_v - 2 # degrees of freedom when the population variances are equal
crit_plantas = ms.hypothesis.crit_val_t(df_plantas, alpha_plantas, 'two')
crit_plantas
###Output
_____no_output_____
###Markdown
**Step 4**: Test statistic (T)
###Code
t_plantas = ms.generals.get_t_2p(x_a, x_v, s_a, s_v, n_a, n_v, var_plantas)
t_plantas
###Output
_____no_output_____
###Markdown
**Step 5**: Decision
###Code
ms.graph.hypothesis(ss.t(df_plantas), t_plantas, alpha_plantas, 'two')
ms.hypothesis.reject_h0(crit_plantas, t_plantas, 'two')
###Output
_____no_output_____
###Markdown
**The null hypothesis is NOT rejected**, since the test statistic *0.6906455424446802* does not fall beyond the critical values *-2.4232567793348565, 2.4232567793348565*. **Step 6**: Conclusion At the *2%* significance level, we can state that the average wages of the workers at the Atlanta plant and those at the Virginia plant are **not different**, so we conclude that no adjustments are needed.

11. Hypothesis tests for the difference between two populations with small samples and unequal variances

**9.3** Acme Ltd. sells two types of rubber bumpers for baby strollers. Wear tests to measure durability revealed that 13 type 1 bumpers lasted an average of 11.3 weeks with a standard deviation of 3.5 weeks, while 10 type 2 bumpers lasted an average of 7.5 weeks with a standard deviation of 2.7 weeks. Type 1 is more expensive to make, and Acme's CEO does not want to use it unless it lasts on average at least eight weeks longer than type 2. The CEO will tolerate an error probability of only 2%. There is no evidence to suggest that the variances of the two products' lifetimes are equal.

----- In exercise 9.3, a 98% interval for the difference in average durability of the two types of rubber bumpers was estimated to be *0.5 < μ1 - μ2 < 7.1*. Suppose that, instead of a confidence-interval estimate, we had wanted to run a hypothesis test that the population means were equal.
###Code
n_am1 = 13 # sample size 1
x_am1 = 11.3 # sample 1 mean
s_am1 = 3.5 # sample 1 standard deviation
n_am2 = 10 # sample size 2
x_am2 = 7.5 # sample 2 mean
s_am2 = 2.7 # sample 2 standard deviation
alpha_am = 0.02 # significance level
var_am = False # equal population variances?
###Output
_____no_output_____
###Markdown
**Step 1**: State the hypotheses **Ho:** μam1 = μam2 **Ha:** μam1 ≠ μam2 **Step 2**: Significance level
###Code
alpha_am
###Output
_____no_output_____
###Markdown
**Step 3**: Critical values
###Code
df_am = ms.generals.get_df_var(n_am1, n_am2, s_am1, s_am2) # degrees of freedom when the population variances are unequal
crit_am = ms.hypothesis.crit_val_t(df_am, alpha_am, 'two')
crit_am
###Output
_____no_output_____
###Markdown
**Step 4**: Test statistic (T)
###Code
t_am = ms.generals.get_t_2p(x_am1, x_am2, s_am1, s_am2, n_am1, n_am2, var_am)
t_am
###Output
_____no_output_____
###Markdown
**Step 5**: Decision
###Code
ms.hypothesis.reject_h0(crit_am, t_am, 'two')
###Output
_____no_output_____
###Markdown
**The null hypothesis IS rejected**, since the test statistic *2.9393776700394625* falls beyond the critical values *-2.517696736682547, 2.517696736682547*. **Step 6**: Conclusion At the *2%* significance level, we can state that the average lifetimes of the two types of bumpers are **not equal**. Moreover, since the rejection happened in the right tail, we can state that the average lifetime of the type 1 bumper is **greater** than that of the type 2 bumper **(μam1 > μam2)**.

12. Hypothesis tests for the difference between two populations with paired samples

**Exercise 9.4.** Vicki Peplow, regional director of medical-assistance payments for Aetna Insurance, noticed that two different hospitals seemed to charge widely different amounts for the same medical procedure. She collected observations of billing costs for 15 identical procedures at each hospital and built a 95% confidence interval for the difference between the average charges submitted by each hospital. Paired samples were used because Vicki controlled for all relevant factors other than cost. If there is a difference, Ms. Peplow plans to report the matter to the Medicare authorities. Should she file the report?

--- In exercise 9.4, Vicki Peplow prepared a 95% interval estimate for the difference in costs for identical procedures at the two hospitals. The result was *-146.33 < μ1 - μ2 < 28.47*. Suppose that, instead of a confidence-interval estimate, we had wanted to run a hypothesis test that the population means were equal.
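Because the raw paired observations are listed in the cell below, this paired test can also be cross-checked with `scipy.stats.ttest_rel`; a sketch using the same two lists of hospital charges:

```python
from scipy.stats import ttest_rel

hospital_1 = [465, 532, 426, 543, 587, 537, 598, 698, 378, 376, 524, 387, 429, 398, 412]
hospital_2 = [512, 654, 453, 521, 632, 418, 587, 376, 529, 517, 476, 519, 587, 639, 754]

t_stat, p_value = ttest_rel(hospital_1, hospital_2)
print(t_stat, p_value)    # |t| is below the critical value, so H0 is not rejected
```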
###Code
n_hospital = 15 # size of each sample
cond1_hospital = [465, 532, 426, 543, 587, 537, 598, 698, 378, 376, 524, 387, 429, 398, 412] # initial condition of the sample (hospital 1 charges)
cond2_hospital = [512, 654, 453, 521, 632, 418, 587, 376, 529, 517, 476, 519, 587, 639, 754] # final condition of the sample (hospital 2 charges)
alpha_hospital = 0.05 # significance level
###Output
_____no_output_____
###Markdown
**Step 1**: State the hypotheses **Ho:** μh1 = μh2 **Ha:** μh1 ≠ μh2 **Step 2**: Significance level
###Code
alpha_hospital
###Output
_____no_output_____
###Markdown
**Step 3**: Critical values
###Code
df_hospital = n_hospital - 1
crit_hospital = ms.hypothesis.crit_val_t(df_hospital, alpha_hospital, 'two')
crit_hospital
###Output
_____no_output_____
###Markdown
**Step 4**: Test statistic (T)
###Code
t_hospital = ms.generals.get_d_pair(n_hospital, cond1_hospital, cond2_hospital)
t_hospital
###Output
_____no_output_____
###Markdown
**Step 5**: Decision
###Code
ms.hypothesis.reject_h0(crit_hospital, t_hospital, 'two')
###Output
_____no_output_____
###Markdown
**The null hypothesis is NOT rejected**, since the test statistic *-1.44642249848984* does not fall beyond the critical values *-2.1447866879169273, 2.1447866879169273*. **Step 6**: Conclusion At the *5%* significance level we conclude that the average costs of the medical procedures at hospital 1 and hospital 2 are **not different**. Therefore, Vicki should not file the report with the Medicare authorities, since there is **no significant difference** between the average procedure costs at the two hospitals that would justify it.

13. Hypothesis tests for the difference between two population proportions

A retailer wants to test the null hypothesis that the proportion of his male customers who buy on credit is equal to the proportion of women who use credit. He selects 100 male customers and finds that 57 bought on credit, while 52 of 110 women did. Use α = 1%.
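A standalone sketch of the two-proportion z statistic for the credit exercise (this version uses the unpooled standard error; a pooled estimate is also common):

```python
import math
import scipy.stats as ss

p1, n1 = 57 / 100, 100    # men buying on credit
p2, n2 = 52 / 110, 110    # women buying on credit
alpha = 0.01

se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)  # unpooled standard error
z = (p1 - p2) / se                                       # ~1.416
z_crit = ss.norm.ppf(1 - alpha / 2)                      # ~2.576

print(z, (-z_crit, z_crit))                              # fail to reject H0
```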
###Code
n_hombres = 100 # sample size 1
p_hombres = 57/100 # sample 1 proportion
n_mujeres = 110 # sample size 2
p_mujeres = 52/110 # sample 2 proportion
alpha_credito = 0.01 # significance level
###Output
_____no_output_____
###Markdown
**Step 1**: State the hypotheses **Ho:** πh = πm **Ha:** πh ≠ πm **Step 2**: Significance level
###Code
alpha_credito
###Output
_____no_output_____
###Markdown
**Step 3**: Critical values
###Code
crit_credito = ms.hypothesis.crit_val_norm(alpha_credito, 'two')
crit_credito
###Output
_____no_output_____
###Markdown
**Step 4**: Test statistic (Z)
###Code
z_credito = ms.generals.get_z_2prop(p_hombres, p_mujeres, n_hombres, n_mujeres)
z_credito
###Output
_____no_output_____
###Markdown
**Step 5**: Decision
###Code
ms.hypothesis.reject_h0(crit_credito, z_credito, 'two')
###Output
_____no_output_____
###Markdown
**The null hypothesis is NOT rejected**, since the test statistic *1.4163146434662097* does not fall beyond the critical values *-2.5758293035489004, 2.5758293035489004*. **Step 6**: Conclusion At the *1%* significance level, we can state that the proportion of male customers who buy on credit is **not different** from the proportion of female customers who buy on credit.

14. Hypothesis tests for the ratio of two population variances

A management consultant wants to test a hypothesis about two population means. Before doing so, however, he must decide whether there is any evidence to suggest that the population variances are equal. Collecting his data, the consultant finds the summary statistics entered below. The consultant uses α = 5%.
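The variance-ratio test in this final exercise can be checked with `scipy.stats.f`; a sketch (the notebook halves alpha in the next cells because it compares the ratio only against the upper-tail critical value):

```python
import scipy.stats as ss

s1_sq, n1 = 15.4**2, 10
s2_sq, n2 = 12.2**2, 10
alpha = 0.05

f_stat = s1_sq / s2_sq                               # ~1.59, larger variance on top
f_crit = ss.f.ppf(1 - alpha / 2, n1 - 1, n2 - 1)     # upper critical value, ~4.03

print(f_stat, f_crit)                                # fail to reject H0 if f_stat < f_crit
```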
###Code
n1_gerencia = 10
var1_gerencia = 15.4**2
n2_gerencia = 10
var2_gerencia = 12.2**2
alpha_gerencia = 0.05
###Output
_____no_output_____
###Markdown
**Step 1**: State the hypotheses **Ho:** σ1 = σ2 **Ha:** σ1 ≠ σ2 **Step 2**: Significance level
###Code
alpha_gerencia = 0.05/2
alpha_gerencia
###Output
_____no_output_____
###Markdown
**Step 3**: Critical values
###Code
df1_gerencia = n1_gerencia - 1
df2_gerencia = n2_gerencia - 1
crit_gerencia = ms.hypothesis.crit_val_f(df1_gerencia, df2_gerencia, alpha_gerencia)
crit_gerencia
###Output
_____no_output_____
###Markdown
**Step 4**: Test statistic (F)
###Code
f_gerencia = ms.generals.get_f_2p(var1_gerencia, var2_gerencia)
f_gerencia
###Output
_____no_output_____
###Markdown
**Step 5**: Decision
###Code
ms.graph.hypothesis(ss.f(df1_gerencia, df2_gerencia), f_gerencia, alpha_gerencia, "right")
ms.hypothesis.reject_h0(crit_gerencia, f_gerencia, "right")
###Output
_____no_output_____ |
Jupyter Notebook/Ackley_function.ipynb | ###Markdown
###Code
#importing libraries
import numpy as np
from matplotlib import pyplot as plt
import math
from mpl_toolkits.mplot3d import Axes3D
def ackley_function(x1,x2):
#returns the point value of the given coordinate
part_1 = -0.2*math.sqrt(0.5*(x1*x1 + x2*x2))
part_2 = 0.5*(math.cos(2*math.pi*x1) + math.cos(2*math.pi*x2))
value = math.exp(1) + 20 -20*math.exp(part_1) - math.exp(part_2)
#returning the value
return value
def ackley_function_range(x_range_array):
#returns an array of values for the given x range of values
value = np.empty([len(x_range_array[0])])
for i in range(len(x_range_array[0])):
#returns the point value of the given coordinate
part_1 = -0.2*math.sqrt(0.5*(x_range_array[0][i]*x_range_array[0][i] + x_range_array[1][i]*x_range_array[1][i]))
part_2 = 0.5*(math.cos(2*math.pi*x_range_array[0][i]) + math.cos(2*math.pi*x_range_array[1][i]))
value_point = math.exp(1) + 20 -20*math.exp(part_1) - math.exp(part_2)
value[i] = value_point
#returning the value array
return value
def plot_ackley_general():
#this function will plot a general ackley function just to view it.
limit = 1000 #number of points
#common lower and upper limits for both x1 and x2 are used
lower_limit = -5
upper_limit = 5
#generating x1 and x2 values
x1_range = [np.random.uniform(lower_limit,upper_limit) for x in range(limit)]
x2_range = [np.random.uniform(lower_limit,upper_limit) for x in range(limit)]
#This would be the input for the Function
x_range_array = [x1_range,x2_range]
#generate the z range
z_range = ackley_function_range(x_range_array)
#plotting the function
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.scatter(x1_range, x2_range, z_range, label='Ackley Function')
def plot_ackley(x1_range,x2_range):
#This would be the input for the Function
x_range_array = [x1_range,x2_range]
#generate the z range
z_range = ackley_function_range(x_range_array)
#plotting the function
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.scatter(x1_range, x2_range, z_range, label='Ackley Function')
###Output
_____no_output_____
###Markdown
1. Example on Using the Function ackley_function
###Code
x1 =0
x2 =0
value = ackley_function(x1,x2)
value
###Output
_____no_output_____
###Markdown
2. Example on Using the Function ackley_function_range
###Code
count = 1000 #number of points to evaluate
lower_limit = -5
upper_limit = 5
x1_range = [np.random.uniform(lower_limit,upper_limit) for x in range(count)]
x2_range = [np.random.uniform(lower_limit,upper_limit) for x in range(count)]
#This would be the input for the Function
x_range_array = [x1_range,x2_range]
z_range = ackley_function_range(x_range_array)
#printing 0 to 20 values, in order to view few results.
z_range[0:20]
###Output
_____no_output_____
###Markdown
3. Example on Using the Function plot_ackley_general. A general plotting function to view the function.
###Code
plot_ackley_general()
###Output
_____no_output_____
###Markdown
4. Example on Using the Function plot_ackley. A plotting function that takes parameter inputs
###Code
limit = 1000 #number of points
#common lower and upper limits for both x1 and x2 are used
lower_limit = -5
upper_limit = 5
#generating x1 and x2 values
x1_range = [np.random.uniform(lower_limit,upper_limit) for x in range(limit)]
x2_range = [np.random.uniform(lower_limit,upper_limit) for x in range(limit)]
#using the function to plot
plot_ackley(x1_range,x2_range)
###Output
_____no_output_____ |
ingestion/Upload_DF_to_Postgres - Working.ipynb | ###Markdown
You have to create the database in PostgreSQL yourself. Do not create the table; the following function will create the table, but not the database.
- When you run this script it will create the table in PostgreSQL and populate it with all the information from your DataFrame.
- The user only needs to run the script and enter the requested information.
- The user needs to give the exact path of the CSV file (e.g. C:\\Users\\User\\Documents\\example.csv).
- Default PostgreSQL username: postgres.
- Default PostgreSQL password: postgres.
- If you have changed your username and password, use those instead.
###Code
from platform import python_version
print(python_version())
#Import all the module that I need.
from sqlalchemy import create_engine
import psycopg2
import pandas as pd
#path = input('Enter the Path of CSV file : ')
#db_name = input('Enter Database Name : ') #prompt user for existing db name
#user_name = input('Enter Postgres User Name : ')
#password = input('Enter Postges Password : ')
#table_name = input('Enter Table Name to Create : ') #user naming the db table
#2015_097_Contracts_Full_20191009/2015_097_Contracts_Full_20191010_1.csv'
path = r'/home/team/Documents/Data-Oriented-Proposal-Engine/SpendingData/professional-services_us-based_no-mod.csv'
db_name, user_name, password, table_name = 'usaspending', 'team', 'welcome', 'allYears'
print('Connecting to Postgresql...\n')
engine = create_engine('postgresql+psycopg2://{}:{}@localhost/{}'.format(user_name,password,db_name)) #create connection to db
print('Successfully Connected to Postgres\n')
#Create Pandas DataFrame to open our csv file.
print('Creating Data Frame...\n')
chunksize = 10 ** 6
x=0
#Define function to connect db , create table and populate csv file values to Postgres Database.
def upload_DF_to_postgres(df_to_upload, table_name=table_name, user_name=user_name, password=password, db_name=db_name):
df_to_upload.to_sql(table_name, engine, if_exists='append')
for chunk in pd.read_csv(path, engine='python', encoding='utf8', chunksize=chunksize):
x+=1
upload_DF_to_postgres(chunk)
print(f"Uploaded {x} chunk")
print('Data Frame Successfully Created\n')
print('CSV file Successfully Uploaded to Postgres')
###Output
Creating Data Frame...
Uploaded 1 chunk
Data Frame Successfully Created
CSV file Successfully Uploaded to Postgres
###Markdown
This second cell repeats the same workflow against a remote PostgreSQL instance (note the host:port in `ip`), uploads in smaller chunks, and skips any chunk that fails instead of stopping.
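Whichever variant is run, a quick row count against the target table is a handy sanity check once the upload finishes; a sketch (not part of the original notebook) that reuses the `engine` and `table_name` defined in the neighbouring cells:

```python
import pandas as pd

# Assumes `engine` and `table_name` from the upload cell are already in scope
row_count = pd.read_sql(f'SELECT COUNT(*) AS n FROM "{table_name}"', engine)
print(row_count)
```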
###Code
from platform import python_version
print(python_version())
#Import all the module that I need.
from sqlalchemy import create_engine
import psycopg2
import pandas as pd
path = r'/home/team/Documents/Data-Oriented-Proposal-Engine/SpendingData/2017_097_Contracts_Full_20191009/2017_097_Contracts_Full_20191010_1.csv'
db_name, ip, user_name, password, table_name = 'usaspending', 'dopelytics.site:5432','team', 'ZAQ!@#zaq123', '2017_1'
print('Connecting to Postgresql...\n')
engine = create_engine('postgresql+psycopg2://{}:{}@{}/{}'.format(user_name,password,ip,db_name)) #create connection to db
print('Successfully Connected to Postgres\n')
#Create Pandas DataFrame to open our csv file.
print('Creating Data Frame...\n')
chunksize = 10 ** 5
x=0
#Define function to connect db , create table and populate csv file values to Postgres Database.
def upload_DF_to_postgres(df_to_upload, table_name=table_name, user_name=user_name, password=password, db_name=db_name):
df_to_upload.to_sql(table_name, engine, if_exists='append')
for chunk in pd.read_csv(path, engine='python', encoding='utf8', chunksize=chunksize):
if x < 1000:
try:
upload_DF_to_postgres(chunk)
print(f"Uploaded {x} chunk")
except:
print(f"Failed to upload {x} chunk")
pass
x+=1
print('Data Frame Successfully Created\n')
print('CSV file Successfully Uploaded to Postgres')
chunk.shape
###Output
_____no_output_____ |
Exploratory_Data_Analysis_notebook.ipynb | ###Markdown
EDA Now that we have our initial dataset, we'll begin cleaning it up in order to format our data for a ML problem.
###Code
# We can get more information by describing our dataset
df.info(verbose = True, null_counts= True)
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 472 entries, 0 to 471
Data columns (total 116 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Company_Name 472 non-null object
1 Dependent-Company Status 472 non-null object
2 year of founding 472 non-null object
3 Age of company in years 428 non-null object
4 Internet Activity Score 407 non-null float64
5 Short Description of company profile 323 non-null object
6 Industry of company 348 non-null object
7 Focus functions of company 442 non-null object
8 Investors 332 non-null object
9 Employee Count 306 non-null float64
10 Employees count MoM change 267 non-null float64
11 Has the team size grown 422 non-null object
12 Est. Founding Date 363 non-null object
13 Last Funding Date 350 non-null object
14 Last Funding Amount 312 non-null float64
15 Country of company 401 non-null object
16 Continent of company 401 non-null object
17 Number of Investors in Seed 472 non-null object
18 Number of Investors in Angel and or VC 472 non-null object
19 Number of Co-founders 472 non-null int64
20 Number of of advisors 472 non-null int64
21 Team size Senior leadership 472 non-null int64
22 Team size all employees 472 non-null object
23 Presence of a top angel or venture fund in previous round of investment 472 non-null object
24 Number of of repeat investors 472 non-null object
25 Number of Sales Support material 472 non-null object
26 Worked in top companies 472 non-null object
27 Average size of companies worked for in the past 472 non-null object
28 Have been part of startups in the past? 472 non-null object
29 Have been part of successful startups in the past? 472 non-null object
30 Was he or she partner in Big 5 consulting? 472 non-null object
31 Consulting experience? 472 non-null object
32 Product or service company? 472 non-null object
33 Catering to product/service across verticals 472 non-null object
34 Focus on private or public data? 472 non-null object
35 Focus on consumer data? 472 non-null object
36 Focus on structured or unstructured data 472 non-null object
37 Subscription based business 472 non-null object
38 Cloud or platform based serive/product? 472 non-null object
39 Local or global player 472 non-null object
40 Linear or Non-linear business model 472 non-null object
41 Capital intensive business e.g. e-commerce, Engineering products and operations can also cause a business to be capital intensive 472 non-null object
42 Number of of Partners of company 472 non-null object
43 Crowdsourcing based business 472 non-null object
44 Crowdfunding based business 472 non-null object
45 Machine Learning based business 472 non-null object
46 Predictive Analytics business 472 non-null object
47 Speech analytics business 472 non-null object
48 Prescriptive analytics business 472 non-null object
49 Big Data Business 472 non-null object
50 Cross-Channel Analytics/ marketing channels 472 non-null object
51 Owns data or not? (monetization of data) e.g. Factual 472 non-null object
52 Is the company an aggregator/market place? e.g. Bluekai 472 non-null object
53 Online or offline venture - physical location based business or online venture? 472 non-null object
54 B2C or B2B venture? 472 non-null object
55 Top forums like 'Tech crunch' or 'Venture beat' talking about the company/model - How much is it being talked about? 472 non-null object
56 Average Years of experience for founder and co founder 472 non-null object
57 Exposure across the globe 472 non-null object
58 Breadth of experience across verticals 472 non-null object
59 Highest education 472 non-null object
60 Years of education 472 non-null object
61 Specialization of highest education 375 non-null object
62 Relevance of education to venture 472 non-null object
63 Relevance of experience to venture 472 non-null object
64 Degree from a Tier 1 or Tier 2 university? 472 non-null object
65 Renowned in professional circle 472 non-null object
66 Experience in selling and building products 472 non-null object
67 Experience in Fortune 100 organizations 472 non-null object
68 Experience in Fortune 500 organizations 472 non-null object
69 Experience in Fortune 1000 organizations 472 non-null object
70 Top management similarity 472 non-null object
71 Number of Recognitions for Founders and Co-founders 472 non-null object
72 Number of of Research publications 472 non-null object
73 Skills score 472 non-null object
74 Team Composition score 472 non-null object
75 Dificulty of Obtaining Work force 472 non-null object
76 Pricing Strategy 472 non-null object
77 Hyper localisation 472 non-null object
78 Time to market service or product 472 non-null object
79 Employee benefits and salary structures 472 non-null object
80 Long term relationship with other founders 472 non-null object
81 Proprietary or patent position (competitive position) 472 non-null object
82 Barriers of entry for the competitors 472 non-null object
83 Company awards 472 non-null object
84 Controversial history of founder or co founder 472 non-null object
85 Legal risk and intellectual property 472 non-null object
86 Client Reputation 472 non-null object
87 google page rank of company website 472 non-null object
88 Technical proficiencies to analyse and interpret unstructured data 472 non-null object
89 Solutions offered 472 non-null object
90 Invested through global incubation competitions? 472 non-null object
91 Industry trend in investing 390 non-null float64
92 Disruptiveness of technology 472 non-null object
93 Number of Direct competitors 472 non-null object
94 Employees per year of company existence 472 non-null object
95 Last round of funding received (in milionUSD) 472 non-null object
96 Survival through recession, based on existence of the company through recession times 472 non-null object
97 Time to 1st investment (in months) 472 non-null object
98 Avg time to investment - average across all rounds, measured from previous investment 472 non-null object
99 Gartner hype cycle stage 300 non-null object
100 Time to maturity of technology (in years) 300 non-null object
101 Percent_skill_Entrepreneurship 472 non-null object
102 Percent_skill_Operations 472 non-null object
103 Percent_skill_Engineering 472 non-null object
104 Percent_skill_Marketing 472 non-null object
105 Percent_skill_Leadership 472 non-null object
106 Percent_skill_Data Science 472 non-null object
107 Percent_skill_Business Strategy 472 non-null object
108 Percent_skill_Product Management 472 non-null object
109 Percent_skill_Sales 472 non-null object
110 Percent_skill_Domain 472 non-null object
111 Percent_skill_Law 472 non-null object
112 Percent_skill_Consulting 472 non-null object
113 Percent_skill_Finance 472 non-null object
114 Percent_skill_Investment 472 non-null object
115 Renown score 472 non-null object
dtypes: float64(5), int64(3), object(108)
memory usage: 427.9+ KB
###Markdown
We can see right away that there's a good amount of columns that do not have null values. Now, there's a particular column that seems interesting, and that's the dependent company status.
###Code
# Let's take a look at the column
df['Dependent-Company Status'].value_counts()
# The column describes if the startup succeeded or failed. This seems to be a classification metric we can predict.
# Our predicting column will then be Dependent-Company Status
###Output
_____no_output_____
###Markdown
Now, although most columns don't seem to have many missing values, the percentages showed a lot of 0s, which might be in fact their place holder for NaNs.
###Code
df['Percent_skill_Investment'].value_counts()
###Output
_____no_output_____
###Markdown
From an EDA perspective, we can see that the large amount of both 0s and "No Info" makes the columns a lot less complete. At the same time, one has to ask how reliable these metrics are. There are no standard ways of measuring "Percent skill" in a particular area, so none of these columns might actually give us good information. We're better off dropping them.
###Code
# Wrangle our data
def wrangle(X):
# Make a copy
X = X.copy()
cols = ['Percent_skill_Entrepreneurship', 'Percent_skill_Operations', 'Percent_skill_Engineering', 'Percent_skill_Marketing',
'Percent_skill_Leadership', 'Percent_skill_Data Science', 'Percent_skill_Business Strategy', 'Percent_skill_Product Management',
'Percent_skill_Sales', 'Percent_skill_Domain', 'Percent_skill_Law', 'Percent_skill_Consulting',
'Percent_skill_Finance', 'Percent_skill_Investment']
X.drop(columns=cols, inplace=True)
# X = X[X['amount_tsh'].between(X['amount_tsh'].quantile(0.02), X['amount_tsh'].quantile(0.98))]
return X
# Wrangle the dataframe
df = wrangle(df)
# Take a look at the dataframe's head
df.head()
# Company name and description are basically unique for each company. The model won't be able to figure out if certain words within the name or description
# allow for a better chance of the startup succeeding, so instead we'll just drop them.
cols = ['Company_Name', 'Short Description of company profile']
df.drop(columns = cols, inplace=True)
# Let's look at the dataframe now
df.head()
# Replace "no info" with nans
df = df.replace("No Info", np.nan)
###Output
_____no_output_____
###Markdown
This looks good enough for now. There are a couple of columns we'll have to manipulate in order for them to become good features. Other than that, we need to decide on our train/validation/test split.
###Code
# Take a look at the shape
df.shape
###Output
_____no_output_____
###Markdown
472 observations. Not too many, so we probably don't want to try using a validation set here. Instead, we'll use k-fold validation to still have our model train on a relatively large amount of data, and give us feedback on it's performance.We do however want a test set. We do have dates, but since some of them are missing values, we're better off with random selection.More EDA still has to be done, but we know we'll use 15% of our data as our training.
###Code
# Look at the head of Industry of company
df['Industry of company'].head(20)
###Output
_____no_output_____
###Markdown
Process of finding the count for all unique values within industry of company
###Code
dtest = df['Industry of company'].replace(np.NaN, "Unkown")
type(dtest)
text = "|".join(dtest)
len(text)
print(text)
industrylist = text.split('|')
print(industrylist)
from collections import Counter
print(Counter(industrylist))
###Output
Counter({'Analytics': 197, 'Unkown': 124, 'Marketing': 66, 'Mobile': 54, 'E-Commerce': 54, 'Advertising': 37, 'Enterprise Software': 30, 'Media': 28, 'Cloud Computing': 19, 'Network / Hosting / Infrastructure': 19, 'Software Development': 18, 'Social Networking': 18, 'Retail': 15, 'Entertainment': 15, 'Healthcare': 12, 'Energy': 11, 'Search': 11, 'Market Research': 8, 'Food & Beverages': 8, 'Finance': 8, 'Music': 8, 'Publishing': 8, 'Gaming': 7, 'Email': 7, 'Career / Job Search': 7, 'Security': 6, 'Human Resources (HR)': 6, 'Education': 6, 'Telecommunications': 6, 'CleanTech': 5, 'Hospitality': 4, 'Deals': 3, 'Real Estate': 3, 'Crowdfunding': 2, 'Transportation': 2, 'Classifieds': 2, 'Pharmaceuticals': 1, 'Insurance': 1, 'Space Travel': 1, 'Travel': 1, 'Government': 1, 'energy': 1, 'analytics': 1})
###Markdown
Now that we know all the unique fields, we'll make a column for each industry that appears seven or more times in the dataset (see the sketch below for a more compact alternative).
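As an aside, pandas can build this kind of indicator column in one step with `str.get_dummies`; a sketch (exact-match splitting on '|', so results can differ slightly from the substring matching used in the cell below, and the columns still need the same seven-appearance filter):

```python
# one 0/1 column per '|'-separated industry label (exact matches)
industry_dummies = df['Industry of company'].str.get_dummies(sep='|')

# keep only industries that appear in at least 7 rows
common = industry_dummies.columns[industry_dummies.sum() >= 7]
df = df.join(industry_dummies[common])
```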
###Code
df['Market Research'] = df['Industry of company'].str.contains('Market Research')
df['Marketing'] = df['Industry of company'].str.contains('Marketing')
df['Analytics'] = df['Industry of company'].str.contains('Analytics')
df['Software Development'] = df['Industry of company'].str.contains('Software Development')
df['Mobile'] = df['Industry of company'].str.contains('Mobile')
df['Enterprise Software'] = df['Industry of company'].str.contains('Enterprise Software')
df['Media'] = df['Industry of company'].str.contains('Media')
df['Cloud Computing'] = df['Industry of company'].str.contains('Cloud Computing')
df['Network / Hosting / Infrastructure'] = df['Industry of company'].str.contains('Network / Hosting / Infrastructure')
df['Social Networking'] = df['Industry of company'].str.contains('Social Networking')
df['Retail'] = df['Industry of company'].str.contains('Retail')
df['Entertainment'] = df['Industry of company'].str.contains('Entertainment')
df['Healthcare'] = df['Industry of company'].str.contains('Healthcare')
df['Energy'] = df['Industry of company'].str.contains('Energy')
df['Search'] = df['Industry of company'].str.contains('Search')
df['Market Research'] = df['Industry of company'].str.contains('Market Research')
df['Food & Beverages'] = df['Industry of company'].str.contains('Food & Beverages')
df['Music'] = df['Industry of company'].str.contains('Music')
df['Publishing'] = df['Industry of company'].str.contains('Publishing')
df['Gaming'] = df['Industry of company'].str.contains('Gaming')
df['Email'] = df['Industry of company'].str.contains('Email')
df['Career / Job Search'] = df['Industry of company'].str.contains('Career / Job Search')
# Take a look at the dataset now.
df.head()
###Output
_____no_output_____
###Markdown
We'll now continue our EDA process to maximize the usability of this dataset for a ML problem.
###Code
# We'll also encode our true and false observations for the industries
df = df.replace({True: 1, False: 0})
# Wrangle our data
def wrangle_t(X):
# Make a copy
X = X.copy()
cols = ['Industry of company', 'Age of company in years', 'Investors', 'Last Funding Date', 'Continent of company', 'Team size all employees', 'Years of education']
X.drop(columns=cols, inplace=True)
# X = X[X['amount_tsh'].between(X['amount_tsh'].quantile(0.02), X['amount_tsh'].quantile(0.98))]
    return X
# Make our dataframe into the wrangled form
df = wrangle_t(df)
# Look at the dataframe now.
df.head()
###Output
_____no_output_____
###Markdown
There are no features here that would only be known after the respective company has failed or succeeded, so there is no feature leakage.
###Code
df['Dependent-Company Status'].value_counts()
###Output
_____no_output_____
###Markdown
To choose my target, I began looking around the many categories the dataset had. After taking a look at all the categories, I decided to go with Dependent-Company Status, which is the column describing whether the startup succeeds or fails. I chose this category due to its incredibly low cardinality, making it perfect to tackle as a classification problem. That and its lack of missing values made it the perfect choice for the target vector. So now, we'll divide the data accordingly.
###Code
# Make our X and y dataframes.
X = df.drop('Dependent-Company Status', axis=1)
y = df['Dependent-Company Status']
X.to_csv('X_features.csv')
y.to_csv('y_column.csv')
###Output
_____no_output_____
###Markdown
Now we'll divide our data into three subsets: a training set to fit the model, a validation set to see how the model performs while we iterate, and a test set to see how it performs on data it has never been given. We're doing this so that we don't compute our metrics on data the model was trained on, which would be a form of data leakage.
###Code
# Split the data
X_train_l, X_test, y_train_l, y_test = train_test_split(X, y, train_size=0.90)
X_train, X_val, y_train, y_val = train_test_split(X_train_l, y_train_l, train_size=0.85)
###Output
_____no_output_____
###Markdown
Since this is a categorical problem, we'll use the largest instance of the target vector as our baseline. That is to say that we'll assume that all instances are the most apparent instance. We do this as to set a minimum score that our model has to beat.
###Code
# Establish the baseline
print("The baseline of our model is: ", y_train.value_counts(normalize=True).max(), "%")
###Output
The baseline of our model is: 0.65 %
###Markdown
Make our model
###Code
# Make the pipeline
model = make_pipeline(
OrdinalEncoder(),
SimpleImputer(),
StandardScaler(),
LogisticRegression()
)
# Fit our model
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Choosing our metric

We're dealing with a classification problem, which means some metrics do not work well with our model. We'll use three metrics: accuracy, precision and recall. Accuracy gives us an overall idea of the model's performance, while precision and recall give us a more specific, per-class picture: Precision: of the startups the model predicts will succeed, how many are actually successful? Recall: of the startups that actually succeed (or fail, per class), how many does the model identify correctly?
###Code
# First we'll look at the training accuracy
print("Model's accuracy on training data: ", model.score(X_train, y_train))
# Now, we'll see our validation accuracy
print("Model's accuracy on validation data: ", model.score(X_val, y_val))
# Plot a confusion matrix
fig, ax = plt.subplots(1,1, figsize=(8,8))
plot_confusion_matrix(model, X_val, y_val,
display_labels=['Success', 'Failed'],
)
fig.clf()
# See our precision and recall
y_pred = model.predict(X_val)
print (metrics.classification_report(y_val, y_pred))
###Output
precision recall f1-score support
Failed 0.91 0.84 0.87 25
Success 0.90 0.95 0.92 39
accuracy 0.91 64
macro avg 0.91 0.89 0.90 64
weighted avg 0.91 0.91 0.91 64
###Markdown
Pretty good! This model does a great job at predicting failed startups correctly and predicting successful startups correctly. Now, let's try doing it with a tree-based model, and see how our performance differs.
###Code
# Make the pipeline
model_t = make_pipeline(
OrdinalEncoder(),
SimpleImputer(),
XGBClassifier()
)
model_t.fit(X_train, y_train)
# First we'll look at the training accuracy
print("Model's accuracy on training data: ", model_t.score(X_train, y_train))
# Now, we'll see our validation accuracy
print("Model's accuracy on validation data: ", model_t.score(X_val, y_val))
# Plot a confusion matrix
fig, ax = plt.subplots(1,1, figsize=(8,8))
plot_confusion_matrix(model_t, X_val, y_val,
display_labels=['Success', 'Failed'],
)
fig.clf()
# See our precision and recall
y_pred_t = model_t.predict(X_val)
print (metrics.classification_report(y_val, y_pred_t))
###Output
precision recall f1-score support
Failed 0.91 0.84 0.87 25
Success 0.90 0.95 0.92 39
accuracy 0.91 64
macro avg 0.91 0.89 0.90 64
weighted avg 0.91 0.91 0.91 64
###Markdown
Both seem to perform roughly equally. This is most likely due to the small size of the validation data, which we had to accept due to the small number of observations we have. A good fix for this is to try k-fold cross validation, as that might give us better results. First, we'll do our linear model.
###Code
# Instantiate the k-fold cross-validation
kfold_cv = KFold(n_splits=5, shuffle=True, random_state=11)
# Fit the model using k-fold cross-validation
cv_scores = cross_val_score(model, X_train_l, y_train_l,
cv=kfold_cv)
# Print the mean score
print('All cv scores: ', cv_scores)
# Print the mean score
print('Mean of all cv scores: ', cv_scores.mean())
###Output
/usr/local/lib/python3.6/dist-packages/category_encoders/utils.py:21: FutureWarning: is_categorical is deprecated and will be removed in a future version. Use is_categorical_dtype instead
elif pd.api.types.is_categorical(cols):
/usr/local/lib/python3.6/dist-packages/category_encoders/utils.py:21: FutureWarning: is_categorical is deprecated and will be removed in a future version. Use is_categorical_dtype instead
elif pd.api.types.is_categorical(cols):
/usr/local/lib/python3.6/dist-packages/category_encoders/utils.py:21: FutureWarning: is_categorical is deprecated and will be removed in a future version. Use is_categorical_dtype instead
elif pd.api.types.is_categorical(cols):
/usr/local/lib/python3.6/dist-packages/category_encoders/utils.py:21: FutureWarning: is_categorical is deprecated and will be removed in a future version. Use is_categorical_dtype instead
elif pd.api.types.is_categorical(cols):
/usr/local/lib/python3.6/dist-packages/category_encoders/utils.py:21: FutureWarning: is_categorical is deprecated and will be removed in a future version. Use is_categorical_dtype instead
elif pd.api.types.is_categorical(cols):
###Markdown
Now our XGBoost model
###Code
# Instantiate the k-fold cross-validation
kfold_cv = KFold(n_splits=5, shuffle=True, random_state=11)
# Fit the model using k-fold cross-validation
cv_scores = cross_val_score(model_t, X_train_l, y_train_l,
cv=kfold_cv)
# Print the mean score
print('All cv scores: ', cv_scores)
# Print the mean score
print('Mean of all cv scores: ', cv_scores.mean())
###Output
/usr/local/lib/python3.6/dist-packages/category_encoders/utils.py:21: FutureWarning: is_categorical is deprecated and will be removed in a future version. Use is_categorical_dtype instead
elif pd.api.types.is_categorical(cols):
/usr/local/lib/python3.6/dist-packages/category_encoders/utils.py:21: FutureWarning: is_categorical is deprecated and will be removed in a future version. Use is_categorical_dtype instead
elif pd.api.types.is_categorical(cols):
/usr/local/lib/python3.6/dist-packages/category_encoders/utils.py:21: FutureWarning: is_categorical is deprecated and will be removed in a future version. Use is_categorical_dtype instead
elif pd.api.types.is_categorical(cols):
/usr/local/lib/python3.6/dist-packages/category_encoders/utils.py:21: FutureWarning: is_categorical is deprecated and will be removed in a future version. Use is_categorical_dtype instead
elif pd.api.types.is_categorical(cols):
/usr/local/lib/python3.6/dist-packages/category_encoders/utils.py:21: FutureWarning: is_categorical is deprecated and will be removed in a future version. Use is_categorical_dtype instead
elif pd.api.types.is_categorical(cols):
###Markdown
Great! By using K-fold cross validation, we can see our xgboost model performs far better!Now, let's select that model, and see how it performs against the test data.
###Code
model_t.fit(X_train_l, y_train_l)
# First we'll look at the training accuracy
print("Model's accuracy on training data: ", model_t.score(X_train_l, y_train_l))
# Now, we'll see our test accuracy
print("Model's accuracy on test data: ", model_t.score(X_test, y_test))
# Plot a confusion matrix
fig, ax = plt.subplots(1, 1, figsize=(8, 8))
# Draw onto the axes we just created; display_labels follow sklearn's
# sorted class order ('Failed', 'Success')
plot_confusion_matrix(model_t, X_test, y_test,
                      display_labels=['Failed', 'Success'],
                      ax=ax,
                      )
# See our precision and recall
y_pred_t = model_t.predict(X_test)
print (metrics.classification_report(y_test, y_pred_t))
###Output
precision recall f1-score support
Failed 0.93 0.88 0.90 16
Success 0.94 0.97 0.95 32
accuracy 0.94 48
macro avg 0.94 0.92 0.93 48
weighted avg 0.94 0.94 0.94 48
|
workshop/01. pandas => Hockey fights - working copy.ipynb | ###Markdown
Hockey fights. David Singer, the gentleman who runs [hockeyfights dot com](http://www.hockeyfights.com/), was kind enough to provide us with a cut of the data powering his website for us to use in training sessions. Thanks, David! This data lives here: `../data/hockey-fights.xlsx`. Every row in the data is one fight. Let's take a look, eh? First, we'll import pandas, then we'll use the `read_excel()` method to load the data into a dataframe. (Note: To use this functionality, we'll also need the `xlrd` library, which luckily we've installed already.)
###Code
# import pandas
# read excel sheet into a data frame, specify sheet_name is 'fights'
# check it out with head()
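# A minimal sketch of the steps above -- the file path and sheet name come
# straight from the description; nothing else is assumed.
import pandas as pd

df = pd.read_excel('../data/hockey-fights.xlsx', sheet_name='fights')
df.head()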
###Output
_____no_output_____
###Markdown
Check out the data
###Code
# info()
# min date
# max date
# unique list of away teams
# unique list of home teams
# etc ...
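# One possible sketch -- the column names used below (`date`, `away_team_name`,
# `home_team_name`) are assumptions; check df.info() for the real ones
df.info()
print(df.date.min(), df.date.max())
print(df.away_team_name.unique())
print(df.home_team_name.unique())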
###Output
_____no_output_____
###Markdown
Come up with a list of questions- Which player was involved in the most fights?- Average number of fights per game?- What was the longest fight?... what else? Q: Which player was involved in the most fights? This one will be a little tricky because of how the data is structured -- a player could be fighting either as the home or away player, so there's not an obvious column to group or pivot on. There are a couple of strategies we could use to answer this question, but here's what we're going to do:- Use the [`concat()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html) method to stack the column values in each player ID column into one Series (we're using player ID instead of name to avoid the "John Smith" problem (or, I guess, "Graham MacKenzie"))- Use `value_counts()` to get a count- Grab the player ID with the most fights by getting the first ([0]) element in the `index` list for the Series returned by `value_counts()`- Go back to the original data frame and filter for that ID, then use [`iloc`](https://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.iloc.html) to get a single fight record with the player's name, team, etc.
###Code
# use concat to stack the home and away player IDs
# use value_counts() to get a frequency count, then grab the top (first) one
# filter the main data frame for fights involving that player
# arbitrarily, i have chosen to filter on the away_player_id column
# and grab the first record with iloc
# get the away player's name from this fight
# and his team name
# and print them
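# A sketch of the approach described above. `away_player_id` appears in the
# comments; `home_player_id`, `away_player_name`, and `away_team_name` are
# assumed column names.
all_player_ids = pd.concat([df.away_player_id, df.home_player_id])
top_player_id = all_player_ids.value_counts().index[0]
fight = df[df.away_player_id == top_player_id].iloc[0]
print(fight.away_player_name, fight.away_team_name)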
###Output
_____no_output_____
###Markdown
Q: Average number of fights per game? This one will be pretty easy. We need two numbers: The total number of fights -- which is the same as asking how many records are in our original data frame -- and the total number of games, which will just involve counting the unique number of games in our data. To get the number of records in our data frame, we shall use the `shape` attribute, which returns a [tuple](https://www.tutorialspoint.com/python/python_tuples.htm) with two things: the number of rows (the first thing) and the number of columns (the second thing). You can access items in a tuple [just like you'd access items in a list](../reference/Python%20data%20types%20and%20basic%20syntax.ipynb#Lists): With square brackets `[]` and the index number of the thing you're trying to get. To get the number of unique games, we're going to use the [`nunique()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.nunique.html) method to get the number of unique game IDs. (How did I know about the `nunique()` method? I didn't until I Googled "pandas count unique values series.")
###Code
# check the shape
# num_fights is the first item in that tuple
# alternatively, you could do len(df)
# num_games is the number of unique values in the game_id column
# avg_fights_per_game is fights divided into games
# print the result
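# A sketch of the calculation described above (`game_id` comes from the comments)
num_fights = df.shape[0]
num_games = df.game_id.nunique()
avg_fights_per_game = num_fights / num_games
print(avg_fights_per_game)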
###Output
_____no_output_____
###Markdown
Q: What was the longest fight? We have fight duration as a mixture of minutes and seconds, so we first need to convert to seconds ((minutes * 60) + seconds). We'll create a new column, `fight_duration`, for this.
###Code
# create a new column, fight_duration, that takes fight minutes, times 60, plus fight seconds
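# A sketch, assuming the minute/second columns are named `fight_minutes` and
# `fight_seconds` -- check df.columns for the real names
df['fight_duration'] = (df.fight_minutes * 60) + df.fight_seconds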
###Output
_____no_output_____
###Markdown
Now it's just a matter of sorting our data frame top to bottom by that new column, with `sort_values()`, and using [`.iloc[0]`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html) to grab the first record.
###Code
# sort values descending by fight_duration descending
# and grab the first record with iloc[0]
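# A sketch of the sort-and-grab step described above
longest_fight = df.sort_values('fight_duration', ascending=False).iloc[0]
longest_fight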
###Output
_____no_output_____ |
examples/notebooks/29_pydeck.ipynb | ###Markdown
[](https://githubtocolab.com/giswqs/leafmap/blob/master/examples/notebooks/29_pydeck.ipynb)[](https://gishub.org/leafmap-binder)Uncomment the following line to install [leafmap](https://leafmap.org) if needed.
###Code
# !pip install leafmap
import leafmap.deck as leafmap
###Output
_____no_output_____
###Markdown
Create an interactive map.
###Code
m = leafmap.Map(center=(40, -100), zoom=3)
m
###Output
_____no_output_____
###Markdown
Add basemap.
###Code
m = leafmap.Map()
m.add_basemap("HYBRID")
m
###Output
_____no_output_____
###Markdown
Add vector data to the map. It supports any GeoPandas supported format, such as GeoJSON, shapefile, KML.
###Code
m = leafmap.Map()
filename = (
"https://github.com/giswqs/streamlit-geospatial/raw/master/data/us_states.geojson"
)
m.add_vector(filename, random_color_column="STATEFP")
m
###Output
_____no_output_____
###Markdown
Add a GeoPandas GeoDataFrame to the map.
###Code
import geopandas as gpd
url = (
"https://github.com/giswqs/streamlit-geospatial/raw/master/data/us_counties.geojson"
)
gdf = gpd.read_file(url)
m = leafmap.Map()
m.add_gdf(gdf, random_color_column="STATEFP")
m
###Output
_____no_output_____
###Markdown
Create a 3D view of the map. **Press Ctrl and hold down the left mouse button to rotate the 3D view.**
###Code
initial_view_state = {
"latitude": 40,
"longitude": -100,
"zoom": 3,
"pitch": 45,
"bearing": 10,
}
m = leafmap.Map(initial_view_state=initial_view_state)
filename = (
"https://github.com/giswqs/streamlit-geospatial/raw/master/data/us_states.geojson"
)
m.add_vector(
filename,
random_color_column="STATEFP",
extruded=True,
get_elevation="ALAND",
elevation_scale=0.000001,
)
m
###Output
_____no_output_____
###Markdown
[](https://githubtocolab.com/giswqs/leafmap/blob/master/examples/notebooks/29_pydeck.ipynb)[](https://gishub.org/leafmap-binder)Uncomment the following line to install [leafmap](https://leafmap.org) if needed.
###Code
# !pip install leafmap
import leafmap.deck as leafmap
###Output
_____no_output_____
###Markdown
Create an interactive map.
###Code
m = leafmap.Map(center=(40,-100), zoom=3)
m
###Output
_____no_output_____
###Markdown
Add basemap.
###Code
m = leafmap.Map()
m.add_basemap("HYBRID")
m
###Output
_____no_output_____
###Markdown
Add vector data to the map. It supports any GeoPandas supported format, such as GeoJSON, shapefile, KML.
###Code
m = leafmap.Map()
filename = "https://github.com/giswqs/streamlit-geospatial/raw/master/data/us_states.geojson"
m.add_vector(filename, random_color_column="STATEFP")
m
###Output
_____no_output_____
###Markdown
Add a GeoPandas GeoDataFrame to the map.
###Code
import geopandas as gpd
url = "https://github.com/giswqs/streamlit-geospatial/raw/master/data/us_counties.geojson"
gdf = gpd.read_file(url)
m = leafmap.Map()
m.add_gdf(gdf, random_color_column="STATEFP")
m
###Output
_____no_output_____
###Markdown
Create a 3D view of the map. **Press Ctrl and hold down the left mouse button to rotate the 3D view.**
###Code
initial_view_state={"latitude": 40, "longitude": -100, "zoom": 3, "pitch": 45, "bearing": 10}
m = leafmap.Map(initial_view_state=initial_view_state)
filename = "https://github.com/giswqs/streamlit-geospatial/raw/master/data/us_states.geojson"
m.add_vector(filename, random_color_column="STATEFP", extruded=True, get_elevation="ALAND", elevation_scale=0.000001)
m
###Output
_____no_output_____
###Markdown
How to use Earth Engine with pydeck for 3D visualization Requirements- [earthengine-api](https://github.com/google/earthengine-api): a Python client library for calling the Google Earth Engine API.- [pydeck](https://pydeck.gl/index.html): a WebGL-powered framework for visual exploratory data analysis of large datasets.- [pydeck-earthengine-layers](https://github.com/UnfoldedInc/earthengine-layers/tree/master/py): a pydeck wrapper for Google Earth Engine. For documentation please visit this [website](https://earthengine-layers.com/).- [Mapbox API key](https://pydeck.gl/installation.html#getting-a-mapbox-api-key): you will need this to add basemap tiles to pydeck. Installation- conda create -n deck python- conda activate deck- conda install mamba -c conda-forge- mamba install earthengine-api pydeck pydeck-earthengine-layers -c conda-forge- jupyter nbextension install --sys-prefix --symlink --overwrite --py pydeck- jupyter nbextension enable --sys-prefix --py pydeck Using ee.Image with pydeck
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
# Initialize Earth Engine library
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Create an Earth Engine object
image = ee.Image('CGIAR/SRTM90_V4')
# Define Earth Engine visualization parameters
vis_params = {
"min": 0,
"max": 4000,
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5']
}
# Create a pydeck EarthEngineLayer object, using the Earth Engine object and
# desired visualization parameters
ee_layer = EarthEngineLayer(image, vis_params)
# Define the initial viewport for the map
view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45)
# Create a Deck instance, and display in Jupyter
r = pdk.Deck(layers=[ee_layer], initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
Adding multiple Earth Engine images
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
# Initialize Earth Engine library
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Add Earth Engine dataset
image = ee.Image('USGS/SRTMGL1_003')
hillshade = ee.Terrain.hillshade(image)
demRGB = image.visualize(**{
'min': 0,
'max': 4000,
'bands': ['elevation'],
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5'],
'opacity': 0.5
})
hillshadeRGB = hillshade.visualize(**{'bands': ['hillshade']})
blend = hillshadeRGB.blend(demRGB)
ee_layer = EarthEngineLayer(blend, {})
# Define the initial viewport for the map
view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45)
# Create a Deck instance, and display in Jupyter
r = pdk.Deck(layers=[ee_layer], initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
Using ee.ImageCollection with pydeck
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
# Initialize Earth Engine library
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Initialize an ee.ImageColllection object referencing the Global Forecast System dataset
image_collection = ee.ImageCollection('NOAA/GFS0P25')
# Select images from December 22, 2018
image_collection = image_collection.filterDate('2018-12-22', '2018-12-23')
# Choose the first 24 images in the ImageCollection
image_collection = image_collection.limit(24)
# Select a single band to visualize
image_collection = image_collection.select('temperature_2m_above_ground')
# Style temperature values between -40C and 35C,
# with lower values shades of blue, purple, and cyan,
# and higher values shades of green, yellow, and red
vis_params = {
'min': -40.0,
'max': 35.0,
'palette': ['blue', 'purple', 'cyan', 'green', 'yellow', 'red']
}
layer = EarthEngineLayer(
image_collection,
vis_params,
animate=True,
id="global_weather")
view_state = pdk.ViewState(latitude=36, longitude=10, zoom=1)
r = pdk.Deck(
layers=[layer],
initial_view_state=view_state
)
# layer.visible = True
# layer.opacity = 0.2
r.show()
###Output
_____no_output_____
###Markdown
Using ee.FeatureCollection (points) with pydeck
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Load the FeatureCollection
table = ee.FeatureCollection("WRI/GPPD/power_plants")
# Create color palette
fuel_color = ee.Dictionary({
'Coal': '000000',
'Oil': '593704',
'Gas': 'BC80BD',
'Hydro': '0565A6',
'Nuclear': 'E31A1C',
'Solar': 'FF7F00',
'Waste': '6A3D9A',
'Wind': '5CA2D1',
'Geothermal': 'FDBF6F',
'Biomass': '229A00'
})
# List of fuels to add to the map
fuels = ['Coal', 'Oil', 'Gas', 'Hydro', 'Nuclear', 'Solar', 'Waste', 'Wind', 'Geothermal', 'Biomass']
def add_style(point):
"""Computes size from capacity and color from fuel type.
Args:
- point: (ee.Geometry.Point) A Point
Returns:
(ee.Geometry.Point): Input point with added style dictionary
"""
size = ee.Number(point.get('capacitymw')).sqrt().divide(10).add(2)
color = fuel_color.get(point.get('fuel1'))
return point.set('styleProperty', ee.Dictionary({'pointSize': size, 'color': color}))
# Make a FeatureCollection out of the power plant data table
pp = ee.FeatureCollection(table).map(add_style)
# Create a layer for each fuel type
layers = []
for fuel in fuels:
layer = EarthEngineLayer(
pp.filter(ee.Filter.eq('fuel1', fuel)).style(styleProperty='styleProperty', neighborhood=50),
id=fuel,
opacity=0.65,
)
layers.append(layer)
view_state = pdk.ViewState(latitude=36, longitude=-53, zoom=3)
r = pdk.Deck(
layers=layers,
initial_view_state=view_state
)
r.show()
###Output
_____no_output_____
###Markdown
Using ee.FeatureCollection (lines) with pydeck
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Hurricane tracks and points for 2017.
hurricanes = ee.FeatureCollection('NOAA/NHC/HURDAT2/atlantic')
year = '2017'
points = hurricanes.filter(ee.Filter.date(ee.Date(year).getRange('year')))
# Find all of the hurricane ids.
def get_id(point):
return ee.Feature(point).get('id')
storm_ids = points.toList(1000).map(get_id).distinct()
# Create a line for each hurricane.
def create_line(storm_id):
pts = points.filter(ee.Filter.eq('id', ee.String(storm_id)))
pts = pts.sort('system:time_start')
line = ee.Geometry.LineString(pts.geometry().coordinates())
feature = ee.Feature(line)
return feature.set('id', storm_id)
lines = ee.FeatureCollection(storm_ids.map(create_line))
lines_layer = EarthEngineLayer(
lines,
{'color': 'red'},
id="tracks",
)
points_layer = EarthEngineLayer(
points,
{'color': 'green'},
id="points",
)
view_state = pdk.ViewState(latitude=36, longitude=-53, zoom=3)
r = pdk.Deck(
layers=[points_layer, lines_layer],
initial_view_state=view_state
)
r.show()
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
dataset = ee.FeatureCollection('FAO/GAUL/2015/level0')
countries = dataset.style(
fillColor='b5ffb4',
color='00909F',
width=3
)
layer = EarthEngineLayer(countries, id="international_boundaries")
view_state = pdk.ViewState(latitude=36, longitude=10, zoom=3)
r = pdk.Deck(
layers=[layer],
initial_view_state=view_state
)
r.show()
###Output
_____no_output_____
###Markdown
Uncomment the following line to install [geemap](https://geemap.org) if needed.
###Code
# !pip install geemap
###Output
_____no_output_____
###Markdown
How to use Earth Engine with pydeck for 3D visualization Requirements- [earthengine-api](https://github.com/google/earthengine-api): a Python client library for calling the Google Earth Engine API.- [pydeck](https://pydeck.gl/index.html): a WebGL-powered framework for visual exploratory data analysis of large datasets.- [pydeck-earthengine-layers](https://github.com/UnfoldedInc/earthengine-layers/tree/master/py): a pydeck wrapper for Google Earth Engine. For documentation please visit this [website](https://earthengine-layers.com/).- [Mapbox API key](https://pydeck.gl/installation.html#getting-a-mapbox-api-key): you will need this to add basemap tiles to pydeck. Installation- conda create -n deck python- conda activate deck- conda install mamba -c conda-forge- mamba install earthengine-api pydeck pydeck-earthengine-layers -c conda-forge- jupyter nbextension install --sys-prefix --symlink --overwrite --py pydeck- jupyter nbextension enable --sys-prefix --py pydeck Using ee.Image with pydeck
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
# Initialize Earth Engine library
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Create an Earth Engine object
image = ee.Image('CGIAR/SRTM90_V4')
# Define Earth Engine visualization parameters
vis_params = {
"min": 0,
"max": 4000,
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5'],
}
# Create a pydeck EarthEngineLayer object, using the Earth Engine object and
# desired visualization parameters
ee_layer = EarthEngineLayer(image, vis_params)
# Define the initial viewport for the map
view_state = pdk.ViewState(
latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45
)
# Create a Deck instance, and display in Jupyter
r = pdk.Deck(layers=[ee_layer], initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
Adding multiple Earth Engine images
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
# Initialize Earth Engine library
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Add Earth Engine dataset
image = ee.Image('USGS/SRTMGL1_003')
hillshade = ee.Terrain.hillshade(image)
demRGB = image.visualize(
**{
'min': 0,
'max': 4000,
'bands': ['elevation'],
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5'],
'opacity': 0.5,
}
)
hillshadeRGB = hillshade.visualize(**{'bands': ['hillshade']})
blend = hillshadeRGB.blend(demRGB)
ee_layer = EarthEngineLayer(blend, {})
# Define the initial viewport for the map
view_state = pdk.ViewState(
latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45
)
# Create a Deck instance, and display in Jupyter
r = pdk.Deck(layers=[ee_layer], initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
Using ee.ImageCollection with pydeck
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
# Initialize Earth Engine library
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Initialize an ee.ImageColllection object referencing the Global Forecast System dataset
image_collection = ee.ImageCollection('NOAA/GFS0P25')
# Select images from December 22, 2018
image_collection = image_collection.filterDate('2018-12-22', '2018-12-23')
# Choose the first 24 images in the ImageCollection
image_collection = image_collection.limit(24)
# Select a single band to visualize
image_collection = image_collection.select('temperature_2m_above_ground')
# Style temperature values between -40C and 35C,
# with lower values shades of blue, purple, and cyan,
# and higher values shades of green, yellow, and red
vis_params = {
'min': -40.0,
'max': 35.0,
'palette': ['blue', 'purple', 'cyan', 'green', 'yellow', 'red'],
}
layer = EarthEngineLayer(
image_collection, vis_params, animate=True, id="global_weather"
)
view_state = pdk.ViewState(latitude=36, longitude=10, zoom=1)
r = pdk.Deck(layers=[layer], initial_view_state=view_state)
# layer.visible = True
# layer.opacity = 0.2
r.show()
###Output
_____no_output_____
###Markdown
Using ee.FeatureCollection (points) with pydeck
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Load the FeatureCollection
table = ee.FeatureCollection("WRI/GPPD/power_plants")
# Create color palette
fuel_color = ee.Dictionary(
{
'Coal': '000000',
'Oil': '593704',
'Gas': 'BC80BD',
'Hydro': '0565A6',
'Nuclear': 'E31A1C',
'Solar': 'FF7F00',
'Waste': '6A3D9A',
'Wind': '5CA2D1',
'Geothermal': 'FDBF6F',
'Biomass': '229A00',
}
)
# List of fuels to add to the map
fuels = [
'Coal',
'Oil',
'Gas',
'Hydro',
'Nuclear',
'Solar',
'Waste',
'Wind',
'Geothermal',
'Biomass',
]
def add_style(point):
"""Computes size from capacity and color from fuel type.
Args:
- point: (ee.Geometry.Point) A Point
Returns:
(ee.Geometry.Point): Input point with added style dictionary
"""
size = ee.Number(point.get('capacitymw')).sqrt().divide(10).add(2)
color = fuel_color.get(point.get('fuel1'))
return point.set(
'styleProperty', ee.Dictionary({'pointSize': size, 'color': color})
)
# Make a FeatureCollection out of the power plant data table
pp = ee.FeatureCollection(table).map(add_style)
# Create a layer for each fuel type
layers = []
for fuel in fuels:
layer = EarthEngineLayer(
pp.filter(ee.Filter.eq('fuel1', fuel)).style(
styleProperty='styleProperty', neighborhood=50
),
id=fuel,
opacity=0.65,
)
layers.append(layer)
view_state = pdk.ViewState(latitude=36, longitude=-53, zoom=3)
r = pdk.Deck(layers=layers, initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
Using ee.FeatureCollection (lines) with pydeck
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Hurricane tracks and points for 2017.
hurricanes = ee.FeatureCollection('NOAA/NHC/HURDAT2/atlantic')
year = '2017'
points = hurricanes.filter(ee.Filter.date(ee.Date(year).getRange('year')))
# Find all of the hurricane ids.
def get_id(point):
return ee.Feature(point).get('id')
storm_ids = points.toList(1000).map(get_id).distinct()
# Create a line for each hurricane.
def create_line(storm_id):
pts = points.filter(ee.Filter.eq('id', ee.String(storm_id)))
pts = pts.sort('system:time_start')
line = ee.Geometry.LineString(pts.geometry().coordinates())
feature = ee.Feature(line)
return feature.set('id', storm_id)
lines = ee.FeatureCollection(storm_ids.map(create_line))
lines_layer = EarthEngineLayer(
lines,
{'color': 'red'},
id="tracks",
)
points_layer = EarthEngineLayer(
points,
{'color': 'green'},
id="points",
)
view_state = pdk.ViewState(latitude=36, longitude=-53, zoom=3)
r = pdk.Deck(layers=[points_layer, lines_layer], initial_view_state=view_state)
r.show()
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
dataset = ee.FeatureCollection('FAO/GAUL/2015/level0')
countries = dataset.style(fillColor='b5ffb4', color='00909F', width=3)
layer = EarthEngineLayer(countries, id="international_boundaries")
view_state = pdk.ViewState(latitude=36, longitude=10, zoom=3)
r = pdk.Deck(layers=[layer], initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
Uncomment the following line to install [geemap](https://geemap.org) if needed.
###Code
# !pip install geemap
###Output
_____no_output_____
###Markdown
How to use Earth Engine with pydeck for 3D visualization Requirements- [earthengine-api](https://github.com/google/earthengine-api): a Python client library for calling the Google Earth Engine API.- [pydeck](https://pydeck.gl/index.html): a WebGL-powered framework for visual exploratory data analysis of large datasets.- [pydeck-earthengine-layers](https://github.com/UnfoldedInc/earthengine-layers/tree/master/py): a pydeck wrapper for Google Earth Engine. For documentation please visit this [website](https://earthengine-layers.com/).- [Mapbox API key](https://pydeck.gl/installation.html#getting-a-mapbox-api-key): you will need this to add basemap tiles to pydeck. Installation- conda create -n deck python- conda activate deck- conda install mamba -c conda-forge- mamba install earthengine-api pydeck pydeck-earthengine-layers -c conda-forge- jupyter nbextension install --sys-prefix --symlink --overwrite --py pydeck- jupyter nbextension enable --sys-prefix --py pydeck Using ee.Image with pydeck
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
# Initialize Earth Engine library
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Create an Earth Engine object
image = ee.Image('CGIAR/SRTM90_V4')
# Define Earth Engine visualization parameters
vis_params = {
"min": 0,
"max": 4000,
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5'],
}
# Create a pydeck EarthEngineLayer object, using the Earth Engine object and
# desired visualization parameters
ee_layer = EarthEngineLayer(image, vis_params)
# Define the initial viewport for the map
view_state = pdk.ViewState(
latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45
)
# Create a Deck instance, and display in Jupyter
r = pdk.Deck(layers=[ee_layer], initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
Adding multiple Earth Engine images
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
# Initialize Earth Engine library
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Add Earth Engine dataset
image = ee.Image('USGS/SRTMGL1_003')
hillshade = ee.Terrain.hillshade(image)
demRGB = image.visualize(
**{
'min': 0,
'max': 4000,
'bands': ['elevation'],
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5'],
'opacity': 0.5,
}
)
hillshadeRGB = hillshade.visualize(**{'bands': ['hillshade']})
blend = hillshadeRGB.blend(demRGB)
ee_layer = EarthEngineLayer(blend, {})
# Define the initial viewport for the map
view_state = pdk.ViewState(
latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45
)
# Create a Deck instance, and display in Jupyter
r = pdk.Deck(layers=[ee_layer], initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
Using ee.ImageCollection with pydeck
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
# Initialize Earth Engine library
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Initialize an ee.ImageColllection object referencing the Global Forecast System dataset
image_collection = ee.ImageCollection('NOAA/GFS0P25')
# Select images from December 22, 2018
image_collection = image_collection.filterDate('2018-12-22', '2018-12-23')
# Choose the first 24 images in the ImageCollection
image_collection = image_collection.limit(24)
# Select a single band to visualize
image_collection = image_collection.select('temperature_2m_above_ground')
# Style temperature values between -40C and 35C,
# with lower values shades of blue, purple, and cyan,
# and higher values shades of green, yellow, and red
vis_params = {
'min': -40.0,
'max': 35.0,
'palette': ['blue', 'purple', 'cyan', 'green', 'yellow', 'red'],
}
layer = EarthEngineLayer(
image_collection, vis_params, animate=True, id="global_weather"
)
view_state = pdk.ViewState(latitude=36, longitude=10, zoom=1)
r = pdk.Deck(layers=[layer], initial_view_state=view_state)
# layer.visible = True
# layer.opacity = 0.2
r.show()
###Output
_____no_output_____
###Markdown
Using ee.FeatureCollection (points) with pydeck
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Load the FeatureCollection
table = ee.FeatureCollection("WRI/GPPD/power_plants")
# Create color palette
fuel_color = ee.Dictionary(
{
'Coal': '000000',
'Oil': '593704',
'Gas': 'BC80BD',
'Hydro': '0565A6',
'Nuclear': 'E31A1C',
'Solar': 'FF7F00',
'Waste': '6A3D9A',
'Wind': '5CA2D1',
'Geothermal': 'FDBF6F',
'Biomass': '229A00',
}
)
# List of fuels to add to the map
fuels = [
'Coal',
'Oil',
'Gas',
'Hydro',
'Nuclear',
'Solar',
'Waste',
'Wind',
'Geothermal',
'Biomass',
]
def add_style(point):
"""Computes size from capacity and color from fuel type.
Args:
- point: (ee.Geometry.Point) A Point
Returns:
(ee.Geometry.Point): Input point with added style dictionary
"""
size = ee.Number(point.get('capacitymw')).sqrt().divide(10).add(2)
color = fuel_color.get(point.get('fuel1'))
return point.set(
'styleProperty', ee.Dictionary({'pointSize': size, 'color': color})
)
# Make a FeatureCollection out of the power plant data table
pp = ee.FeatureCollection(table).map(add_style)
# Create a layer for each fuel type
layers = []
for fuel in fuels:
layer = EarthEngineLayer(
pp.filter(ee.Filter.eq('fuel1', fuel)).style(
styleProperty='styleProperty', neighborhood=50
),
id=fuel,
opacity=0.65,
)
layers.append(layer)
view_state = pdk.ViewState(latitude=36, longitude=-53, zoom=3)
r = pdk.Deck(layers=layers, initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
Using ee.FeatureCollection (lines) with pydeck
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Hurricane tracks and points for 2017.
hurricanes = ee.FeatureCollection('NOAA/NHC/HURDAT2/atlantic')
year = '2017'
points = hurricanes.filter(ee.Filter.date(ee.Date(year).getRange('year')))
# Find all of the hurricane ids.
def get_id(point):
return ee.Feature(point).get('id')
storm_ids = points.toList(1000).map(get_id).distinct()
# Create a line for each hurricane.
def create_line(storm_id):
pts = points.filter(ee.Filter.eq('id', ee.String(storm_id)))
pts = pts.sort('system:time_start')
line = ee.Geometry.LineString(pts.geometry().coordinates())
feature = ee.Feature(line)
return feature.set('id', storm_id)
lines = ee.FeatureCollection(storm_ids.map(create_line))
lines_layer = EarthEngineLayer(
lines,
{'color': 'red'},
id="tracks",
)
points_layer = EarthEngineLayer(
points,
{'color': 'green'},
id="points",
)
view_state = pdk.ViewState(latitude=36, longitude=-53, zoom=3)
r = pdk.Deck(layers=[points_layer, lines_layer], initial_view_state=view_state)
r.show()
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
dataset = ee.FeatureCollection('FAO/GAUL/2015/level0')
countries = dataset.style(fillColor='b5ffb4', color='00909F', width=3)
layer = EarthEngineLayer(countries, id="international_boundaries")
view_state = pdk.ViewState(latitude=36, longitude=10, zoom=3)
r = pdk.Deck(layers=[layer], initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
Uncomment the following line to install [geemap](https://geemap.org) if needed.
###Code
# !pip install geemap
###Output
_____no_output_____
###Markdown
How to use Earth Engine with pydeck for 3D visualization Requirements- [earthengine-api](https://github.com/google/earthengine-api): a Python client library for calling the Google Earth Engine API.- [pydeck](https://pydeck.gl/index.html): a WebGL-powered framework for visual exploratory data analysis of large datasets.- [pydeck-earthengine-layers](https://github.com/UnfoldedInc/earthengine-layers/tree/master/py): a pydeck wrapper for Google Earth Engine. For documentation please visit this [website](https://earthengine-layers.com/).- [Mapbox API key](https://pydeck.gl/installation.html#getting-a-mapbox-api-key): you will need this to add basemap tiles to pydeck. Installation- conda create -n deck python- conda activate deck- conda install mamba -c conda-forge- mamba install earthengine-api pydeck pydeck-earthengine-layers -c conda-forge- jupyter nbextension install --sys-prefix --symlink --overwrite --py pydeck- jupyter nbextension enable --sys-prefix --py pydeck Using ee.Image with pydeck
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
# Initialize Earth Engine library
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Create an Earth Engine object
image = ee.Image('CGIAR/SRTM90_V4')
# Define Earth Engine visualization parameters
vis_params = {
"min": 0,
"max": 4000,
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5']
}
# Create a pydeck EarthEngineLayer object, using the Earth Engine object and
# desired visualization parameters
ee_layer = EarthEngineLayer(image, vis_params)
# Define the initial viewport for the map
view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45)
# Create a Deck instance, and display in Jupyter
r = pdk.Deck(layers=[ee_layer], initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
Adding multiple Earth Engine images
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
# Initialize Earth Engine library
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Add Earth Engine dataset
image = ee.Image('USGS/SRTMGL1_003')
hillshade = ee.Terrain.hillshade(image)
demRGB = image.visualize(**{
'min': 0,
'max': 4000,
'bands': ['elevation'],
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5'],
'opacity': 0.5
})
hillshadeRGB = hillshade.visualize(**{'bands': ['hillshade']})
blend = hillshadeRGB.blend(demRGB)
ee_layer = EarthEngineLayer(blend, {})
# Define the initial viewport for the map
view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45)
# Create a Deck instance, and display in Jupyter
r = pdk.Deck(layers=[ee_layer], initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
Using ee.ImageCollection with pydeck
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
# Initialize Earth Engine library
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Initialize an ee.ImageColllection object referencing the Global Forecast System dataset
image_collection = ee.ImageCollection('NOAA/GFS0P25')
# Select images from December 22, 2018
image_collection = image_collection.filterDate('2018-12-22', '2018-12-23')
# Choose the first 24 images in the ImageCollection
image_collection = image_collection.limit(24)
# Select a single band to visualize
image_collection = image_collection.select('temperature_2m_above_ground')
# Style temperature values between -40C and 35C,
# with lower values shades of blue, purple, and cyan,
# and higher values shades of green, yellow, and red
vis_params = {
'min': -40.0,
'max': 35.0,
'palette': ['blue', 'purple', 'cyan', 'green', 'yellow', 'red']
}
layer = EarthEngineLayer(
image_collection,
vis_params,
animate=True,
id="global_weather")
view_state = pdk.ViewState(latitude=36, longitude=10, zoom=1)
r = pdk.Deck(
layers=[layer],
initial_view_state=view_state
)
# layer.visible = True
# layer.opacity = 0.2
r.show()
###Output
_____no_output_____
###Markdown
Using ee.FeatureCollection (points) with pydeck
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Load the FeatureCollection
table = ee.FeatureCollection("WRI/GPPD/power_plants")
# Create color palette
fuel_color = ee.Dictionary({
'Coal': '000000',
'Oil': '593704',
'Gas': 'BC80BD',
'Hydro': '0565A6',
'Nuclear': 'E31A1C',
'Solar': 'FF7F00',
'Waste': '6A3D9A',
'Wind': '5CA2D1',
'Geothermal': 'FDBF6F',
'Biomass': '229A00'
})
# List of fuels to add to the map
fuels = ['Coal', 'Oil', 'Gas', 'Hydro', 'Nuclear', 'Solar', 'Waste', 'Wind', 'Geothermal', 'Biomass']
def add_style(point):
"""Computes size from capacity and color from fuel type.
Args:
- point: (ee.Geometry.Point) A Point
Returns:
(ee.Geometry.Point): Input point with added style dictionary
"""
size = ee.Number(point.get('capacitymw')).sqrt().divide(10).add(2)
color = fuel_color.get(point.get('fuel1'))
return point.set('styleProperty', ee.Dictionary({'pointSize': size, 'color': color}))
# Make a FeatureCollection out of the power plant data table
pp = ee.FeatureCollection(table).map(add_style)
# Create a layer for each fuel type
layers = []
for fuel in fuels:
layer = EarthEngineLayer(
pp.filter(ee.Filter.eq('fuel1', fuel)).style(styleProperty='styleProperty', neighborhood=50),
id=fuel,
opacity=0.65,
)
layers.append(layer)
view_state = pdk.ViewState(latitude=36, longitude=-53, zoom=3)
r = pdk.Deck(
layers=layers,
initial_view_state=view_state
)
r.show()
###Output
_____no_output_____
###Markdown
Using ee.FeatureCollection (lines) with pydeck
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Hurricane tracks and points for 2017.
hurricanes = ee.FeatureCollection('NOAA/NHC/HURDAT2/atlantic')
year = '2017'
points = hurricanes.filter(ee.Filter.date(ee.Date(year).getRange('year')))
# Find all of the hurricane ids.
def get_id(point):
return ee.Feature(point).get('id')
storm_ids = points.toList(1000).map(get_id).distinct()
# Create a line for each hurricane.
def create_line(storm_id):
pts = points.filter(ee.Filter.eq('id', ee.String(storm_id)))
pts = pts.sort('system:time_start')
line = ee.Geometry.LineString(pts.geometry().coordinates())
feature = ee.Feature(line)
return feature.set('id', storm_id)
lines = ee.FeatureCollection(storm_ids.map(create_line))
lines_layer = EarthEngineLayer(
lines,
{'color': 'red'},
id="tracks",
)
points_layer = EarthEngineLayer(
points,
{'color': 'green'},
id="points",
)
view_state = pdk.ViewState(latitude=36, longitude=-53, zoom=3)
r = pdk.Deck(
layers=[points_layer, lines_layer],
initial_view_state=view_state
)
r.show()
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
dataset = ee.FeatureCollection('FAO/GAUL/2015/level0')
countries = dataset.style(
fillColor='b5ffb4',
color='00909F',
width=3
)
layer = EarthEngineLayer(countries, id="international_boundaries")
view_state = pdk.ViewState(latitude=36, longitude=10, zoom=3)
r = pdk.Deck(
layers=[layer],
initial_view_state=view_state
)
r.show()
###Output
_____no_output_____
###Markdown
[](https://githubtocolab.com/giswqs/leafmap/blob/master/examples/notebooks/29_pydeck.ipynb)[](https://gishub.org/leafmap-pangeo)Uncomment the following line to install [leafmap](https://leafmap.org) if needed.
###Code
# !pip install leafmap
import leafmap.deck as leafmap
###Output
_____no_output_____
###Markdown
If you are using a recently implemented leafmap feature that has not yet been released to PyPI or conda-forge, you can uncomment the following line to install the development version from GitHub.
###Code
# leafmap.update_package()
###Output
_____no_output_____
###Markdown
Create an interactive map.
###Code
m = leafmap.Map(center=(40,-100), zoom=3)
m
###Output
_____no_output_____
###Markdown
Add basemap.
###Code
m = leafmap.Map()
m.add_basemap("HYBRID")
m
###Output
_____no_output_____
###Markdown
Add vector data to the map. It supports any GeoPandas supported format, such as GeoJSON, shapefile, KML.
###Code
m = leafmap.Map()
filename = "https://github.com/giswqs/streamlit-geospatial/raw/master/data/us_states.geojson"
m.add_vector(filename, random_color_column="STATEFP")
m
###Output
_____no_output_____
###Markdown
Add a GeoPandas GeoDataFrame to the map.
###Code
import geopandas as gpd
url = "https://github.com/giswqs/streamlit-geospatial/raw/master/data/us_counties.geojson"
gdf = gpd.read_file(url)
m = leafmap.Map()
m.add_gdf(gdf, random_color_column="STATEFP")
m
###Output
_____no_output_____
###Markdown
Create a 3D view of the map. **Press Ctrl and hold down the left mouse button to rotate the 3D view.**
###Code
initial_view_state={"latitude": 40, "longitude": -100, "zoom": 3, "pitch": 45, "bearing": 10}
m = leafmap.Map(initial_view_state=initial_view_state)
filename = "https://github.com/giswqs/streamlit-geospatial/raw/master/data/us_states.geojson"
m.add_vector(filename, random_color_column="STATEFP", extruded=True, get_elevation="ALAND", elevation_scale=0.000001)
m
###Output
_____no_output_____
###Markdown
How to use Earth Engine with pydeck for 3D visualization Requirements- [earthengine-api](https://github.com/google/earthengine-api): a Python client library for calling the Google Earth Engine API.- [pydeck](https://pydeck.gl/index.html): a WebGL-powered framework for visual exploratory data analysis of large datasets.- [pydeck-earthengine-layers](https://github.com/UnfoldedInc/earthengine-layers/tree/master/py): a pydeck wrapper for Google Earth Engine. For documentation please visit this [website](https://earthengine-layers.com/).- [Mapbox API key](https://pydeck.gl/installation.html#getting-a-mapbox-api-key): you will need this to add basemap tiles to pydeck. Installation- conda create -n deck python- conda activate deck- conda install mamba -c conda-forge- mamba install earthengine-api pydeck pydeck-earthengine-layers -c conda-forge- jupyter nbextension install --sys-prefix --symlink --overwrite --py pydeck- jupyter nbextension enable --sys-prefix --py pydeck Using ee.Image with pydeck
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
# Initialize Earth Engine library
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Create an Earth Engine object
image = ee.Image('CGIAR/SRTM90_V4')
# Define Earth Engine visualization parameters
vis_params = {
"min": 0,
"max": 4000,
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5']
}
# Create a pydeck EarthEngineLayer object, using the Earth Engine object and
# desired visualization parameters
ee_layer = EarthEngineLayer(image, vis_params)
# Define the initial viewport for the map
view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45)
# Create a Deck instance, and display in Jupyter
r = pdk.Deck(layers=[ee_layer], initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
Adding multiple Earth Engine images
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
# Initialize Earth Engine library
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Add Earth Engine dataset
image = ee.Image('USGS/SRTMGL1_003')
hillshade = ee.Terrain.hillshade(image)
demRGB = image.visualize(**{
'min': 0,
'max': 4000,
'bands': ['elevation'],
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5'],
'opacity': 0.5
})
hillshadeRGB = hillshade.visualize(**{'bands': ['hillshade']})
blend = hillshadeRGB.blend(demRGB)
ee_layer = EarthEngineLayer(blend, {})
# Define the initial viewport for the map
view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45)
# Create a Deck instance, and display in Jupyter
r = pdk.Deck(layers=[ee_layer], initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
Using ee.ImageCollection with pydeck
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
# Initialize Earth Engine library
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Initialize an ee.ImageColllection object referencing the Global Forecast System dataset
image_collection = ee.ImageCollection('NOAA/GFS0P25')
# Select images from December 22, 2018
image_collection = image_collection.filterDate('2018-12-22', '2018-12-23')
# Choose the first 24 images in the ImageCollection
image_collection = image_collection.limit(24)
# Select a single band to visualize
image_collection = image_collection.select('temperature_2m_above_ground')
# Style temperature values between -40C and 35C,
# with lower values shades of blue, purple, and cyan,
# and higher values shades of green, yellow, and red
vis_params = {
'min': -40.0,
'max': 35.0,
'palette': ['blue', 'purple', 'cyan', 'green', 'yellow', 'red']
}
layer = EarthEngineLayer(
image_collection,
vis_params,
animate=True,
id="global_weather")
view_state = pdk.ViewState(latitude=36, longitude=10, zoom=1)
r = pdk.Deck(
layers=[layer],
initial_view_state=view_state
)
# layer.visible = True
# layer.opacity = 0.2
r.show()
###Output
_____no_output_____
###Markdown
Using ee.FeatureCollection (points) with pydeck
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Load the FeatureCollection
table = ee.FeatureCollection("WRI/GPPD/power_plants")
# Create color palette
fuel_color = ee.Dictionary({
'Coal': '000000',
'Oil': '593704',
'Gas': 'BC80BD',
'Hydro': '0565A6',
'Nuclear': 'E31A1C',
'Solar': 'FF7F00',
'Waste': '6A3D9A',
'Wind': '5CA2D1',
'Geothermal': 'FDBF6F',
'Biomass': '229A00'
})
# List of fuels to add to the map
fuels = ['Coal', 'Oil', 'Gas', 'Hydro', 'Nuclear', 'Solar', 'Waste', 'Wind', 'Geothermal', 'Biomass']
def add_style(point):
"""Computes size from capacity and color from fuel type.
Args:
- point: (ee.Geometry.Point) A Point
Returns:
(ee.Geometry.Point): Input point with added style dictionary
"""
size = ee.Number(point.get('capacitymw')).sqrt().divide(10).add(2)
color = fuel_color.get(point.get('fuel1'))
return point.set('styleProperty', ee.Dictionary({'pointSize': size, 'color': color}))
# Make a FeatureCollection out of the power plant data table
pp = ee.FeatureCollection(table).map(add_style)
# Create a layer for each fuel type
layers = []
for fuel in fuels:
layer = EarthEngineLayer(
pp.filter(ee.Filter.eq('fuel1', fuel)).style(styleProperty='styleProperty', neighborhood=50),
id=fuel,
opacity=0.65,
)
layers.append(layer)
view_state = pdk.ViewState(latitude=36, longitude=-53, zoom=3)
r = pdk.Deck(
layers=layers,
initial_view_state=view_state
)
r.show()
###Output
_____no_output_____
###Markdown
Using ee.FeatureCollection (lines) with pydeck
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Hurricane tracks and points for 2017.
hurricanes = ee.FeatureCollection('NOAA/NHC/HURDAT2/atlantic')
year = '2017'
points = hurricanes.filter(ee.Filter.date(ee.Date(year).getRange('year')))
# Find all of the hurricane ids.
def get_id(point):
return ee.Feature(point).get('id')
storm_ids = points.toList(1000).map(get_id).distinct()
# Create a line for each hurricane.
def create_line(storm_id):
pts = points.filter(ee.Filter.eq('id', ee.String(storm_id)))
pts = pts.sort('system:time_start')
line = ee.Geometry.LineString(pts.geometry().coordinates())
feature = ee.Feature(line)
return feature.set('id', storm_id)
lines = ee.FeatureCollection(storm_ids.map(create_line))
lines_layer = EarthEngineLayer(
lines,
{'color': 'red'},
id="tracks",
)
points_layer = EarthEngineLayer(
points,
{'color': 'green'},
id="points",
)
view_state = pdk.ViewState(latitude=36, longitude=-53, zoom=3)
r = pdk.Deck(
layers=[points_layer, lines_layer],
initial_view_state=view_state
)
r.show()
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
dataset = ee.FeatureCollection('FAO/GAUL/2015/level0')
countries = dataset.style(
fillColor='b5ffb4',
color='00909F',
width=3
)
layer = EarthEngineLayer(countries, id="international_boundaries")
view_state = pdk.ViewState(latitude=36, longitude=10, zoom=3)
r = pdk.Deck(
layers=[layer],
initial_view_state=view_state
)
r.show()
###Output
_____no_output_____
###Markdown
[](https://githubtocolab.com/giswqs/leafmap/blob/master/examples/notebooks/29_pydeck.ipynb)[](https://gishub.org/leafmap-binder)Uncomment the following line to install [leafmap](https://leafmap.org) if needed.
###Code
# !pip install leafmap
import leafmap.deck as leafmap
###Output
_____no_output_____
###Markdown
Create an interactive map.
###Code
m = leafmap.Map(center=(40, -100), zoom=3)
m
###Output
_____no_output_____
###Markdown
Add basemap.
###Code
m = leafmap.Map()
m.add_basemap("HYBRID")
m
###Output
_____no_output_____
###Markdown
Add vector data to the map. It supports any GeoPandas supported format, such as GeoJSON, shapefile, KML.
###Code
m = leafmap.Map()
filename = (
"https://github.com/giswqs/streamlit-geospatial/raw/master/data/us_states.geojson"
)
m.add_vector(filename, random_color_column="STATEFP")
m
###Output
_____no_output_____
###Markdown
Add a GeoPandas GeoDataFrame to the map.
###Code
import geopandas as gpd
url = (
"https://github.com/giswqs/streamlit-geospatial/raw/master/data/us_counties.geojson"
)
gdf = gpd.read_file(url)
m = leafmap.Map()
m.add_gdf(gdf, random_color_column="STATEFP")
m
###Output
_____no_output_____
###Markdown
Create a 3D view of the map. **Press Ctrl and hold down the left mouse button to rotate the 3D view.**
###Code
initial_view_state = {
"latitude": 40,
"longitude": -100,
"zoom": 3,
"pitch": 45,
"bearing": 10,
}
m = leafmap.Map(initial_view_state=initial_view_state)
filename = (
"https://github.com/giswqs/streamlit-geospatial/raw/master/data/us_states.geojson"
)
m.add_vector(
filename,
random_color_column="STATEFP",
extruded=True,
get_elevation="ALAND",
elevation_scale=0.000001,
)
m
###Output
_____no_output_____
###Markdown
[](https://githubtocolab.com/giswqs/leafmap/blob/master/examples/notebooks/29_pydeck.ipynb)[](https://gishub.org/leafmap-binder)Uncomment the following line to install [leafmap](https://leafmap.org) if needed.
###Code
# !pip install leafmap
import leafmap.deck as leafmap
###Output
_____no_output_____
###Markdown
If you are using a recently implemented leafmap feature that has not yet been released to PyPI or conda-forge, you can uncomment the following line to install the development version from GitHub.
###Code
# leafmap.update_package()
###Output
_____no_output_____
###Markdown
Create an interactive map.
###Code
m = leafmap.Map(center=(40,-100), zoom=3)
m
###Output
_____no_output_____
###Markdown
Add basemap.
###Code
m = leafmap.Map()
m.add_basemap("HYBRID")
m
###Output
_____no_output_____
###Markdown
Add vector data to the map. It supports any GeoPandas supported format, such as GeoJSON, shapefile, KML.
###Code
m = leafmap.Map()
filename = "https://github.com/giswqs/streamlit-geospatial/raw/master/data/us_states.geojson"
m.add_vector(filename, random_color_column="STATEFP")
m
###Output
_____no_output_____
###Markdown
Add a GeoPandas GeoDataFrame to the map.
###Code
import geopandas as gpd
url = "https://github.com/giswqs/streamlit-geospatial/raw/master/data/us_counties.geojson"
gdf = gpd.read_file(url)
m = leafmap.Map()
m.add_gdf(gdf, random_color_column="STATEFP")
m
###Output
_____no_output_____
###Markdown
Create a 3D view of the map. **Press Ctrl and hold down the left mouse button to rotate the 3D view.**
###Code
initial_view_state={"latitude": 40, "longitude": -100, "zoom": 3, "pitch": 45, "bearing": 10}
m = leafmap.Map(initial_view_state=initial_view_state)
filename = "https://github.com/giswqs/streamlit-geospatial/raw/master/data/us_states.geojson"
m.add_vector(filename, random_color_column="STATEFP", extruded=True, get_elevation="ALAND", elevation_scale=0.000001)
m
###Output
_____no_output_____
###Markdown
How to use Earth Engine with pydeck for 3D visualization Requirements- [earthengine-api](https://github.com/google/earthengine-api): a Python client library for calling the Google Earth Engine API.- [pydeck](https://pydeck.gl/index.html): a WebGL-powered framework for visual exploratory data analysis of large datasets.- [pydeck-earthengine-layers](https://github.com/UnfoldedInc/earthengine-layers/tree/master/py): a pydeck wrapper for Google Earth Engine. For documentation please visit this [website](https://earthengine-layers.com/).- [Mapbox API key](https://pydeck.gl/installation.html#getting-a-mapbox-api-key): you will need this to add basemap tiles to pydeck. Installation- conda create -n deck python- conda activate deck- conda install mamba -c conda-forge- mamba install earthengine-api pydeck pydeck-earthengine-layers -c conda-forge- jupyter nbextension install --sys-prefix --symlink --overwrite --py pydeck- jupyter nbextension enable --sys-prefix --py pydeck Using ee.Image with pydeck
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
# Initialize Earth Engine library
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Create an Earth Engine object
image = ee.Image('CGIAR/SRTM90_V4')
# Define Earth Engine visualization parameters
vis_params = {
"min": 0,
"max": 4000,
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5']
}
# Create a pydeck EarthEngineLayer object, using the Earth Engine object and
# desired visualization parameters
ee_layer = EarthEngineLayer(image, vis_params)
# Define the initial viewport for the map
view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45)
# Create a Deck instance, and display in Jupyter
r = pdk.Deck(layers=[ee_layer], initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
Adding multiple Earth Engine images
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
# Initialize Earth Engine library
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Add Earth Engine dataset
image = ee.Image('USGS/SRTMGL1_003')
hillshade = ee.Terrain.hillshade(image)
demRGB = image.visualize(**{
'min': 0,
'max': 4000,
'bands': ['elevation'],
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5'],
'opacity': 0.5
})
hillshadeRGB = hillshade.visualize(**{'bands': ['hillshade']})
blend = hillshadeRGB.blend(demRGB)
ee_layer = EarthEngineLayer(blend, {})
# Define the initial viewport for the map
view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45)
# Create a Deck instance, and display in Jupyter
r = pdk.Deck(layers=[ee_layer], initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
Using ee.ImageCollection with pydeck
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
# Initialize Earth Engine library
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Initialize an ee.ImageColllection object referencing the Global Forecast System dataset
image_collection = ee.ImageCollection('NOAA/GFS0P25')
# Select images from December 22, 2018
image_collection = image_collection.filterDate('2018-12-22', '2018-12-23')
# Choose the first 24 images in the ImageCollection
image_collection = image_collection.limit(24)
# Select a single band to visualize
image_collection = image_collection.select('temperature_2m_above_ground')
# Style temperature values between -40C and 35C,
# with lower values shades of blue, purple, and cyan,
# and higher values shades of green, yellow, and red
vis_params = {
'min': -40.0,
'max': 35.0,
'palette': ['blue', 'purple', 'cyan', 'green', 'yellow', 'red']
}
layer = EarthEngineLayer(
image_collection,
vis_params,
animate=True,
id="global_weather")
view_state = pdk.ViewState(latitude=36, longitude=10, zoom=1)
r = pdk.Deck(
layers=[layer],
initial_view_state=view_state
)
# layer.visible = True
# layer.opacity = 0.2
r.show()
###Output
_____no_output_____
###Markdown
Using ee.FeatureCollection (points) with pydeck
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Load the FeatureCollection
table = ee.FeatureCollection("WRI/GPPD/power_plants")
# Create color palette
fuel_color = ee.Dictionary({
'Coal': '000000',
'Oil': '593704',
'Gas': 'BC80BD',
'Hydro': '0565A6',
'Nuclear': 'E31A1C',
'Solar': 'FF7F00',
'Waste': '6A3D9A',
'Wind': '5CA2D1',
'Geothermal': 'FDBF6F',
'Biomass': '229A00'
})
# List of fuels to add to the map
fuels = ['Coal', 'Oil', 'Gas', 'Hydro', 'Nuclear', 'Solar', 'Waste', 'Wind', 'Geothermal', 'Biomass']
def add_style(point):
"""Computes size from capacity and color from fuel type.
Args:
        - point: (ee.Feature) A power plant feature
    Returns:
        (ee.Feature): Input feature with added style dictionary
"""
size = ee.Number(point.get('capacitymw')).sqrt().divide(10).add(2)
color = fuel_color.get(point.get('fuel1'))
return point.set('styleProperty', ee.Dictionary({'pointSize': size, 'color': color}))
# Make a FeatureCollection out of the power plant data table
pp = ee.FeatureCollection(table).map(add_style)
# Create a layer for each fuel type
layers = []
for fuel in fuels:
layer = EarthEngineLayer(
pp.filter(ee.Filter.eq('fuel1', fuel)).style(styleProperty='styleProperty', neighborhood=50),
id=fuel,
opacity=0.65,
)
layers.append(layer)
view_state = pdk.ViewState(latitude=36, longitude=-53, zoom=3)
r = pdk.Deck(
layers=layers,
initial_view_state=view_state
)
r.show()
###Output
_____no_output_____
###Markdown
Using ee.FeatureCollection (lines) with pydeck
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# Hurricane tracks and points for 2017.
hurricanes = ee.FeatureCollection('NOAA/NHC/HURDAT2/atlantic')
year = '2017'
points = hurricanes.filter(ee.Filter.date(ee.Date(year).getRange('year')))
# Find all of the hurricane ids.
def get_id(point):
return ee.Feature(point).get('id')
storm_ids = points.toList(1000).map(get_id).distinct()
# Create a line for each hurricane.
def create_line(storm_id):
pts = points.filter(ee.Filter.eq('id', ee.String(storm_id)))
pts = pts.sort('system:time_start')
line = ee.Geometry.LineString(pts.geometry().coordinates())
feature = ee.Feature(line)
return feature.set('id', storm_id)
lines = ee.FeatureCollection(storm_ids.map(create_line))
lines_layer = EarthEngineLayer(
lines,
{'color': 'red'},
id="tracks",
)
points_layer = EarthEngineLayer(
points,
{'color': 'green'},
id="points",
)
view_state = pdk.ViewState(latitude=36, longitude=-53, zoom=3)
r = pdk.Deck(
layers=[points_layer, lines_layer],
initial_view_state=view_state
)
r.show()
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
dataset = ee.FeatureCollection('FAO/GAUL/2015/level0')
countries = dataset.style(
fillColor='b5ffb4',
color='00909F',
width=3
)
layer = EarthEngineLayer(countries, id="international_boundaries")
view_state = pdk.ViewState(latitude=36, longitude=10, zoom=3)
r = pdk.Deck(
layers=[layer],
initial_view_state=view_state
)
r.show()
###Output
_____no_output_____ |
algorithm_analysis/sum_vector.ipynb | ###Markdown
Sum of the elements of a set RequirementsGiven a set $c$ of $n$ integer elements $[e_1, \cdots e_n]$, obtain the sum of all the elements, that is, $suma = e_1 + \cdots + e_n$. Analysis**Input:** a set $c$ of $n$ integer elements $[e_1, \cdots e_n]$.**Output:** an integer $suma$ containing the sum of all $n$ elements of the set $c$. Design~~~FUNCTION sumar_conjunto (INTEGER: c[]) RETURN INTEGER: INTEGER i = 0, suma = 0 c1 while i < len(c): c2 * (n + 1) suma = suma + c[i] c3 * n i = i+1 c4 * n return suma c5END FUNCTION~~~f(n) = c1 + c2 * (n + 1) + c3 * n + c4 * n + c5; f(n) = (c2+c3+c4)*n + (c1+c2+c5); f(n) = a*n + b; f(n) = O(n)
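As a quick sanity check of the $O(n)$ analysis above (illustrative only, not part of the original exercise), Python's built-in `sum` performs the same single pass over the elements:
```python
c = [2, 3, 6, 9, 10, 22]
assert sum(c) == 52  # one addition per element, i.e. linear in n
```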
###Code
'''
input: a set $c$ of $n$ integer elements $[e_1, \cdots e_n]$.
output: an integer $suma$ containing the sum of all $n$ elements of the set $c$.
description: add up all the elements of the set $c$ and return the value of the sum
'''
def vector_sum(conjunto):
i = 0
suma = 0
while i < len(conjunto):
suma += conjunto[i]
i += 1
return suma
c = [2, 3, 6, 9, 10, 22]
print(c)
print('The sum of the set is: ', vector_sum(c))
%pylab inline
import matplotlib.pyplot as plt
import random
def vector_sum_graph(conjunto):
times = 0
i = 0
suma = 0
while i < len(conjunto):
times += 1
suma += conjunto[i]
i += 1
return times
x = []
y = []
c = []
for i in range(1, 1001):
c.append(random.randint(1, 25000))
x.append(i)
y.append(vector_sum_graph(c))
#print(x)
#print(y)
fig, ax = plt.subplots(facecolor='w', edgecolor='k')
ax.plot(x, y, marker="o",color="b", linestyle='None')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.grid(True)
ax.legend(["Sum"])
plt.title('Vector sum')
plt.show()
###Output
_____no_output_____ |
docs/Rabi Experiment.ipynb | ###Markdown
Running a Remote Rabi Experiment The notebook below shows how to run a Rabi experiment using the TopChef client API. Add this project's base directory to the path. Make sure that the TopChef client is available and importable.
###Code
import sys
import os
sys.path.append(os.path.abspath('../../..'))
###Output
_____no_output_____
###Markdown
Import the required modules to run the experiment
###Code
from topchef_client import Client
from time import sleep
from datetime import datetime
###Output
_____no_output_____
###Markdown
Define a small subroutine to wait until a job is done
###Code
def wait_until_job_done(job):
"""
A small subroutine that waits until the job is finished. This can be
replaced with something more advanced, and is dependent on the
implementation. The onus is on the consumer of the TopChef API to
only get results after the job finishes
"""
while not job.is_complete:
sleep(0.01)
###Output
_____no_output_____
###Markdown
Define the experiment Parameters
###Code
experiment_parameters = {
'pulse_time': 500e-9,
'number_of_repetitions': 50000,
'type': 'RABI'
}
server_url = 'http://129.97.136.225:5000'
service_id = '1b98639a-6276-11e7-b3a3-0242ac110002'
client = Client(server_url)
service = client.services[service_id]
start = datetime.utcnow()
job = service.new_job(experiment_parameters)
wait_until_job_done(job)
print(job.result)
finish = datetime.utcnow()
print((finish - start).total_seconds())
###Output
4.363952
|
main/Sapphire/Tutorials/Temperature_Average/Tutorial.ipynb | ###Markdown
Using the building tools in Sapphire, we are able to construct a small Au icosahedron of 561 atoms. We may now proceed to perform some targeted analysis on this object.
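If the `Au561.xyz` file is not already to hand, an equivalent 561-atom Mackay icosahedron can be generated directly with ASE (a minimal sketch, assuming a standard ASE installation; the original file was produced with Sapphire's own builders):
```python
from ase.cluster import Icosahedron
from ase.io import write

# Six icosahedral shells of Au give the 561-atom magic-number cluster.
cluster = Icosahedron('Au', noshells=6)
write('Au561.xyz', cluster)
```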
###Code
from Sapphire.Post_Process import Adjacent,Kernels,DistFuncs #Some basic tools geometric analysis
from ase.io import read #Whilst this is a part of the core Sapphire setup, we can demonstrate its utility more easily here
au = read('Au561.xyz')
pos = au.positions
dist = DistFuncs.Euc_Dist(pos) #Get a list of pairwise distances.
#Now why don't we visualise the pair distances as a distribution?
import matplotlib.pyplot as plt
a,b,c = plt.hist(dist, density=True, bins = 200)
###Output
_____no_output_____
###Markdown
This does not look particularly helpful, so we shall make use of Sapphire's Kernel Density Estimators (KDEs) to get a smoother approximation of the pair distance distribution function. We primarily advocate the use of Gaussian kernels, though others are supported. A bandwidth of 0.05 should strike a good balance between smoothness and detail in the resulting distribution. By default, Sapphire only considers the first 6 Å of pair distances for the sake of speed when computing the full KDE; however, this is easily varied, as explored in the PDDF tutorial.
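For comparison, the same idea can be sketched with SciPy's generic `gaussian_kde` (illustrative only; its bandwidth convention differs from Sapphire's fixed 0.05 Å Gaussian kernels, so the two curves are comparable rather than identical):
```python
import numpy as np
from scipy.stats import gaussian_kde

distances = np.asarray(dist)
space = np.linspace(0.1, 6.0, 600)
# Scale bw_method so the effective bandwidth is roughly 0.05 Angstroms.
kde = gaussian_kde(distances, bw_method=0.05 / distances.std())
plt.plot(space, kde(space))
plt.xlabel('Pair distance (Å)')
plt.ylabel('Freq arb. un.')
```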
###Code
K = Kernels.Gauss(dist, 0.05)
fig,ax = plt.subplots()
ax.plot(K.Space, K.Density, color = 'k', label = 'Gaussian KDE')
ax1 = ax.twinx()
a,b,c = ax1.hist(dist, density=True, bins = 200, color = 'r', label = 'Raw Histogram')
plt.xlim(0,6)
ax.set_xlabel('Pair distance (Å)')
ax.set_ylabel('Freq arb. un.')
fig.legend()
###Output
_____no_output_____
###Markdown
As we can see, there is a distinct advantage in using the KDE method to compute the pair distance distribution function over the raw histogram: information is easier to extract, and because the estimate is a smooth analytic function, locating its minima by differentiation is straightforward. Next, we compute the adjacency matrix, which is stored as a sparse SciPy matrix to save on memory overheads. This simply requires the atomic positions, the pair distances we computed earlier, and the first minimum of the pair distance distribution function.
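As an aside, if you wanted to locate that first minimum yourself rather than reading it from `K.R_Cut`, a simple numerical route is sketched below (illustrative only; it assumes `K.Space` and `K.Density` are the 1D arrays plotted above):
```python
import numpy as np
from scipy.signal import argrelextrema

density = np.asarray(K.Density)
space = np.asarray(K.Space)
minima = argrelextrema(density, np.less)[0]  # indices of local minima
r_cut_manual = space[minima[0]]              # first minimum of the PDDF
print('Manual nearest-neighbour cutoff (Å):', r_cut_manual)
print('Cutoff used by Sapphire (K.R_Cut):', K.R_Cut)
```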
###Code
A = Adjacent.Adjacency_Matrix(Positions=pos, Distances=dist, R_Cut=K.R_Cut)
adj = A.ReturnAdj()
###Output
_____no_output_____
###Markdown
We shall introduce the CNA function by running through the main workflow explicitly before hiding much of the machinery behind the curtain of Python.
###Code
from Sapphire.CNA.FrameSignature import CNA
Sigs = {} #We shall not presuppose the existence of any CNA signatures.
for i, atom in enumerate(au): #Iterate over all atoms in the cluster
cna = CNA(adj = adj)
    cna.NN(i) #Acquire the nearest neighbours to the reference atom being considered
for neigh in cna.neigh:
sig = tuple((cna.R(i,neigh), cna.S(), cna.T()))
try:
Sigs[sig]+=1
except KeyError:
print(sig)
Sigs[sig] = 1
###Output
(5, 5, 5)
(4, 2, 2)
(4, 2, 1)
(3, 2, 2)
(3, 1, 1)
###Markdown
As we can see, 5 CNA signatures associated with the Ih structure have been identified. This is indeed as it should be: 555 is indicative of a 5-fold symmetry axis; 422 & 421 are indicative of an FCC environment; 322 & 321 are generally representative of surfaces with (111) Miller indices. This shall be expanded on later in the tutorial. With our signatures collected, we may now evaluate their distribution.
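To keep those interpretations next to the numbers, a small lookup table can be attached to the counts we just collected (the labels are simply the interpretations given above, not anything Sapphire computes):
```python
# Illustrative mapping from CNA signature to the local environment it usually indicates.
signature_labels = {
    (5, 5, 5): 'five-fold symmetry axis',
    (4, 2, 2): 'FCC-like environment',
    (4, 2, 1): 'FCC-like environment',
    (3, 2, 2): '(111)-type surface',
    (3, 1, 1): '(111)-type surface',
}

for sig, count in Sigs.items():
    print(sig, count, '->', signature_labels.get(sig, 'unassigned'))
```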
###Code
import numpy as np
xaxis = np.arange(0,len(Sigs)) #Define the range for the x axis
yaxis = np.array([x for x in Sigs.values()]) #Set the y axis values from the dictionary
xlabels = [ str(x) for x in Sigs.keys() ] #Create a list of labels for the bar chart
plt.bar(xaxis,yaxis)
plt.xticks(xaxis,xlabels, rotation = 45)
plt.xlabel('CNA signature')
plt.ylabel('Counts')
###Output
_____no_output_____
###Markdown
Now that we have a good feeling for this distribution, we can begin to evaluate what the bonding environment itself actually looks like. As such, we provide below two example functions to extract the positional information, given the CNA signature, for the atoms and their evaluated bonds. We may sort the bonds into 3 lists representing the following quantities: 1. pair_edge = the bonding pair in question. 2. s_edge = all of the bonds shared between the reference pair and their shared neighbours. 3. t_edge = all of the bonds shared by the shared neighbours only. We sort these three lists manually so that we may more easily distinguish environments. We shall then extract five pairs, each of which has a unique CNA signature, and plot their bonding environments graphically.
###Code
def plotting_tool(cna=None, strut = None, reference = None, friend = None):
node_xyz = np.zeros((2+len(cna.bonds), 3))
node_xyz[0] = strut[int(reference)].position
node_xyz[1] = strut[int(friend)].position
for i,node in enumerate(cna.bonds):
node_xyz[i+2] = strut[int(node)].position
pair_edge = np.zeros((2,3))
s_edge = np.zeros((2*len(cna.bonds),2,3))
t_edge = np.zeros((len(cna.perm),2,3))
pair_edge[0] = node_xyz[0]
pair_edge[1] = node_xyz[1]
for i, bond in enumerate(cna.bonds):
s_edge[2*i][0] = au[int(reference)].position
s_edge[2*i+1][0] = au[int(friend)].position
s_edge[2*i][1] = au[int(bond)].position
s_edge[2*i+1][1] = au[int(bond)].position
for i, bond in enumerate(cna.perm):
t_edge[i][0] = au[int(bond[0])].position
t_edge[i][1] = au[int(bond[1])].position
return node_xyz, pair_edge, s_edge, t_edge
def sig_plot(nodes, pair_edge, s_edges, t_edges, angle_A = 30, angle_B = 210):
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
# Plot the nodes - alpha is scaled by "depth" automatically
options = {"edgecolors": "tab:gray", "node_size": 200, "alpha": 0.9}
for i in [0,1]:
ax.scatter(*nodes[i].T, s=400, ec="w", color = 'r')
for i, node in enumerate(nodes):
if i >1:
ax.scatter(*nodes[i].T, s=400, ec="w", color = 'k')
ax.plot(*pair_edge.T, color="r", linewidth = 10)
for vizedge in s_edges:
ax.plot(*vizedge.T, color="k")
for vizedge in t_edges:
ax.plot(*vizedge.T, color="g")
def _format_axes(ax):
"""Visualization options for the 3D axes."""
# Turn gridlines off
ax.grid(False)
# Suppress tick labels
# Set axes labels
ax.set_xlabel("x (Å)")
ax.set_ylabel("y (Å)")
ax.set_zlabel("z (Å)")
ax.view_init(angle_A, angle_B)
_format_axes(ax)
#fig.tight_layout()
plt.show()
cna = CNA(adj = adj)
cna_555 = cna
cna_555.NN(0)
print(cna_555.neigh)
cna_555.R(0,1)
cna_555.S()
cna_555.T()
a,b,c,d = plotting_tool(cna_555,au,0,1)
sig_plot(a,b,c,d)
cna = CNA(adj = adj)
cna_422= cna
cna_422.NN(10)
print(cna_422.neigh)
r = cna_422.R(10,26)
s = cna_422.S()
t = cna_422.T()
print(tuple((r,s,t)))
a,b,c,d = plotting_tool(cna_422,au,10,26)
sig_plot(a,b,c,d, 15, 180)
cna = CNA(adj = adj)
cna_421 = cna
cna_421.NN(41)
print(cna_421.neigh)
r = cna_421.R(41,51)
s = cna_421.S()
t = cna_421.T()
print(tuple((r,s,t)))
a,b,c,d = plotting_tool(cna_421,au,41,51)
sig_plot(a,b,c,d,45,0)
cna = CNA(adj = adj)
cna_322 = cna
cna_322.NN(500)
print(cna_322.neigh)
r = cna_322.R(500,501)
s = cna_322.S()
t = cna_322.T()
print(tuple((r,s,t)))
a,b,c,d = plotting_tool(cna_322,au,500,501)
sig_plot(a,b,c,d,135,135)
cna = CNA(adj = adj, Fingerprint = True)
cna_311 = cna
cna_311.NN(560)
print(cna_311.neigh)
r = cna_311.R(560,558)
s = cna_311.S()
t = cna_311.T()
print(tuple((r,s,t)))
a,b,c,d = plotting_tool(cna_311,au,560,558)
sig_plot(a,b,c,d, 60, 90)
cna = CNA(adj = adj, Fingerprint = True)
cna.calculate()
pattern_dict = list(set(cna.Fingerprint))
y_axis = np.zeros(len(pattern_dict), int)
x_axis = np.arange(0, len(pattern_dict))
xlabels = [ str(x) for x in pattern_dict ] #Create a list of labels for the bar chart
for pat in cna.Fingerprint:
y_axis[pattern_dict.index(pat)] += 1
plt.barh(x_axis,y_axis)
len(xlabels)
plt.yticks(x_axis,xlabels, rotation = 0)
plt.xlabel('Counts')
plt.ylabel('Pattern')
cna = CNA(adj = adj, Fingerprint = True)
cna.calculate()
cna.Fingerprint
fig = plt.figure()
gs = fig.add_gridspec(1,2, hspace=0, wspace = 0)
axs = gs.subplots(sharex=True, sharey=True)
ax1,ax2 = axs.flatten()
cna = CNA(adj = adj, Fingerprint = True)
cna.NN(560)
r = cna_311.R(560,558)
s = cna_311.S()
t = cna_311.T()
print(tuple((r,s,t)))
a,b,c,d = plotting_tool(cna_311,au,560,558)
ax1 = sig_plot(a,b,c,d, 60, 90)
cna = CNA(adj = adj, Fingerprint = True)
cna.NN(560)
for nn in cna.neigh:
print(nn)
r = cna.R(560,nn)
s = cna.S()
t = cna.T()
print(tuple((r,s,t)))
    #a,b,c,d = plotting_tool(cna,au,560,558)
    #sig_plot(a,b,c,d, 60, 90)
###Output
212
(4, 2, 1)
267
(4, 2, 1)
308
(4, 2, 1)
398
(3, 1, 1)
400
(3, 1, 1)
485
(3, 1, 1)
486
(3, 1, 1)
556
(3, 1, 1)
558
(3, 1, 1)
|
Model backlog/Train/25-tweet-train-distilbert-max-len-160.ipynb | ###Markdown
Dependencies
###Code
import json
from tweet_utility_scripts import *
from transformers import TFDistilBertModel, DistilBertConfig
from tokenizers import BertWordPieceTokenizer
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers, metrics, losses
from tensorflow.keras.callbacks import EarlyStopping, TensorBoard
from tensorflow.keras.layers import Dense, Input, Dropout, GlobalAveragePooling1D, GlobalMaxPooling1D, Concatenate
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
Load data
###Code
database_base_path = '/kaggle/input/tweet-dataset-split-distilbert-uncased-160/'
hold_out = pd.read_csv(database_base_path + 'hold-out.csv')
train = hold_out[hold_out['set'] == 'train']
validation = hold_out[hold_out['set'] == 'validation']
display(hold_out.head())
# Unzip files
!tar -xvf /kaggle/input/tweet-dataset-split-distilbert-uncased-160/hold_out.tar.gz
base_data_path = 'hold_out/'
x_train = np.load(base_data_path + 'x_train.npy')
y_train = np.load(base_data_path + 'y_train.npy')
x_valid = np.load(base_data_path + 'x_valid.npy')
y_valid = np.load(base_data_path + 'y_valid.npy')
# Delete data dir
shutil.rmtree(base_data_path)
###Output
_____no_output_____
###Markdown
Model parameters
###Code
tokenizer_path = database_base_path + 'vocab.txt'
base_path = '/kaggle/input/qa-transformers/distilbert/'
model_path = 'model.h5'
config = {
"MAX_LEN": 160,
"BATCH_SIZE": 64,
"EPOCHS": 20,
"LEARNING_RATE": 1e-5,
"ES_PATIENCE": 3,
"question_size": 3,
"base_model_path": base_path + 'distilbert-base-uncased-distilled-squad-tf_model.h5',
"config_path": base_path + 'distilbert-base-uncased-distilled-squad-config.json'
}
with open('config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
###Output
_____no_output_____
###Markdown
Model
###Code
module_config = DistilBertConfig.from_pretrained(config['config_path'], output_hidden_states=False)
def model_fn(MAX_LEN):
input_ids = Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
token_type_ids = Input(shape=(MAX_LEN,), dtype=tf.int32, name='token_type_ids')
base_model = TFDistilBertModel.from_pretrained(config['base_model_path'], config=module_config, name="base_model")
sequence_output = base_model({'input_ids': input_ids, 'attention_mask': attention_mask, 'token_type_ids': token_type_ids})
last_state = sequence_output[0]
x = GlobalAveragePooling1D()(last_state)
y_start = Dense(MAX_LEN, activation='softmax', name='y_start')(x)
y_end = Dense(MAX_LEN, activation='softmax', name='y_end')(x)
model = Model(inputs=[input_ids, attention_mask, token_type_ids], outputs=[y_start, y_end])
model.compile(optimizers.Adam(lr=config['LEARNING_RATE']),
loss=losses.CategoricalCrossentropy(),
metrics=[metrics.CategoricalAccuracy()])
return model
model = model_fn(config['MAX_LEN'])
model.summary()
###Output
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
attention_mask (InputLayer) [(None, 160)] 0
__________________________________________________________________________________________________
input_ids (InputLayer) [(None, 160)] 0
__________________________________________________________________________________________________
token_type_ids (InputLayer) [(None, 160)] 0
__________________________________________________________________________________________________
base_model (TFDistilBertModel) ((None, 160, 768),) 66362880 attention_mask[0][0]
input_ids[0][0]
token_type_ids[0][0]
__________________________________________________________________________________________________
global_average_pooling1d (Globa (None, 768) 0 base_model[0][0]
__________________________________________________________________________________________________
y_start (Dense) (None, 160) 123040 global_average_pooling1d[0][0]
__________________________________________________________________________________________________
y_end (Dense) (None, 160) 123040 global_average_pooling1d[0][0]
==================================================================================================
Total params: 66,608,960
Trainable params: 66,608,960
Non-trainable params: 0
__________________________________________________________________________________________________
###Markdown
Train
###Code
tb_callback = TensorBoard(log_dir='./')
es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'],
restore_best_weights=True, verbose=1)
history = model.fit(list(x_train), list(y_train),
validation_data=(list(x_valid), list(y_valid)),
callbacks=[es, tb_callback],
epochs=config['EPOCHS'],
batch_size=config['BATCH_SIZE'],
verbose=2).history
model.save_weights(model_path)
# Compress logs dir
!tar -cvzf train.tar.gz train
!tar -cvzf validation.tar.gz validation
# Delete logs dir
if os.path.exists('/kaggle/working/train/'):
shutil.rmtree('/kaggle/working/train/')
if os.path.exists('/kaggle/working/validation/'):
shutil.rmtree('/kaggle/working/validation/')
###Output
train/
train/events.out.tfevents.1586614134.4128e360aec4.12.5418.v2
train/plugins/
train/plugins/profile/
train/plugins/profile/2020-04-11_14-09-04/
train/plugins/profile/2020-04-11_14-09-04/local.trace
train/events.out.tfevents.1586614144.4128e360aec4.profile-empty
validation/
validation/events.out.tfevents.1586614354.4128e360aec4.12.24091.v2
###Markdown
Model loss graph
###Code
sns.set(style="whitegrid")
plot_metrics(history, metric_list=['loss', 'y_start_loss', 'y_end_loss',
'y_start_categorical_accuracy', 'y_end_categorical_accuracy'])
###Output
_____no_output_____
###Markdown
Tokenizer
###Code
tokenizer = BertWordPieceTokenizer(tokenizer_path , lowercase=True)
tokenizer.save('./')
###Output
_____no_output_____
###Markdown
Model evaluation
###Code
train_preds = model.predict(list(x_train))
valid_preds = model.predict(list(x_valid))
train['start'] = train_preds[0].argmax(axis=-1)
train['end'] = train_preds[1].argmax(axis=-1)
train["end"].clip(0, train["text_len"], inplace=True)
train["start"].clip(0, train["end"], inplace=True)
train['prediction'] = train.apply(lambda x: decode(x['start'], x['end'], x['text'], config['question_size'], tokenizer), axis=1)
train["prediction"].fillna('', inplace=True)
validation['start'] = valid_preds[0].argmax(axis=-1)
validation['end'] = valid_preds[1].argmax(axis=-1)
validation["end"].clip(0, validation["text_len"], inplace=True)
validation["start"].clip(0, validation["end"], inplace=True)
validation['prediction'] = validation.apply(lambda x: decode(x['start'], x['end'], x['text'], config['question_size'], tokenizer), axis=1)
validation["prediction"].fillna('', inplace=True)
display(evaluate_model(train, validation))
###Output
_____no_output_____
###Markdown
Visualize predictions
###Code
print('Train set')
display(train.head(10))
print('Validation set')
display(validation.head(10))
###Output
Train set
|
tutorials/W3D2_DynamicNetworks/W3D2_Tutorial1.ipynb | ###Markdown
Neuromatch Academy: Week 3, Day 2, Tutorial 1 Neuronal Network Dynamics: Neural Rate Models__Content creators:__ Qinglong Gu, Songtin Li, Arvind Kumar, John Murray, Julijana Gjorgjieva __Content reviewers:__ Maryam Vaziri-Pashkam, Ella Batty, Lorenzo Fontolan, Richard Gao, Spiros Chavlis, Michael Waskom --- Tutorial ObjectivesThe brain is a complex system, not because it is composed of a large number of diverse types of neurons, but mainly because of how neurons are connected to each other. The brain is indeed a network of highly specialized neuronal networks. The activity of a neural network constantly evolves in time. For this reason, neurons can be modeled as dynamical systems. The dynamical system approach is only one of the many modeling approaches that computational neuroscientists have developed (other points of view include information processing, statistical models, etc.). How the dynamics of neuronal networks affect the representation and processing of information in the brain is an open question. However, signatures of altered brain dynamics present in many brain diseases (e.g., in epilepsy or Parkinson's disease) tell us that it is crucial to study network activity dynamics if we want to understand the brain.In this tutorial, we will simulate and study one of the simplest models of biological neuronal networks. Instead of modeling and simulating individual excitatory neurons (e.g., LIF models that you implemented yesterday), we will treat them as a single homogeneous population and approximate their dynamics using a single one-dimensional equation describing the evolution of their average spiking rate in time.In this tutorial, we will learn how to build a firing rate model of a single population of excitatory neurons. **Steps:**- Write the equation for the firing rate dynamics of a 1D excitatory population.- Visualize the response of the population as a function of parameters such as threshold level and gain, using the frequency-current (F-I) curve.- Numerically simulate the dynamics of the excitatory population and find the fixed points of the system. - Investigate the stability of the fixed points by linearizing the dynamics around them. --- Setup
###Code
# Imports
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt # root-finding algorithm
# @title Figure Settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("/share/dataset/COMMON/nma.mplstyle.txt")
# @title Helper functions
def plot_fI(x, f):
plt.figure(figsize=(6, 4)) # plot the figure
plt.plot(x, f, 'k')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('F(x)', fontsize=14)
plt.show()
def plot_dr_r(r, drdt, x_fps=None):
plt.figure()
plt.plot(r, drdt, 'k')
plt.plot(r, 0. * r, 'k--')
if x_fps is not None:
plt.plot(x_fps, np.zeros_like(x_fps), "ko", ms=12)
plt.xlabel(r'$r$')
plt.ylabel(r'$\frac{dr}{dt}$', fontsize=20)
plt.ylim(-0.1, 0.1)
def plot_dFdt(x, dFdt):
plt.figure()
plt.plot(x, dFdt, 'r')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('dF(x)', fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
--- Section 1: Neuronal network dynamics
###Code
# @title Video 1: Dynamic networks
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1dh411o7qJ', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1dh411o7qJ
###Markdown
Section 1.1: Dynamics of a single excitatory populationIndividual neurons respond by spiking. When we average the spikes of neurons in a population, we can define the average firing activity of the population. In this model, we are interested in how the population-averaged firing varies as a function of time and network parameters. Mathematically, we can describe the firing rate dynamic as:\begin{align}\tau \frac{dr}{dt} &= -r + F(w\cdot r + I_{\text{ext}}) \quad\qquad (1)\end{align}$r(t)$ represents the average firing rate of the excitatory population at time $t$, $\tau$ controls the timescale of the evolution of the average firing rate, $w$ denotes the strength (synaptic weight) of the recurrent input to the population, $I_{\text{ext}}$ represents the external input, and the transfer function $F(\cdot)$ (which can be related to f-I curve of individual neurons described in the next sections) represents the population activation function in response to all received inputs.To start building the model, please execute the cell below to initialize the simulation parameters.
###Code
# @markdown *Execute this cell to set default parameters for a single excitatory population model*
def default_pars_single(**kwargs):
pars = {}
# Excitatory parameters
pars['tau'] = 1. # Timescale of the E population [ms]
pars['a'] = 1.2 # Gain of the E population
pars['theta'] = 2.8 # Threshold of the E population
# Connection strength
pars['w'] = 0. # E to E, we first set it to 0
# External input
pars['I_ext'] = 0.
# simulation parameters
pars['T'] = 20. # Total duration of simulation [ms]
pars['dt'] = .1 # Simulation time step [ms]
pars['r_init'] = 0.2 # Initial value of E
# External parameters if any
pars.update(kwargs)
# Vector of discretized time points [ms]
pars['range_t'] = np.arange(0, pars['T'], pars['dt'])
return pars
###Output
_____no_output_____
###Markdown
You can now use:- `pars = default_pars_single()` to get all the parameters, and then you can execute `print(pars)` to check these parameters. - `pars = default_pars_single(T=T_sim, dt=time_step)` to set new simulation time and time step- To update an existing parameter dictionary, use `pars['New_para'] = value`Because `pars` is a dictionary, it can be passed to a function that requires individual parameters as arguments using `my_func(**pars)` syntax. Section 1.2: F-I curvesIn electrophysiology, a neuron is often characterized by its spike rate output in response to input currents. This is often called the **F-I** curve, denoting the output spike frequency (**F**) in response to different injected currents (**I**). We estimated this for an LIF neuron in yesterday's tutorial.The transfer function $F(\cdot)$ in Equation $1$ represents the gain of the population as a function of the total input. The gain is often modeled as a sigmoidal function, i.e., more input drive leads to a nonlinear increase in the population firing rate. The output firing rate will eventually saturate for high input values. A sigmoidal $F(\cdot)$ is parameterized by its gain $a$ and threshold $\theta$.$$ F(x;a,\theta) = \frac{1}{1+\text{e}^{-a(x-\theta)}} - \frac{1}{1+\text{e}^{a\theta}} \quad(2)$$The argument $x$ represents the input to the population. Note that the second term is chosen so that $F(0;a,\theta)=0$.Many other transfer functions (generally monotonic) can be also used. Examples are the rectified linear function $ReLU(x)$ or the hyperbolic tangent $tanh(x)$. Exercise 1: Implement F-I curve Let's first investigate the activation functions before simulating the dynamics of the entire population. In this exercise, you will implement a sigmoidal **F-I** curve or transfer function $F(x)$, with gain $a$ and threshold level $\theta$ as parameters.
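For instance, a quick check of the parameter helper (purely illustrative):
```python
pars = default_pars_single(T=10., dt=0.05)  # override simulation length and time step
pars['I_ext'] = 0.5                         # update an existing entry
print(pars['tau'], pars['range_t'].shape)   # -> 1.0 (200,)
```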
###Code
def F(x, a, theta):
"""
Population activation function.
Args:
x (float): the population input
a (float): the gain of the function
theta (float): the threshold of the function
Returns:
float: the population activation response F(x) for input x
"""
#################################################
## TODO for students: compute f = F(x) ##
# Fill out function and remove
  raise NotImplementedError("Student exercise: implement the f-I function")
#################################################
# Define the sigmoidal transfer function f = F(x)
f = ...
return f
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
# Uncomment below to test your function
# f = F(x, pars['a'], pars['theta'])
# plot_fI(x, f)
# to_remove solution
def F(x, a, theta):
"""
Population activation function.
Args:
x (float): the population input
a (float): the gain of the function
theta (float): the threshold of the function
Returns:
float: the population activation response F(x) for input x
"""
# Define the sigmoidal transfer function f = F(x)
f = (1 + np.exp(-a * (x - theta)))**-1 - (1 + np.exp(a * theta))**-1
return f
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
# Uncomment below to test your function
f = F(x, pars['a'], pars['theta'])
with plt.xkcd():
plot_fI(x, f)
###Output
_____no_output_____
###Markdown
Interactive Demo: Parameter exploration of F-I curveHere's an interactive demo that shows how the F-I curve changes for different values of the gain and threshold parameters. How do the gain and threshold parameters affect the F-I curve?
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
def interactive_plot_FI(a, theta):
"""
Population activation function.
  Expects:
a : the gain of the function
theta : the threshold of the function
Returns:
plot the F-I curve with give parameters
"""
# set the range of input
x = np.arange(0, 10, .1)
plt.figure()
plt.plot(x, F(x, a, theta), 'k')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('F(x)', fontsize=14)
plt.show()
_ = widgets.interact(interactive_plot_FI, a=(0.3, 3, 0.3), theta=(2, 4, 0.2))
# to_remove explanation
"""
Discussion:
For the function we have chosen to model the F-I curve (eq 2),
- a determines the slope (gain) of the rising phase of the F-I curve
- theta determines the input at which the function F(x) reaches its mid-value (0.5).
That is, theta shifts the F-I curve along the horizontal axis.
For our neurons we are using in this tutorial:
- a controls the gain of the neuron population
- theta controls the threshold at which the neuron population starts to respond
""";
###Output
_____no_output_____
###Markdown
Section 1.3: Simulation scheme of E dynamicsBecause $F(\cdot)$ is a nonlinear function, the exact solution of Equation $1$ can not be determined via analytical methods. Therefore, numerical methods must be used to find the solution. In practice, the derivative on the left-hand side of Equation $1$ can be approximated using the Euler method on a time-grid of stepsize $\Delta t$:\begin{align}&\frac{dr}{dt} \approx \frac{r[k+1]-r[k]}{\Delta t} \end{align}where $r[k] = r(k\Delta t)$. Thus,$$\Delta r[k] = \frac{\Delta t}{\tau}[-r[k] + F(w\cdot r[k] + I_{\text{ext}}(k;a,\theta))]$$Hence, Equation (1) is updated at each time step by:$$r[k+1] = r[k] + \Delta r[k]$$
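A minimal sketch of a single such update is shown below (it reuses the transfer function `F` from Exercise 1; the full simulator defined in the next cell simply wraps this update in a loop over all time steps):
```python
def euler_step(r_k, dt, tau, w, I_ext, a, theta):
  """One forward-Euler update of Equation (1)."""
  dr = dt / tau * (-r_k + F(w * r_k + I_ext, a, theta))
  return r_k + dr
```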
###Code
# @markdown *Execute this cell to enable the single population rate model simulator: `simulate_single`*
def simulate_single(pars):
"""
Simulate an excitatory population of neurons
Args:
pars : Parameter dictionary
Returns:
rE : Activity of excitatory population (array)
Example:
pars = default_pars_single()
r = simulate_single(pars)
"""
# Set parameters
tau, a, theta = pars['tau'], pars['a'], pars['theta']
w = pars['w']
I_ext = pars['I_ext']
r_init = pars['r_init']
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
# Initialize activity
r = np.zeros(Lt)
r[0] = r_init
I_ext = I_ext * np.ones(Lt)
# Update the E activity
for k in range(Lt - 1):
dr = dt / tau * (-r[k] + F(w * r[k] + I_ext[k], a, theta))
r[k+1] = r[k] + dr
return r
help(simulate_single)
###Output
Help on function simulate_single in module __main__:
simulate_single(pars)
Simulate an excitatory population of neurons
Args:
pars : Parameter dictionary
Returns:
rE : Activity of excitatory population (array)
Example:
pars = default_pars_single()
r = simulate_single(pars)
###Markdown
Interactive Demo: Parameter Exploration of single population dynamicsNote that $w=0$, as in the default setting, means no recurrent input to the neuron population in Equation (1). Hence, the dynamics are entirely determined by the external input $I_{\text{ext}}$. Explore these dynamics in this interactive demo.How does $r_{\text{sim}}(t)$ change with different $I_{\text{ext}}$ values? How does it change with different $\tau$ values? Investigate the relationship between $F(I_{\text{ext}}; a, \theta)$ and the steady value of $r(t)$. Note that, $r_{\rm ana}(t)$ denotes the analytical solution - you will learn how this is computed in the next section.
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
# get default parameters
pars = default_pars_single(T=20.)
def Myplot_E_diffI_difftau(I_ext, tau):
# set external input and time constant
pars['I_ext'] = I_ext
pars['tau'] = tau
# simulation
r = simulate_single(pars)
# Analytical Solution
r_ana = (pars['r_init']
+ (F(I_ext, pars['a'], pars['theta'])
- pars['r_init']) * (1. - np.exp(-pars['range_t'] / pars['tau'])))
# plot
plt.figure()
plt.plot(pars['range_t'], r, 'b', label=r'$r_{\mathrm{sim}}$(t)', alpha=0.5,
zorder=1)
plt.plot(pars['range_t'], r_ana, 'b--', lw=5, dashes=(2, 2),
label=r'$r_{\mathrm{ana}}$(t)', zorder=2)
plt.plot(pars['range_t'],
F(I_ext, pars['a'], pars['theta']) * np.ones(pars['range_t'].size),
'k--', label=r'$F(I_{\mathrm{ext}})$')
plt.xlabel('t (ms)', fontsize=16.)
plt.ylabel('Activity r(t)', fontsize=16.)
plt.legend(loc='best', fontsize=14.)
plt.show()
_ = widgets.interact(Myplot_E_diffI_difftau, I_ext=(0.0, 10., 1.),
tau=(1., 5., 0.2))
# to_remove explanation
"""
Discussion:
Given the choice of F-I curve (eq 2) and dynamics of the neuron population (eq. 1)
the neurons have two fixed points or steady-state responses irrespective of the input.
- Weak inputs to the neurons eventually result in the activity converging to zero
- Strong inputs to the neurons eventually result in the activity converging to max value
The time constant tau, does not affect the steady-state response but it determines
the time the neurons take to reach to their fixed point.
""";
###Output
_____no_output_____
###Markdown
Think!Above, we have numerically solved a system driven by a positive input and that, if $w_{EE} \neq 0$, receives an excitatory recurrent input (**extra challenge: try changing the value of $w_{EE}$ to a positive number and plotting the results of simulate_single**). Yet, $r_E(t)$ either decays to zero or reaches a fixed non-zero value.- Why doesn't the solution of the system "explode" in a finite time? In other words, what guarantees that $r_E$(t) stays finite? - Which parameter would you change in order to increase the maximum value of the response?
###Code
# to_remove explanation
"""
Discussion:
1) As the F-I curve is bounded between zero and one, the system doesn't explode.
The f-curve guarantees this property
2) One way to increase the maximum response is to change the f-I curve. For
example, the ReLU is an unbounded function, and thus will increase the overall maximal
response of the network.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: Fixed points of the single population system
###Code
# @title Video 2: Fixed point
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1v54y1v7Gr', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1v54y1v7Gr
###Markdown
As you varied the two parameters in the last Interactive Demo, you noticed that, while at first the system output quickly changes, with time, it reaches its maximum/minimum value and does not change anymore. The value eventually reached by the system is called the **steady state** of the system, or the **fixed point**. Essentially, in the steady states the derivative with respect to time of the activity ($r$) is zero, i.e. $\displaystyle \frac{dr}{dt}=0$. We can find that the steady state of the Equation. (1) by setting $\displaystyle{\frac{dr}{dt}=0}$ and solve for $r$:$$-r_{\text{steady}} + F(w\cdot r_{\text{steady}} + I_{\text{ext}};a,\theta) = 0, \qquad (3)$$When it exists, the solution of Equation. (3) defines a **fixed point** of the dynamical system in Equation (1). Note that if $F(x)$ is nonlinear, it is not always possible to find an analytical solution, but the solution can be found via numerical simulations, as we will do later.From the Interactive Demo, one could also notice that the value of $\tau$ influences how quickly the activity will converge to the steady state from its initial value. In the specific case of $w=0$, we can also analytically compute the solution of Equation (1) (i.e., the thick blue dashed line) and deduce the role of $\tau$ in determining the convergence to the fixed point: $$\displaystyle{r(t) = \big{[}F(I_{\text{ext}};a,\theta) -r(t=0)\big{]} (1-\text{e}^{-\frac{t}{\tau}})} + r(t=0)$$ \\We can now numerically calculate the fixed point with a root finding algorithm. Exercise 2: Visualization of the fixed pointsWhen it is not possible to find the solution for Equation (3) analytically, a graphical approach can be taken. To that end, it is useful to plot $\displaystyle{\frac{dr}{dt}}$ as a function of $r$. The values of $r$ for which the plotted function crosses zero on the y axis correspond to fixed points. Here, let us, for example, set $w=5.0$ and $I^{\text{ext}}=0.5$. From Equation (1), you can obtain$$\frac{dr}{dt} = [-r + F(w\cdot r + I^{\text{ext}})]\,/\,\tau $$Then, plot the $dr/dt$ as a function of $r$, and check for the presence of fixed points.
###Code
def compute_drdt(r, I_ext, w, a, theta, tau, **other_pars):
"""Given parameters, compute dr/dt as a function of r.
Args:
r (1D array) : Average firing rate of the excitatory population
I_ext, w, a, theta, tau (numbers): Simulation parameters to use
other_pars : Other simulation parameters are unused by this function
Returns
drdt function for each value of r
"""
#########################################################################
# TODO compute drdt and disable the error
raise NotImplementedError("Finish the compute_drdt function")
#########################################################################
# Calculate drdt
drdt = ...
return drdt
# Define a vector of r values and the simulation parameters
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
# Uncomment to test your function
# drdt = compute_drdt(r, **pars)
# plot_dr_r(r, drdt)
# to_remove solution
def compute_drdt(r, I_ext, w, a, theta, tau, **other_pars):
"""Given parameters, compute dr/dt as a function of r.
Args:
r (1D array) : Average firing rate of the excitatory population
I_ext, w, a, theta, tau (numbers): Simulation parameters to use
other_pars : Other simulation parameters are unused by this function
Returns
drdt function for each value of r
"""
# Calculate drdt
drdt = -r + F(w * r + I_ext, a, theta) / tau
return drdt
# Define a vector of r values and the simulation parameters
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
drdt = compute_drdt(r, **pars)
with plt.xkcd():
plot_dr_r(r, drdt)
###Output
_____no_output_____
###Markdown
Exercise 3: Fixed point calculationWe will now find the fixed points numerically. To do so, we need to specify initial values ($r_{\text{guess}}$) for the root-finding algorithm to start from. From the line $\displaystyle{\frac{dr}{dt}}$ plotted above in Exercise 2, initial values can be chosen as a set of values close to where the line crosses zero on the y axis (real fixed point).The next cell defines three helper functions that we will use:- `my_fp_single(r_guess, **pars)` uses a root-finding algorithm to locate a fixed point near a given initial value- `check_fp_single(x_fp, **pars)` verifies that the values of $r_{\rm fp}$ for which $\displaystyle{\frac{dr}{dt}} = 0$ are the true fixed points- `my_fp_finder(r_guess_vector, **pars)` accepts an array of initial values and finds the same number of fixed points, using the above two functions
###Code
# @markdown *Execute this cell to enable the fixed point functions*
def my_fp_single(r_guess, a, theta, w, I_ext, **other_pars):
"""
Calculate the fixed point through drE/dt=0
Args:
r_guess : Initial value used for scipy.optimize function
a, theta, w, I_ext : simulation parameters
Returns:
x_fp : value of fixed point
"""
# define the right hand of E dynamics
def my_WCr(x):
r = x
drdt = (-r + F(w * r + I_ext, a, theta))
y = np.array(drdt)
return y
x0 = np.array(r_guess)
x_fp = opt.root(my_WCr, x0).x.item()
return x_fp
def check_fp_single(x_fp, a, theta, w, I_ext, mytol=1e-4, **other_pars):
"""
Verify |dr/dt| < mytol
Args:
fp : value of fixed point
a, theta, w, I_ext: simulation parameters
mytol : tolerance, default as 10^{-4}
Returns :
Whether it is a correct fixed point: True/False
"""
# calculate Equation(3)
y = x_fp - F(w * x_fp + I_ext, a, theta)
# Here we set tolerance as 10^{-4}
return np.abs(y) < mytol
def my_fp_finder(pars, r_guess_vector, mytol=1e-4):
"""
Calculate the fixed point(s) through drE/dt=0
Args:
pars : Parameter dictionary
r_guess_vector : Initial values used for scipy.optimize function
mytol : tolerance for checking fixed point, default as 10^{-4}
Returns:
x_fps : values of fixed points
"""
x_fps = []
correct_fps = []
for r_guess in r_guess_vector:
x_fp = my_fp_single(r_guess, **pars)
if check_fp_single(x_fp, **pars, mytol=mytol):
x_fps.append(x_fp)
return x_fps
help(my_fp_finder)
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
drdt = compute_drdt(r, **pars)
#############################################################################
# TODO for students:
# Define initial values close to the intersections of drdt and y=0
# (How many initial values? Hint: How many times do the two lines intersect?)
# Calculate the fixed point with these initial values and plot them
#############################################################################
r_guess_vector = [...]
# Uncomment to test your values
# x_fps = my_fp_finder(pars, r_guess_vector)
# plot_dr_r(r, drdt, x_fps)
# to_remove solution
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
drdt = compute_drdt(r, **pars)
r_guess_vector = [0, .4, .9]
x_fps = my_fp_finder(pars, r_guess_vector)
with plt.xkcd():
plot_dr_r(r, drdt, x_fps)
###Output
_____no_output_____
###Markdown
Interactive Demo: fixed points as a function of recurrent and external inputs.You can now explore how the previous plot changes when the recurrent coupling $w$ and the external input $I_{\text{ext}}$ take different values. How does the number of fixed points change?
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
def plot_intersection_single(w, I_ext):
# set your parameters
pars = default_pars_single(w=w, I_ext=I_ext)
# find fixed points
r_init_vector = [0, .4, .9]
x_fps = my_fp_finder(pars, r_init_vector)
# plot
r = np.linspace(0, 1., 1000)
drdt = (-r + F(w * r + I_ext, pars['a'], pars['theta'])) / pars['tau']
plot_dr_r(r, drdt, x_fps)
_ = widgets.interact(plot_intersection_single, w=(1, 7, 0.2),
I_ext=(0, 3, 0.1))
# to_remove explanation
"""
Discussion:
The fixed points of the single excitatory neuron population are determined by both
recurrent connections w and external input I_ext. In a previous interactive demo
we saw how the system showed two different steady-states when w = 0. But when w
does not equal 0, for some range of w the system shows three fixed points (the middle
one being unstable) and the steady state depends on the initial conditions (i.e.
r at time zero.).
More on this will be explained in the next section.
""";
###Output
_____no_output_____
###Markdown
--- SummaryIn this tutorial, we have investigated the dynamics of a rate-based single population of neurons.We learned about:- The effect of the input parameters and the time constant of the network on the dynamics of the population.- How to find the fixed point(s) of the system.Next, we have two Bonus, but important concepts in dynamical system analysis and simulation. If you have time left, watch the next video and proceed to solve the exercises. You will learn:- How to determine the stability of a fixed point by linearizing the system.- How to add realistic inputs to our model. --- Bonus 1: Stability of a fixed point
###Code
# @title Video 3: Stability of fixed points
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1oA411e7eg', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1oA411e7eg
###Markdown
Initial values and trajectoriesHere, let us first set $w=5.0$ and $I_{\text{ext}}=0.5$, and investigate the dynamics of $r(t)$ starting with different initial values $r(0) \equiv r_{\text{init}}$. We will plot the trajectories of $r(t)$ with $r_{\text{init}} = 0.0, 0.1, 0.2,..., 0.9$.
###Code
# @markdown Execute this cell to see the trajectories!
pars = default_pars_single()
pars['w'] = 5.0
pars['I_ext'] = 0.5
plt.figure(figsize=(8, 5))
for ie in range(10):
pars['r_init'] = 0.1 * ie # set the initial value
r = simulate_single(pars) # run the simulation
# plot the activity with given initial
plt.plot(pars['range_t'], r, 'b', alpha=0.1 + 0.1 * ie,
label=r'r$_{\mathrm{init}}$=%.1f' % (0.1 * ie))
plt.xlabel('t (ms)')
plt.title('Two steady states?')
plt.ylabel(r'$r$(t)')
plt.legend(loc=[1.01, -0.06], fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
Interactive Demo: dynamics as a function of the initial valueLet's now set $r_{\rm init}$ to a value of your choice in this demo. How does the solution change? What do you observe?
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
pars = default_pars_single(w=5.0, I_ext=0.5)
def plot_single_diffEinit(r_init):
pars['r_init'] = r_init
r = simulate_single(pars)
plt.figure()
plt.plot(pars['range_t'], r, 'b', zorder=1)
plt.plot(0, r[0], 'bo', alpha=0.7, zorder=2)
plt.xlabel('t (ms)', fontsize=16)
plt.ylabel(r'$r(t)$', fontsize=16)
plt.ylim(0, 1.0)
plt.show()
_ = widgets.interact(plot_single_diffEinit, r_init=(0, 1, 0.02))
# to_remove explanation
"""
Discussion:
To better appreciate what is happening here, you should go back to the previous
interactive demo. Set w = 5 and I_ext = 0.5.
You will find that there are three fixed points of the system for these values of
w and I_ext. Now, choose the initial value in this demo and see in which direction
the system output moves. When r_init is in the vicinity of the leftmost fixed point,
it moves towards the leftmost fixed point. When r_init is in the vicinity of the
rightmost fixed point, it moves towards the rightmost fixed point.
""";
###Output
_____no_output_____
###Markdown
Stability analysis via linearization of the dynamicsJust like Equation $1$ in the case ($w=0$) discussed above, a generic linear system $$\frac{dx}{dt} = \lambda (x - b),$$ has a fixed point for $x=b$. The analytical solution of such a system can be found to be:$$x(t) = b + \big{(} x(0) - b \big{)} \text{e}^{\lambda t}.$$ Now consider a small perturbation of the activity around the fixed point: $x(0) = b+ \epsilon$, where $|\epsilon| \ll 1$. Will the perturbation $\epsilon(t)$ grow with time or will it decay to the fixed point? The evolution of the perturbation with time can be written, using the analytical solution for $x(t)$, as: $$\epsilon (t) = x(t) - b = \epsilon \text{e}^{\lambda t}$$- if $\lambda < 0$, $\epsilon(t)$ decays to zero, $x(t)$ will still converge to $b$ and the fixed point is "**stable**".- if $\lambda > 0$, $\epsilon(t)$ grows with time, $x(t)$ will leave the fixed point $b$ exponentially, and the fixed point is, therefore, "**unstable**" . Compute the stability of Equation $1$Similar to what we did in the linear system above, in order to determine the stability of a fixed point $r^{*}$ of the excitatory population dynamics, we perturb Equation (1) around $r^{*}$ by $\epsilon$, i.e. $r = r^{*} + \epsilon$. We can plug in Equation (1) and obtain the equation determining the time evolution of the perturbation $\epsilon(t)$:\begin{align}\tau \frac{d\epsilon}{dt} \approx -\epsilon + w F'(w\cdot r^{*} + I_{\text{ext}};a,\theta) \epsilon \end{align}where $F'(\cdot)$ is the derivative of the transfer function $F(\cdot)$. We can rewrite the above equation as:\begin{align}\frac{d\epsilon}{dt} \approx \frac{\epsilon}{\tau }[-1 + w F'(w\cdot r^* + I_{\text{ext}};a,\theta)] \end{align}That is, as in the linear system above, the value of$$\lambda = [-1+ wF'(w\cdot r^* + I_{\text{ext}};a,\theta)]/\tau \qquad (4)$$determines whether the perturbation will grow or decay to zero, i.e., $\lambda$ defines the stability of the fixed point. This value is called the **eigenvalue** of the dynamical system. Exercise 4: Compute $dF$The derivative of the sigmoid transfer function is:\begin{align} \frac{dF}{dx} & = \frac{d}{dx} (1+\exp\{-a(x-\theta)\})^{-1} \\& = a\exp\{-a(x-\theta)\} (1+\exp\{-a(x-\theta)\})^{-2}. \qquad (5)\end{align}Let's now find the expression for the derivative $\displaystyle{\frac{dF}{dx}}$ in the following cell and plot it.
###Code
def dF(x, a, theta):
"""
Population activation function.
Args:
x : the population input
a : the gain of the function
theta : the threshold of the function
Returns:
dFdx : the population activation response F(x) for input x
"""
###########################################################################
# TODO for students: compute dFdx ##
  raise NotImplementedError("Student exercise: compute the derivative of F")
###########################################################################
# Calculate the population activation
dFdx = ...
return dFdx
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
# Uncomment below to test your function
# df = dF(x, pars['a'], pars['theta'])
# plot_dFdt(x, df)
# to_remove solution
def dF(x, a, theta):
"""
Population activation function.
Args:
x : the population input
a : the gain of the function
theta : the threshold of the function
Returns:
dFdx : the population activation response F(x) for input x
"""
# Calculate the population activation
dFdx = a * np.exp(-a * (x - theta)) * (1 + np.exp(-a * (x - theta)))**-2
return dFdx
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
df = dF(x, pars['a'], pars['theta'])
with plt.xkcd():
plot_dFdt(x, df)
###Output
_____no_output_____
###Markdown
Exercise 5: Compute eigenvaluesAs discussed above, for the case with $w=5.0$ and $I_{\text{ext}}=0.5$, the system displays **three** fixed points. However, when we simulated the dynamics and varied the initial conditions $r_{\rm init}$, we could only obtain **two** steady states. In this exercise, we will now check the stability of each of the three fixed points by calculating the corresponding eigenvalues with the function `eig_single`. Check the sign of each eigenvalue (i.e., stability of each fixed point). How many of the fixed points are stable?Note that the expression of the eigenvalue at fixed point $r^*$$$\lambda = [-1+ wF'(w\cdot r^* + I_{\text{ext}};a,\theta)]/\tau$$
###Code
def eig_single(fp, tau, a, theta, w, I_ext, **other_pars):
"""
Args:
fp : fixed point r_fp
tau, a, theta, w, I_ext : Simulation parameters
Returns:
    eig : eigenvalue of the linearized system
"""
#####################################################################
## TODO for students: compute eigenvalue and disable the error
  raise NotImplementedError("Student exercise: compute the eigenvalue")
######################################################################
# Compute the eigenvalue
eig = ...
return eig
# Find the eigenvalues for all fixed points of Exercise 2
pars = default_pars_single(w=5, I_ext=.5)
r_guess_vector = [0, .4, .9]
x_fp = my_fp_finder(pars, r_guess_vector)
# Uncomment below lines after completing the eig_single function.
# for fp in x_fp:
# eig_fp = eig_single(fp, **pars)
# print(f'Fixed point1 at {fp:.3f} with Eigenvalue={eig_fp:.3f}')
###Output
_____no_output_____
###Markdown
**SAMPLE OUTPUT**```Fixed point1 at 0.042 with Eigenvalue=-0.583Fixed point2 at 0.447 with Eigenvalue=0.498Fixed point3 at 0.900 with Eigenvalue=-0.626```
###Code
# to_remove solution
def eig_single(fp, tau, a, theta, w, I_ext, **other_pars):
"""
Args:
fp : fixed point r_fp
tau, a, theta, w, I_ext : Simulation parameters
Returns:
    eig : eigenvalue of the linearized system
"""
# Compute the eigenvalue
eig = (-1. + w * dF(w * fp + I_ext, a, theta)) / tau
return eig
# Find the eigenvalues for all fixed points of Exercise 2
pars = default_pars_single(w=5, I_ext=.5)
r_guess_vector = [0, .4, .9]
x_fp = my_fp_finder(pars, r_guess_vector)
for fp in x_fp:
eig_fp = eig_single(fp, **pars)
print(f'Fixed point1 at {fp:.3f} with Eigenvalue={eig_fp:.3f}')
###Output
Fixed point1 at 0.042 with Eigenvalue=-0.583
Fixed point1 at 0.447 with Eigenvalue=0.498
Fixed point1 at 0.900 with Eigenvalue=-0.626
###Markdown
Think! Throughout the tutorial, we have assumed $w> 0 $, i.e., we considered a single population of **excitatory** neurons. What do you think will be the behavior of a population of inhibitory neurons, i.e., where $w> 0$ is replaced by $w< 0$?
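One quick way to check your intuition numerically (illustrative only, reusing `default_pars_single` and `simulate_single` from above):
```python
pars_inh = default_pars_single(w=-5., I_ext=0.5, T=50.)
r_inh = simulate_single(pars_inh)
print(f'Steady-state activity with w < 0: {r_inh[-1]:.3f}')
```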
###Code
# to_remove explanation
"""
Discussion:
You can check this by going back to the second-to-last interactive demo and setting the
weight to w < 0. You will notice that the system has only one fixed point, which
is at zero. For these dynamics, the system will eventually converge
to zero. But try it out.
""";
###Output
_____no_output_____
###Markdown
--- Bonus 2: Noisy input drives the transition between two stable states Ornstein-Uhlenbeck (OU) processAs discussed in several previous tutorials, the OU process is usually used to generate a noisy input into the neuron. The OU input $\eta(t)$ follows: $$\tau_\eta \frac{d}{dt}\eta(t) = -\eta (t) + \sigma_\eta\sqrt{2\tau_\eta}\xi(t)$$Execute the following function `my_OU(pars, sig, myseed=False)` to generate an OU process.
###Code
# @title OU process `my_OU(pars, sig, myseed=False)`
# @markdown Make sure you execute this cell to visualize the noise!
def my_OU(pars, sig, myseed=False):
"""
  A function that generates an Ornstein-Uhlenbeck process
Args:
pars : parameter dictionary
    sig : noise amplitude
myseed : random seed. int or boolean
Returns:
I : Ornstein-Uhlenbeck input current
"""
# Retrieve simulation parameters
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
tau_ou = pars['tau_ou'] # [ms]
# set random seed
if myseed:
np.random.seed(seed=myseed)
else:
np.random.seed()
# Initialize
noise = np.random.randn(Lt)
I_ou = np.zeros(Lt)
I_ou[0] = noise[0] * sig
# generate OU
for it in range(Lt - 1):
I_ou[it + 1] = (I_ou[it]
+ dt / tau_ou * (0. - I_ou[it])
+ np.sqrt(2 * dt / tau_ou) * sig * noise[it + 1])
return I_ou
pars = default_pars_single(T=100)
pars['tau_ou'] = 1. # [ms]
sig_ou = 0.1
I_ou = my_OU(pars, sig=sig_ou, myseed=2020)
plt.figure(figsize=(10, 4))
plt.plot(pars['range_t'], I_ou, 'r')
plt.xlabel('t (ms)')
plt.ylabel(r'$I_{\mathrm{OU}}$')
plt.show()
###Output
_____no_output_____
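###Markdown
 As a quick added sanity check (not part of the original tutorial), the OU process defined above should fluctuate around zero with a stationary standard deviation close to `sig`. The cell below reuses `I_ou` and `sig_ou` from the previous cell.
###Code
# Added check: empirical mean and standard deviation of the OU sample above.
# With a finite sample these will only be approximately 0 and sig_ou.
print(f'mean(I_ou) = {I_ou.mean():.3f} (expected close to 0)')
print(f'std(I_ou) = {I_ou.std():.3f} (expected close to sig = {sig_ou})')
###Output
_____no_output_____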
###Markdown
Example: Up-Down transitionIn the presence of two or more fixed points, noisy inputs can drive a transition between the fixed points! Here, we stimulate an E population for 1,000 ms applying OU inputs.
###Code
# @title Simulation of an E population with OU inputs
# @markdown Make sure you execute this cell to spot the Up-Down states!
pars = default_pars_single(T=1000)
pars['w'] = 5.0
sig_ou = 0.7
pars['tau_ou'] = 1. # [ms]
pars['I_ext'] = 0.56 + my_OU(pars, sig=sig_ou, myseed=2020)
r = simulate_single(pars)
plt.figure(figsize=(10, 4))
plt.plot(pars['range_t'], r, 'b', alpha=0.8)
plt.xlabel('t (ms)')
plt.ylabel(r'$r(t)$')
plt.show()
###Output
_____no_output_____
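###Markdown
 The following added sketch (not in the original tutorial) gives a rough quantification of the Up-Down switching seen above by thresholding the rate trace; the threshold of 0.5 is an arbitrary value between the two stable branches (roughly 0.04 and 0.9 for these parameters).
###Code
# Added sketch: fraction of time in the Up state and number of transitions,
# computed from the simulated rate r of the previous cell.
up = r > 0.5  # arbitrary threshold between the Down (~0.04) and Up (~0.9) branches
print(f'Fraction of time in the Up state: {up.mean():.2f}')
print(f'Number of Up/Down transitions: {int(np.abs(np.diff(up.astype(int))).sum())}')
###Output
_____no_output_____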
###Markdown
Neuromatch Academy: Week 3, Day 2, Tutorial 1 Neuronal Network Dynamics: Neural Rate Models__Content creators:__ Qinglong Gu, Songtin Li, Arvind Kumar, John Murray, Julijana Gjorgjieva __Content reviewers:__ Maryam Vaziri-Pashkam, Ella Batty, Lorenzo Fontolan, Richard Gao, Spiros Chavlis, Michael Waskom --- Tutorial ObjectivesThe brain is a complex system, not because it is composed of a large number of diverse types of neurons, but mainly because of how neurons are connected to each other. The brain is indeed a network of highly specialized neuronal networks. The activity of a neural network constantly evolves in time. For this reason, neurons can be modeled as dynamical systems. The dynamical system approach is only one of the many modeling approaches that computational neuroscientists have developed (other points of view include information processing, statistical models, etc.). How the dynamics of neuronal networks affect the representation and processing of information in the brain is an open question. However, signatures of altered brain dynamics present in many brain diseases (e.g., in epilepsy or Parkinson's disease) tell us that it is crucial to study network activity dynamics if we want to understand the brain.In this tutorial, we will simulate and study one of the simplest models of biological neuronal networks. Instead of modeling and simulating individual excitatory neurons (e.g., LIF models that you implemented yesterday), we will treat them as a single homogeneous population and approximate their dynamics using a single one-dimensional equation describing the evolution of their average spiking rate in time.In this tutorial, we will learn how to build a firing rate model of a single population of excitatory neurons. **Steps:**- Write the equation for the firing rate dynamics of a 1D excitatory population.- Visualize the response of the population as a function of parameters such as threshold level and gain, using the frequency-current (F-I) curve.- Numerically simulate the dynamics of the excitatory population and find the fixed points of the system. - Investigate the stability of the fixed points by linearizing the dynamics around them. --- Setup
###Code
# Imports
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt # root-finding algorithm
# @title Figure Settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
def plot_fI(x, f):
plt.figure(figsize=(6, 4)) # plot the figure
plt.plot(x, f, 'k')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('F(x)', fontsize=14)
plt.show()
def plot_dr_r(r, drdt, x_fps=None):
plt.figure()
plt.plot(r, drdt, 'k')
plt.plot(r, 0. * r, 'k--')
if x_fps is not None:
plt.plot(x_fps, np.zeros_like(x_fps), "ko", ms=12)
plt.xlabel(r'$r$')
plt.ylabel(r'$\frac{dr}{dt}$', fontsize=20)
plt.ylim(-0.1, 0.1)
def plot_dFdt(x, dFdt):
plt.figure()
plt.plot(x, dFdt, 'r')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('dF(x)', fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
--- Section 1: Neuronal network dynamics
###Code
# @title Video 1: Dynamic networks
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1dh411o7qJ', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1dh411o7qJ
###Markdown
Section 1.1: Dynamics of a single excitatory populationIndividual neurons respond by spiking. When we average the spikes of neurons in a population, we can define the average firing activity of the population. In this model, we are interested in how the population-averaged firing varies as a function of time and network parameters. Mathematically, we can describe the firing rate dynamic as:\begin{align}\tau \frac{dr}{dt} &= -r + F(w\cdot r + I_{\text{ext}}) \quad\qquad (1)\end{align}$r(t)$ represents the average firing rate of the excitatory population at time $t$, $\tau$ controls the timescale of the evolution of the average firing rate, $w$ denotes the strength (synaptic weight) of the recurrent input to the population, $I_{\text{ext}}$ represents the external input, and the transfer function $F(\cdot)$ (which can be related to f-I curve of individual neurons described in the next sections) represents the population activation function in response to all received inputs.To start building the model, please execute the cell below to initialize the simulation parameters.
###Code
# @markdown *Execute this cell to set default parameters for a single excitatory population model*
def default_pars_single(**kwargs):
pars = {}
# Excitatory parameters
pars['tau'] = 1. # Timescale of the E population [ms]
pars['a'] = 1.2 # Gain of the E population
pars['theta'] = 2.8 # Threshold of the E population
# Connection strength
pars['w'] = 0. # E to E, we first set it to 0
# External input
pars['I_ext'] = 0.
# simulation parameters
pars['T'] = 20. # Total duration of simulation [ms]
pars['dt'] = .1 # Simulation time step [ms]
pars['r_init'] = 0.2 # Initial value of E
# External parameters if any
pars.update(kwargs)
# Vector of discretized time points [ms]
pars['range_t'] = np.arange(0, pars['T'], pars['dt'])
return pars
###Output
_____no_output_____
###Markdown
You can now use:- `pars = default_pars_single()` to get all the parameters, and then you can execute `print(pars)` to check these parameters. - `pars = default_pars_single(T=T_sim, dt=time_step)` to set new simulation time and time step- To update an existing parameter dictionary, use `pars['New_para'] = value`Because `pars` is a dictionary, it can be passed to a function that requires individual parameters as arguments using `my_func(**pars)` syntax. Section 1.2: F-I curvesIn electrophysiology, a neuron is often characterized by its spike rate output in response to input currents. This is often called the **F-I** curve, denoting the output spike frequency (**F**) in response to different injected currents (**I**). We estimated this for an LIF neuron in yesterday's tutorial.The transfer function $F(\cdot)$ in Equation $1$ represents the gain of the population as a function of the total input. The gain is often modeled as a sigmoidal function, i.e., more input drive leads to a nonlinear increase in the population firing rate. The output firing rate will eventually saturate for high input values. A sigmoidal $F(\cdot)$ is parameterized by its gain $a$ and threshold $\theta$.$$ F(x;a,\theta) = \frac{1}{1+\text{e}^{-a(x-\theta)}} - \frac{1}{1+\text{e}^{a\theta}} \quad(2)$$The argument $x$ represents the input to the population. Note that the second term is chosen so that $F(0;a,\theta)=0$.Many other transfer functions (generally monotonic) can be also used. Examples are the rectified linear function $ReLU(x)$ or the hyperbolic tangent $tanh(x)$. Exercise 1: Implement F-I curve Let's first investigate the activation functions before simulating the dynamics of the entire population. In this exercise, you will implement a sigmoidal **F-I** curve or transfer function $F(x)$, with gain $a$ and threshold level $\theta$ as parameters.
###Code
def F(x, a, theta):
"""
Population activation function.
Args:
x (float): the population input
a (float): the gain of the function
theta (float): the threshold of the function
Returns:
float: the population activation response F(x) for input x
"""
#################################################
## TODO for students: compute f = F(x) ##
# Fill out function and remove
raise NotImplementedError("Student excercise: implement the f-I function")
#################################################
# Define the sigmoidal transfer function f = F(x)
f = ...
return f
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
# Uncomment below to test your function
# f = F(x, pars['a'], pars['theta'])
# plot_fI(x, f)
# to_remove solution
def F(x, a, theta):
"""
Population activation function.
Args:
x (float): the population input
a (float): the gain of the function
theta (float): the threshold of the function
Returns:
float: the population activation response F(x) for input x
"""
# Define the sigmoidal transfer function f = F(x)
f = (1 + np.exp(-a * (x - theta)))**-1 - (1 + np.exp(a * theta))**-1
return f
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
# Uncomment below to test your function
f = F(x, pars['a'], pars['theta'])
with plt.xkcd():
plot_fI(x, f)
###Output
_____no_output_____
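###Markdown
 The cell below is an added check (not part of the original exercise): by construction of Equation (2), $F(0;a,\theta)=0$, and the sigmoid keeps $F$ bounded below 1. It reuses `F`, `pars`, and `f` from the solution above.
###Code
# Added check: F(0) should be exactly zero and F should stay below 1.
print(f'F(0) = {F(0, pars["a"], pars["theta"]):.6f} (should be 0)')
print(f'max F(x) on the plotted range = {f.max():.3f} (bounded below 1)')
###Output
_____no_output_____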
###Markdown
Interactive Demo: Parameter exploration of F-I curveHere's an interactive demo that shows how the F-I curve changes for different values of the gain and threshold parameters. How do the gain and threshold parameters affect the F-I curve?
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
def interactive_plot_FI(a, theta):
"""
Population activation function.
  Expects:
a : the gain of the function
theta : the threshold of the function
Returns:
    plot the F-I curve with given parameters
"""
# set the range of input
x = np.arange(0, 10, .1)
plt.figure()
plt.plot(x, F(x, a, theta), 'k')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('F(x)', fontsize=14)
plt.show()
_ = widgets.interact(interactive_plot_FI, a=(0.3, 3, 0.3), theta=(2, 4, 0.2))
# to_remove explanation
"""
Discussion:
For the function we have chosen to model the F-I curve (eq 2),
- a determines the slope (gain) of the rising phase of the F-I curve
- theta determines the input at which the function F(x) reaches its mid-value (0.5).
That is, theta shifts the F-I curve along the horizontal axis.
For the neurons we are using in this tutorial:
- a controls the gain of the neuron population
- theta controls the threshold at which the neuron population starts to respond
""";
###Output
_____no_output_____
###Markdown
 Section 1.3: Simulation scheme of E dynamicsBecause $F(\cdot)$ is a nonlinear function, the exact solution of Equation $1$ cannot be determined via analytical methods. Therefore, numerical methods must be used to find the solution. In practice, the derivative on the left-hand side of Equation $1$ can be approximated using the Euler method on a time-grid of stepsize $\Delta t$:\begin{align}&\frac{dr}{dt} \approx \frac{r[k+1]-r[k]}{\Delta t} \end{align}where $r[k] = r(k\Delta t)$. Thus,$$\Delta r[k] = \frac{\Delta t}{\tau}[-r[k] + F(w\cdot r[k] + I_{\text{ext}}[k];a,\theta)]$$Hence, Equation (1) is updated at each time step by:$$r[k+1] = r[k] + \Delta r[k]$$
###Code
# @markdown *Execute this cell to enable the single population rate model simulator: `simulate_single`*
def simulate_single(pars):
"""
Simulate an excitatory population of neurons
Args:
pars : Parameter dictionary
Returns:
rE : Activity of excitatory population (array)
Example:
pars = default_pars_single()
r = simulate_single(pars)
"""
# Set parameters
tau, a, theta = pars['tau'], pars['a'], pars['theta']
w = pars['w']
I_ext = pars['I_ext']
r_init = pars['r_init']
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
# Initialize activity
r = np.zeros(Lt)
r[0] = r_init
I_ext = I_ext * np.ones(Lt)
# Update the E activity
for k in range(Lt - 1):
dr = dt / tau * (-r[k] + F(w * r[k] + I_ext[k], a, theta))
r[k+1] = r[k] + dr
return r
help(simulate_single)
###Output
Help on function simulate_single in module __main__:
simulate_single(pars)
Simulate an excitatory population of neurons
Args:
pars : Parameter dictionary
Returns:
rE : Activity of excitatory population (array)
Example:
pars = default_pars_single()
r = simulate_single(pars)
###Markdown
Interactive Demo: Parameter Exploration of single population dynamicsNote that $w=0$, as in the default setting, means no recurrent input to the neuron population in Equation (1). Hence, the dynamics are entirely determined by the external input $I_{\text{ext}}$. Explore these dynamics in this interactive demo.How does $r_{\text{sim}}(t)$ change with different $I_{\text{ext}}$ values? How does it change with different $\tau$ values? Investigate the relationship between $F(I_{\text{ext}}; a, \theta)$ and the steady value of $r(t)$. Note that, $r_{\rm ana}(t)$ denotes the analytical solution - you will learn how this is computed in the next section.
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
# get default parameters
pars = default_pars_single(T=20.)
def Myplot_E_diffI_difftau(I_ext, tau):
# set external input and time constant
pars['I_ext'] = I_ext
pars['tau'] = tau
# simulation
r = simulate_single(pars)
# Analytical Solution
r_ana = (pars['r_init']
+ (F(I_ext, pars['a'], pars['theta'])
- pars['r_init']) * (1. - np.exp(-pars['range_t'] / pars['tau'])))
# plot
plt.figure()
plt.plot(pars['range_t'], r, 'b', label=r'$r_{\mathrm{sim}}$(t)', alpha=0.5,
zorder=1)
plt.plot(pars['range_t'], r_ana, 'b--', lw=5, dashes=(2, 2),
label=r'$r_{\mathrm{ana}}$(t)', zorder=2)
plt.plot(pars['range_t'],
F(I_ext, pars['a'], pars['theta']) * np.ones(pars['range_t'].size),
'k--', label=r'$F(I_{\mathrm{ext}})$')
plt.xlabel('t (ms)', fontsize=16.)
plt.ylabel('Activity r(t)', fontsize=16.)
plt.legend(loc='best', fontsize=14.)
plt.show()
_ = widgets.interact(Myplot_E_diffI_difftau, I_ext=(0.0, 10., 1.),
tau=(1., 5., 0.2))
# to_remove explanation
"""
Discussion:
Given the choice of F-I curve (eq 2) and dynamics of the neuron population (eq. 1)
the neurons have two fixed points or steady-state responses irrespective of the input.
- Weak inputs to the neurons eventually result in the activity converging to zero
- Strong inputs to the neurons eventually result in the activity converging to max value
The time constant tau does not affect the steady-state response, but it determines
the time the neurons take to reach their fixed point.
""";
###Output
_____no_output_____
###Markdown
Think!Above, we have numerically solved a system driven by a positive input and that, if $w_{EE} \neq 0$, receives an excitatory recurrent input (**extra challenge: try changing the value of $w_{EE}$ to a positive number and plotting the results of simulate_single**). Yet, $r_E(t)$ either decays to zero or reaches a fixed non-zero value.- Why doesn't the solution of the system "explode" in a finite time? In other words, what guarantees that $r_E$(t) stays finite? - Which parameter would you change in order to increase the maximum value of the response?
###Code
# to_remove explanation
"""
Discussion:
1) As the F-I curve is bounded between zero and one, the system doesn't explode.
The f-curve guarantees this property
2) One way to increase the maximum response is to change the f-I curve. For
example, the ReLU is an unbounded function, and thus will increase the overall maximal
response of the network.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: Fixed points of the single population system
###Code
# @title Video 2: Fixed point
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1v54y1v7Gr', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1v54y1v7Gr
###Markdown
 As you varied the two parameters in the last Interactive Demo, you noticed that, while at first the system output quickly changes, with time, it reaches its maximum/minimum value and does not change anymore. The value eventually reached by the system is called the **steady state** of the system, or the **fixed point**. Essentially, in the steady state the derivative of the activity ($r$) with respect to time is zero, i.e. $\displaystyle \frac{dr}{dt}=0$. We can find the steady state of Equation (1) by setting $\displaystyle{\frac{dr}{dt}=0}$ and solving for $r$:$$-r_{\text{steady}} + F(w\cdot r_{\text{steady}} + I_{\text{ext}};a,\theta) = 0, \qquad (3)$$When it exists, the solution of Equation (3) defines a **fixed point** of the dynamical system in Equation (1). Note that if $F(x)$ is nonlinear, it is not always possible to find an analytical solution, but the solution can be found via numerical simulations, as we will do later.From the Interactive Demo, one could also notice that the value of $\tau$ influences how quickly the activity will converge to the steady state from its initial value. In the specific case of $w=0$, we can also analytically compute the solution of Equation (1) (i.e., the thick blue dashed line) and deduce the role of $\tau$ in determining the convergence to the fixed point: $$\displaystyle{r(t) = \big{[}F(I_{\text{ext}};a,\theta) -r(t=0)\big{]} (1-\text{e}^{-\frac{t}{\tau}})} + r(t=0)$$We can now numerically calculate the fixed point with a root-finding algorithm. Exercise 2: Visualization of the fixed pointsWhen it is not possible to find the solution of Equation (3) analytically, a graphical approach can be taken. To that end, it is useful to plot $\displaystyle{\frac{dr}{dt}}$ as a function of $r$. The values of $r$ for which the plotted function crosses zero on the y axis correspond to fixed points. Here, let us, for example, set $w=5.0$ and $I^{\text{ext}}=0.5$. From Equation (1), you can obtain$$\frac{dr}{dt} = [-r + F(w\cdot r + I^{\text{ext}})]\,/\,\tau $$Then, plot $dr/dt$ as a function of $r$, and check for the presence of fixed points.
###Code
def compute_drdt(r, I_ext, w, a, theta, tau, **other_pars):
"""Given parameters, compute dr/dt as a function of r.
Args:
r (1D array) : Average firing rate of the excitatory population
I_ext, w, a, theta, tau (numbers): Simulation parameters to use
other_pars : Other simulation parameters are unused by this function
Returns
drdt function for each value of r
"""
#########################################################################
# TODO compute drdt and disable the error
raise NotImplementedError("Finish the compute_drdt function")
#########################################################################
# Calculate drdt
drdt = ...
return drdt
# Define a vector of r values and the simulation parameters
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
# Uncomment to test your function
# drdt = compute_drdt(r, **pars)
# plot_dr_r(r, drdt)
# to_remove solution
def compute_drdt(r, I_ext, w, a, theta, tau, **other_pars):
"""Given parameters, compute dr/dt as a function of r.
Args:
r (1D array) : Average firing rate of the excitatory population
I_ext, w, a, theta, tau (numbers): Simulation parameters to use
other_pars : Other simulation parameters are unused by this function
Returns
drdt function for each value of r
"""
# Calculate drdt
drdt = (-r + F(w * r + I_ext, a, theta)) / tau
return drdt
# Define a vector of r values and the simulation parameters
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
drdt = compute_drdt(r, **pars)
with plt.xkcd():
plot_dr_r(r, drdt)
###Output
_____no_output_____
###Markdown
 Exercise 3: Fixed point calculationWe will now find the fixed points numerically. To do so, we need to specify initial values ($r_{\text{guess}}$) for the root-finding algorithm to start from. From the line $\displaystyle{\frac{dr}{dt}}$ plotted above in Exercise 2, initial values can be chosen as a set of values close to where the line crosses zero on the y axis (the real fixed points).The next cell defines three helper functions that we will use:- `my_fp_single(r_guess, **pars)` uses a root-finding algorithm to locate a fixed point near a given initial value- `check_fp_single(x_fp, **pars)` verifies that the values of $r_{\rm fp}$ for which $\displaystyle{\frac{dr}{dt}} = 0$ are the true fixed points- `my_fp_finder(pars, r_guess_vector)` accepts a parameter dictionary and an array of initial values and returns the fixed points found from them, using the above two functions
###Code
# @markdown *Execute this cell to enable the fixed point functions*
def my_fp_single(r_guess, a, theta, w, I_ext, **other_pars):
"""
Calculate the fixed point through drE/dt=0
Args:
r_guess : Initial value used for scipy.optimize function
a, theta, w, I_ext : simulation parameters
Returns:
x_fp : value of fixed point
"""
# define the right hand of E dynamics
def my_WCr(x):
r = x
drdt = (-r + F(w * r + I_ext, a, theta))
y = np.array(drdt)
return y
x0 = np.array(r_guess)
x_fp = opt.root(my_WCr, x0).x.item()
return x_fp
def check_fp_single(x_fp, a, theta, w, I_ext, mytol=1e-4, **other_pars):
"""
Verify |dr/dt| < mytol
Args:
fp : value of fixed point
a, theta, w, I_ext: simulation parameters
mytol : tolerance, default as 10^{-4}
Returns :
Whether it is a correct fixed point: True/False
"""
# calculate Equation(3)
y = x_fp - F(w * x_fp + I_ext, a, theta)
# Here we set tolerance as 10^{-4}
return np.abs(y) < mytol
def my_fp_finder(pars, r_guess_vector, mytol=1e-4):
"""
Calculate the fixed point(s) through drE/dt=0
Args:
pars : Parameter dictionary
r_guess_vector : Initial values used for scipy.optimize function
mytol : tolerance for checking fixed point, default as 10^{-4}
Returns:
x_fps : values of fixed points
"""
x_fps = []
for r_guess in r_guess_vector:
x_fp = my_fp_single(r_guess, **pars)
if check_fp_single(x_fp, **pars, mytol=mytol):
x_fps.append(x_fp)
return x_fps
help(my_fp_finder)
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
drdt = compute_drdt(r, **pars)
#############################################################################
# TODO for students:
# Define initial values close to the intersections of drdt and y=0
# (How many initial values? Hint: How many times do the two lines intersect?)
# Calculate the fixed point with these initial values and plot them
#############################################################################
r_guess_vector = [...]
# Uncomment to test your values
# x_fps = my_fp_finder(pars, r_guess_vector)
# plot_dr_r(r, drdt, x_fps)
# to_remove solution
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
drdt = compute_drdt(r, **pars)
r_guess_vector = [0, .4, .9]
x_fps = my_fp_finder(pars, r_guess_vector)
with plt.xkcd():
plot_dr_r(r, drdt, x_fps)
###Output
_____no_output_____
###Markdown
Interactive Demo: fixed points as a function of recurrent and external inputs.You can now explore how the previous plot changes when the recurrent coupling $w$ and the external input $I_{\text{ext}}$ take different values. How does the number of fixed points change?
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
def plot_intersection_single(w, I_ext):
# set your parameters
pars = default_pars_single(w=w, I_ext=I_ext)
# find fixed points
r_init_vector = [0, .4, .9]
x_fps = my_fp_finder(pars, r_init_vector)
# plot
r = np.linspace(0, 1., 1000)
drdt = (-r + F(w * r + I_ext, pars['a'], pars['theta'])) / pars['tau']
plot_dr_r(r, drdt, x_fps)
_ = widgets.interact(plot_intersection_single, w=(1, 7, 0.2),
I_ext=(0, 3, 0.1))
# to_remove explanation
"""
Discussion:
The fixed points of the single excitatory neuron population are determined by both
recurrent connections w and external input I_ext. In a previous interactive demo
we saw how the system showed two different steady-states when w = 0. But when w
does not equal 0, for some range of w the system shows three fixed points (the middle
one being unstable) and the steady state depends on the initial conditions (i.e.
r at time zero).
More on this will be explained in the next section.
""";
###Output
_____no_output_____
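###Markdown
 The following added sketch (not part of the original demo) sweeps the recurrent weight at a fixed external input and counts the distinct fixed points found by `my_fp_finder`; the guesses and the rounding used to merge duplicates are ad hoc illustration choices.
###Code
# Added sketch: number of distinct fixed points as a function of w (I_ext = 0.5).
for w_test in [1.0, 3.0, 5.0, 7.0]:
  pars_w = default_pars_single(w=w_test, I_ext=0.5)
  fps_w = np.unique(np.round(my_fp_finder(pars_w, [0, .4, .9]), 3))
  print(f'w = {w_test:.1f}: {len(fps_w)} fixed point(s)')
###Output
_____no_output_____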
###Markdown
--- SummaryIn this tutorial, we have investigated the dynamics of a rate-based single population of neurons.We learned about:- The effect of the input parameters and the time constant of the network on the dynamics of the population.- How to find the fixed point(s) of the system.Next, we have two Bonus, but important concepts in dynamical system analysis and simulation. If you have time left, watch the next video and proceed to solve the exercises. You will learn:- How to determine the stability of a fixed point by linearizing the system.- How to add realistic inputs to our model. --- Bonus 1: Stability of a fixed point
###Code
# @title Video 3: Stability of fixed points
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1oA411e7eg', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1oA411e7eg
###Markdown
Initial values and trajectoriesHere, let us first set $w=5.0$ and $I_{\text{ext}}=0.5$, and investigate the dynamics of $r(t)$ starting with different initial values $r(0) \equiv r_{\text{init}}$. We will plot the trajectories of $r(t)$ with $r_{\text{init}} = 0.0, 0.1, 0.2,..., 0.9$.
###Code
# @markdown Execute this cell to see the trajectories!
pars = default_pars_single()
pars['w'] = 5.0
pars['I_ext'] = 0.5
plt.figure(figsize=(8, 5))
for ie in range(10):
pars['r_init'] = 0.1 * ie # set the initial value
r = simulate_single(pars) # run the simulation
# plot the activity with given initial
plt.plot(pars['range_t'], r, 'b', alpha=0.1 + 0.1 * ie,
label=r'r$_{\mathrm{init}}$=%.1f' % (0.1 * ie))
plt.xlabel('t (ms)')
plt.title('Two steady states?')
plt.ylabel(r'$r$(t)')
plt.legend(loc=[1.01, -0.06], fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
Interactive Demo: dynamics as a function of the initial valueLet's now set $r_{\rm init}$ to a value of your choice in this demo. How does the solution change? What do you observe?
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
pars = default_pars_single(w=5.0, I_ext=0.5)
def plot_single_diffEinit(r_init):
pars['r_init'] = r_init
r = simulate_single(pars)
plt.figure()
plt.plot(pars['range_t'], r, 'b', zorder=1)
plt.plot(0, r[0], 'bo', alpha=0.7, zorder=2)
plt.xlabel('t (ms)', fontsize=16)
plt.ylabel(r'$r(t)$', fontsize=16)
plt.ylim(0, 1.0)
plt.show()
_ = widgets.interact(plot_single_diffEinit, r_init=(0, 1, 0.02))
# to_remove explanation
"""
Discussion:
To better appreciate what is happening here, you should go back to the previous
interactive demo. Set the w = 5 and I_ext = 0.5.
You will find that there are three fixed points of the system for these values of
w and I_ext. Now, choose the initial value in this demo and see in which direction
the system output moves. When r_init is in the vicinity of the leftmost fixed point,
it moves towards that leftmost fixed point. When r_init is in the vicinity of the
rightmost fixed point, it moves towards the rightmost fixed point.
""";
###Output
_____no_output_____
###Markdown
Stability analysis via linearization of the dynamicsJust like Equation $1$ in the case ($w=0$) discussed above, a generic linear system $$\frac{dx}{dt} = \lambda (x - b),$$ has a fixed point for $x=b$. The analytical solution of such a system can be found to be:$$x(t) = b + \big{(} x(0) - b \big{)} \text{e}^{\lambda t}.$$ Now consider a small perturbation of the activity around the fixed point: $x(0) = b+ \epsilon$, where $|\epsilon| \ll 1$. Will the perturbation $\epsilon(t)$ grow with time or will it decay to the fixed point? The evolution of the perturbation with time can be written, using the analytical solution for $x(t)$, as: $$\epsilon (t) = x(t) - b = \epsilon \text{e}^{\lambda t}$$- if $\lambda < 0$, $\epsilon(t)$ decays to zero, $x(t)$ will still converge to $b$ and the fixed point is "**stable**".- if $\lambda > 0$, $\epsilon(t)$ grows with time, $x(t)$ will leave the fixed point $b$ exponentially, and the fixed point is, therefore, "**unstable**" . Compute the stability of Equation $1$Similar to what we did in the linear system above, in order to determine the stability of a fixed point $r^{*}$ of the excitatory population dynamics, we perturb Equation (1) around $r^{*}$ by $\epsilon$, i.e. $r = r^{*} + \epsilon$. We can plug in Equation (1) and obtain the equation determining the time evolution of the perturbation $\epsilon(t)$:\begin{align}\tau \frac{d\epsilon}{dt} \approx -\epsilon + w F'(w\cdot r^{*} + I_{\text{ext}};a,\theta) \epsilon \end{align}where $F'(\cdot)$ is the derivative of the transfer function $F(\cdot)$. We can rewrite the above equation as:\begin{align}\frac{d\epsilon}{dt} \approx \frac{\epsilon}{\tau }[-1 + w F'(w\cdot r^* + I_{\text{ext}};a,\theta)] \end{align}That is, as in the linear system above, the value of$$\lambda = [-1+ wF'(w\cdot r^* + I_{\text{ext}};a,\theta)]/\tau \qquad (4)$$determines whether the perturbation will grow or decay to zero, i.e., $\lambda$ defines the stability of the fixed point. This value is called the **eigenvalue** of the dynamical system. Exercise 4: Compute $dF$The derivative of the sigmoid transfer function is:\begin{align} \frac{dF}{dx} & = \frac{d}{dx} (1+\exp\{-a(x-\theta)\})^{-1} \\& = a\exp\{-a(x-\theta)\} (1+\exp\{-a(x-\theta)\})^{-2}. \qquad (5)\end{align}Let's now find the expression for the derivative $\displaystyle{\frac{dF}{dx}}$ in the following cell and plot it.
###Code
def dF(x, a, theta):
"""
Population activation function.
Args:
x : the population input
a : the gain of the function
theta : the threshold of the function
Returns:
    dFdx : the derivative of the population activation function F at input x
"""
###########################################################################
# TODO for students: compute dFdx ##
raise NotImplementedError("Student excercise: compute the deravitive of F")
###########################################################################
# Calculate the population activation
dFdx = ...
return dFdx
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
# Uncomment below to test your function
# df = dF(x, pars['a'], pars['theta'])
# plot_dFdt(x, df)
# to_remove solution
def dF(x, a, theta):
"""
Population activation function.
Args:
x : the population input
a : the gain of the function
theta : the threshold of the function
Returns:
    dFdx : the derivative of the population activation function F at input x
"""
# Calculate the population activation
dFdx = a * np.exp(-a * (x - theta)) * (1 + np.exp(-a * (x - theta)))**-2
return dFdx
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
df = dF(x, pars['a'], pars['theta'])
with plt.xkcd():
plot_dFdt(x, df)
###Output
_____no_output_____
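###Markdown
 The cell below is an added sanity check (not part of the original exercise): it compares the analytical derivative `dF` against a central-difference approximation of `F`; the step size `h` is an arbitrary choice.
###Code
# Added check: numerical (central-difference) derivative of F vs. analytical dF.
h = 1e-5
df_numeric = (F(x + h, pars['a'], pars['theta'])
              - F(x - h, pars['a'], pars['theta'])) / (2 * h)
print(f'max |analytical - numerical| = {np.abs(df - df_numeric).max():.2e}')
###Output
_____no_output_____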
###Markdown
 Exercise 5: Compute eigenvaluesAs discussed above, for the case with $w=5.0$ and $I_{\text{ext}}=0.5$, the system displays **three** fixed points. However, when we simulated the dynamics and varied the initial conditions $r_{\rm init}$, we could only obtain **two** steady states. In this exercise, we will check the stability of each of the three fixed points by calculating the corresponding eigenvalues with the function `eig_single`. Check the sign of each eigenvalue (i.e., the stability of each fixed point). How many of the fixed points are stable?Note that the expression for the eigenvalue at a fixed point $r^*$ is$$\lambda = [-1+ wF'(w\cdot r^* + I_{\text{ext}};a,\theta)]/\tau$$
###Code
def eig_single(fp, tau, a, theta, w, I_ext, **other_pars):
"""
Args:
fp : fixed point r_fp
tau, a, theta, w, I_ext : Simulation parameters
Returns:
    eig : eigenvalue of the linearized system
"""
#####################################################################
## TODO for students: compute eigenvalue and disable the error
raise NotImplementedError("Student excercise: compute the eigenvalue")
######################################################################
# Compute the eigenvalue
eig = ...
return eig
# Find the eigenvalues for all fixed points of Exercise 2
pars = default_pars_single(w=5, I_ext=.5)
r_guess_vector = [0, .4, .9]
x_fp = my_fp_finder(pars, r_guess_vector)
# Uncomment below lines after completing the eig_single function.
# for i, fp in enumerate(x_fp):
#   eig_fp = eig_single(fp, **pars)
#   print(f'Fixed point{i + 1} at {fp:.3f} with Eigenvalue={eig_fp:.3f}')
###Output
_____no_output_____
###Markdown
**SAMPLE OUTPUT**```Fixed point1 at 0.042 with Eigenvalue=-0.583Fixed point2 at 0.447 with Eigenvalue=0.498Fixed point3 at 0.900 with Eigenvalue=-0.626```
###Code
# to_remove solution
def eig_single(fp, tau, a, theta, w, I_ext, **other_pars):
"""
Args:
fp : fixed point r_fp
tau, a, theta, w, I_ext : Simulation parameters
Returns:
    eig : eigenvalue of the linearized system
"""
# Compute the eigenvalue
eig = (-1. + w * dF(w * fp + I_ext, a, theta)) / tau
return eig
# Find the eigenvalues for all fixed points of Exercise 2
pars = default_pars_single(w=5, I_ext=.5)
r_guess_vector = [0, .4, .9]
x_fp = my_fp_finder(pars, r_guess_vector)
for i, fp in enumerate(x_fp):
  eig_fp = eig_single(fp, **pars)
  print(f'Fixed point{i + 1} at {fp:.3f} with Eigenvalue={eig_fp:.3f}')
###Output
Fixed point1 at 0.042 with Eigenvalue=-0.583
Fixed point2 at 0.447 with Eigenvalue=0.498
Fixed point3 at 0.900 with Eigenvalue=-0.626
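###Markdown
 The cell below is an added sketch (not part of the original exercise): it cross-checks the eigenvalue-based classification by nudging the system slightly away from each fixed point and simulating; the perturbation size and simulation length are arbitrary illustration choices.
###Code
# Added sketch: perturb each fixed point by +0.01 and see where the dynamics end up.
pars_chk = default_pars_single(w=5, I_ext=.5, T=50.)
for fp in x_fp:
  pars_chk['r_init'] = fp + 0.01
  r_end = simulate_single(pars_chk)[-1]
  back = np.abs(r_end - fp) < 0.05
  print(f'start near {fp:.3f} -> end at {r_end:.3f} '
        f'({"returns to" if back else "moves away from"} this fixed point)')
###Output
_____no_output_____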
###Markdown
Think! Throughout the tutorial, we have assumed $w> 0 $, i.e., we considered a single population of **excitatory** neurons. What do you think will be the behavior of a population of inhibitory neurons, i.e., where $w> 0$ is replaced by $w< 0$?
###Code
# to_remove explanation
"""
Discussion:
You can check this by going back to the second-to-last interactive demo and setting the
weight to w<0. You will notice that the system has only one fixed point, and that
fixed point is at zero. For these dynamics, the system will eventually converge
to zero. But try it out.
""";
###Output
_____no_output_____
###Markdown
--- Bonus 2: Noisy input drives the transition between two stable states Ornstein-Uhlenbeck (OU) processAs discussed in several previous tutorials, the OU process is usually used to generate a noisy input into the neuron. The OU input $\eta(t)$ follows: $$\tau_\eta \frac{d}{dt}\eta(t) = -\eta (t) + \sigma_\eta\sqrt{2\tau_\eta}\xi(t)$$Execute the following function `my_OU(pars, sig, myseed=False)` to generate an OU process.
###Code
# @title OU process `my_OU(pars, sig, myseed=False)`
# @markdown Make sure you execute this cell to visualize the noise!
def my_OU(pars, sig, myseed=False):
"""
  A function that generates an Ornstein-Uhlenbeck process
Args:
pars : parameter dictionary
    sig : noise amplitude
myseed : random seed. int or boolean
Returns:
I : Ornstein-Uhlenbeck input current
"""
# Retrieve simulation parameters
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
tau_ou = pars['tau_ou'] # [ms]
# set random seed
if myseed:
np.random.seed(seed=myseed)
else:
np.random.seed()
# Initialize
noise = np.random.randn(Lt)
I_ou = np.zeros(Lt)
I_ou[0] = noise[0] * sig
# generate OU
for it in range(Lt - 1):
I_ou[it + 1] = (I_ou[it]
+ dt / tau_ou * (0. - I_ou[it])
+ np.sqrt(2 * dt / tau_ou) * sig * noise[it + 1])
return I_ou
pars = default_pars_single(T=100)
pars['tau_ou'] = 1. # [ms]
sig_ou = 0.1
I_ou = my_OU(pars, sig=sig_ou, myseed=2020)
plt.figure(figsize=(10, 4))
plt.plot(pars['range_t'], I_ou, 'r')
plt.xlabel('t (ms)')
plt.ylabel(r'$I_{\mathrm{OU}}$')
plt.show()
###Output
_____no_output_____
###Markdown
Example: Up-Down transitionIn the presence of two or more fixed points, noisy inputs can drive a transition between the fixed points! Here, we stimulate an E population for 1,000 ms applying OU inputs.
###Code
# @title Simulation of an E population with OU inputs
# @markdown Make sure you execute this cell to spot the Up-Down states!
pars = default_pars_single(T=1000)
pars['w'] = 5.0
sig_ou = 0.7
pars['tau_ou'] = 1. # [ms]
pars['I_ext'] = 0.56 + my_OU(pars, sig=sig_ou, myseed=2020)
r = simulate_single(pars)
plt.figure(figsize=(10, 4))
plt.plot(pars['range_t'], r, 'b', alpha=0.8)
plt.xlabel('t (ms)')
plt.ylabel(r'$r(t)$')
plt.show()
###Output
_____no_output_____
###Markdown
Neuromatch Academy: Week 3, Day 2, Tutorial 1 Neuronal Network Dynamics: Neural Rate Models__Content creators:__ Qinglong Gu, Songtin Li, Arvind Kumar, John Murray, Julijana Gjorgjieva __Content reviewers:__ Maryam Vaziri-Pashkam, Ella Batty, Lorenzo Fontolan, Richard Gao, Spiros Chavlis, Michael Waskom --- Tutorial ObjectivesThe brain is a complex system, not because it is composed of a large number of diverse types of neurons, but mainly because of how neurons are connected to each other. The brain is indeed a network of highly specialized neuronal networks. The activity of a neural network constantly evolves in time. For this reason, neurons can be modeled as dynamical systems. The dynamical system approach is only one of the many modeling approaches that computational neuroscientists have developed (other points of view include information processing, statistical models, etc.). How the dynamics of neuronal networks affect the representation and processing of information in the brain is an open question. However, signatures of altered brain dynamics present in many brain diseases (e.g., in epilepsy or Parkinson's disease) tell us that it is crucial to study network activity dynamics if we want to understand the brain.In this tutorial, we will simulate and study one of the simplest models of biological neuronal networks. Instead of modeling and simulating individual excitatory neurons (e.g., LIF models that you implemented yesterday), we will treat them as a single homogeneous population and approximate their dynamics using a single one-dimensional equation describing the evolution of their average spiking rate in time.In this tutorial, we will learn how to build a firing rate model of a single population of excitatory neurons. **Steps:**- Write the equation for the firing rate dynamics of a 1D excitatory population.- Visualize the response of the population as a function of parameters such as threshold level and gain, using the frequency-current (F-I) curve.- Numerically simulate the dynamics of the excitatory population and find the fixed points of the system. - Investigate the stability of the fixed points by linearizing the dynamics around them. --- Setup
###Code
# Imports
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt # root-finding algorithm
# @title Figure Settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
def plot_fI(x, f):
plt.figure(figsize=(6, 4)) # plot the figure
plt.plot(x, f, 'k')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('F(x)', fontsize=14)
plt.show()
def plot_dr_r(r, drdt, x_fps=None):
plt.figure()
plt.plot(r, drdt, 'k')
plt.plot(r, 0. * r, 'k--')
if x_fps is not None:
plt.plot(x_fps, np.zeros_like(x_fps), "ko", ms=12)
plt.xlabel(r'$r$')
plt.ylabel(r'$\frac{dr}{dt}$', fontsize=20)
plt.ylim(-0.1, 0.1)
def plot_dFdt(x, dFdt):
plt.figure()
plt.plot(x, dFdt, 'r')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('dF(x)', fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
--- Section 1: Neuronal network dynamics
###Code
# @title Video 1: Dynamic networks
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="p848349hPyw", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
_____no_output_____
###Markdown
Section 1.1: Dynamics of a single excitatory populationIndividual neurons respond by spiking. When we average the spikes of neurons in a population, we can define the average firing activity of the population. In this model, we are interested in how the population-averaged firing varies as a function of time and network parameters. Mathematically, we can describe the firing rate dynamic as:\begin{align}\tau \frac{dr}{dt} &= -r + F(w\cdot r + I_{\text{ext}}) \quad\qquad (1)\end{align}$r(t)$ represents the average firing rate of the excitatory population at time $t$, $\tau$ controls the timescale of the evolution of the average firing rate, $w$ denotes the strength (synaptic weight) of the recurrent input to the population, $I_{\text{ext}}$ represents the external input, and the transfer function $F(\cdot)$ (which can be related to f-I curve of individual neurons described in the next sections) represents the population activation function in response to all received inputs.To start building the model, please execute the cell below to initialize the simulation parameters.
###Code
# @markdown *Execute this cell to set default parameters for a single excitatory population model*
def default_pars_single(**kwargs):
pars = {}
# Excitatory parameters
pars['tau'] = 1. # Timescale of the E population [ms]
pars['a'] = 1.2 # Gain of the E population
pars['theta'] = 2.8 # Threshold of the E population
# Connection strength
pars['w'] = 0. # E to E, we first set it to 0
# External input
pars['I_ext'] = 0.
# simulation parameters
pars['T'] = 20. # Total duration of simulation [ms]
pars['dt'] = .1 # Simulation time step [ms]
pars['r_init'] = 0.2 # Initial value of E
# External parameters if any
pars.update(kwargs)
# Vector of discretized time points [ms]
pars['range_t'] = np.arange(0, pars['T'], pars['dt'])
return pars
###Output
_____no_output_____
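###Markdown
 The cell below is a small added illustration (not part of the original tutorial) of how the parameter dictionary returned above can be inspected, overridden, and unpacked with `**`; the names `pars_demo` and `print_gain` exist only for this sketch.
###Code
# Added illustration: override defaults, update an entry, and unpack with **.
pars_demo = default_pars_single(T=10., dt=0.5)  # override simulation settings
print('tau =', pars_demo['tau'], 'ms, number of time points:', pars_demo['range_t'].size)
pars_demo['I_ext'] = 1.0  # update an existing entry
def print_gain(a, theta, **other_pars):
  print(f'gain a = {a}, threshold theta = {theta}')
print_gain(**pars_demo)  # the extra keys are absorbed by **other_pars
###Output
_____no_output_____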
###Markdown
You can now use:- `pars = default_pars_single()` to get all the parameters, and then you can execute `print(pars)` to check these parameters. - `pars = default_pars_single(T=T_sim, dt=time_step)` to set new simulation time and time step- To update an existing parameter dictionary, use `pars['New_para'] = value`Because `pars` is a dictionary, it can be passed to a function that requires individual parameters as arguments using `my_func(**pars)` syntax. Section 1.2: F-I curvesIn electrophysiology, a neuron is often characterized by its spike rate output in response to input currents. This is often called the **F-I** curve, denoting the output spike frequency (**F**) in response to different injected currents (**I**). We estimated this for an LIF neuron in yesterday's tutorial.The transfer function $F(\cdot)$ in Equation $1$ represents the gain of the population as a function of the total input. The gain is often modeled as a sigmoidal function, i.e., more input drive leads to a nonlinear increase in the population firing rate. The output firing rate will eventually saturate for high input values. A sigmoidal $F(\cdot)$ is parameterized by its gain $a$ and threshold $\theta$.$$ F(x;a,\theta) = \frac{1}{1+\text{e}^{-a(x-\theta)}} - \frac{1}{1+\text{e}^{a\theta}} \quad(2)$$The argument $x$ represents the input to the population. Note that the second term is chosen so that $F(0;a,\theta)=0$.Many other transfer functions (generally monotonic) can be also used. Examples are the rectified linear function $ReLU(x)$ or the hyperbolic tangent $tanh(x)$. Exercise 1: Implement F-I curve Let's first investigate the activation functions before simulating the dynamics of the entire population. In this exercise, you will implement a sigmoidal **F-I** curve or transfer function $F(x)$, with gain $a$ and threshold level $\theta$ as parameters.
###Code
def F(x, a, theta):
"""
Population activation function.
Args:
x (float): the population input
a (float): the gain of the function
theta (float): the threshold of the function
Returns:
float: the population activation response F(x) for input x
"""
#################################################
## TODO for students: compute f = F(x) ##
# Fill out function and remove
raise NotImplementedError("Student excercise: implement the f-I function")
#################################################
# Define the sigmoidal transfer function f = F(x)
f = ...
return f
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
# Uncomment below to test your function
# f = F(x, pars['a'], pars['theta'])
# plot_fI(x, f)
# to_remove solution
def F(x, a, theta):
"""
Population activation function.
Args:
x (float): the population input
a (float): the gain of the function
theta (float): the threshold of the function
Returns:
float: the population activation response F(x) for input x
"""
# Define the sigmoidal transfer function f = F(x)
f = (1 + np.exp(-a * (x - theta)))**-1 - (1 + np.exp(a * theta))**-1
return f
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
# Uncomment below to test your function
f = F(x, pars['a'], pars['theta'])
with plt.xkcd():
plot_fI(x, f)
###Output
_____no_output_____
###Markdown
Interactive Demo: Parameter exploration of F-I curveHere's an interactive demo that shows how the F-I curve changes for different values of the gain and threshold parameters. How do the gain and threshold parameters affect the F-I curve?
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
def interactive_plot_FI(a, theta):
"""
Population activation function.
  Expects:
a : the gain of the function
theta : the threshold of the function
Returns:
    plot the F-I curve with given parameters
"""
# set the range of input
x = np.arange(0, 10, .1)
plt.figure()
plt.plot(x, F(x, a, theta), 'k')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('F(x)', fontsize=14)
plt.show()
_ = widgets.interact(interactive_plot_FI, a=(0.3, 3, 0.3), theta=(2, 4, 0.2))
# to_remove explanation
"""
Discussion:
For the function we have chosen to model the F-I curve (eq 2),
- a determines the slope (gain) of the rising phase of the F-I curve
- theta determines the input at which the function F(x) reaches its mid-value (0.5).
That is, theta shifts the F-I curve along the horizontal axis.
For the neurons we are using in this tutorial:
- a controls the gain of the neuron population
- theta controls the threshold at which the neuron population starts to respond
""";
###Output
_____no_output_____
###Markdown
 Section 1.3: Simulation scheme of E dynamicsBecause $F(\cdot)$ is a nonlinear function, the exact solution of Equation $1$ cannot be determined via analytical methods. Therefore, numerical methods must be used to find the solution. In practice, the derivative on the left-hand side of Equation $1$ can be approximated using the Euler method on a time-grid of stepsize $\Delta t$:\begin{align}&\frac{dr}{dt} \approx \frac{r[k+1]-r[k]}{\Delta t} \end{align}where $r[k] = r(k\Delta t)$. Thus,$$\Delta r[k] = \frac{\Delta t}{\tau}[-r[k] + F(w\cdot r[k] + I_{\text{ext}}[k];a,\theta)]$$Hence, Equation (1) is updated at each time step by:$$r[k+1] = r[k] + \Delta r[k]$$
###Code
# @markdown *Execute this cell to enable the single population rate model simulator: `simulate_single`*
def simulate_single(pars):
"""
Simulate an excitatory population of neurons
Args:
pars : Parameter dictionary
Returns:
rE : Activity of excitatory population (array)
Example:
pars = default_pars_single()
r = simulate_single(pars)
"""
# Set parameters
tau, a, theta = pars['tau'], pars['a'], pars['theta']
w = pars['w']
I_ext = pars['I_ext']
r_init = pars['r_init']
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
# Initialize activity
r = np.zeros(Lt)
r[0] = r_init
I_ext = I_ext * np.ones(Lt)
# Update the E activity
for k in range(Lt - 1):
dr = dt / tau * (-r[k] + F(w * r[k] + I_ext[k], a, theta))
r[k+1] = r[k] + dr
return r
help(simulate_single)
###Output
_____no_output_____
###Markdown
Interactive Demo: Parameter Exploration of single population dynamicsNote that $w=0$, as in the default setting, means no recurrent input to the neuron population in Equation (1). Hence, the dynamics are entirely determined by the external input $I_{\text{ext}}$. Explore these dynamics in this interactive demo.How does $r_{\text{sim}}(t)$ change with different $I_{\text{ext}}$ values? How does it change with different $\tau$ values? Investigate the relationship between $F(I_{\text{ext}}; a, \theta)$ and the steady value of $r(t)$. Note that, $r_{\rm ana}(t)$ denotes the analytical solution - you will learn how this is computed in the next section.
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
# get default parameters
pars = default_pars_single(T=20.)
def Myplot_E_diffI_difftau(I_ext, tau):
# set external input and time constant
pars['I_ext'] = I_ext
pars['tau'] = tau
# simulation
r = simulate_single(pars)
# Analytical Solution
r_ana = (pars['r_init']
+ (F(I_ext, pars['a'], pars['theta'])
- pars['r_init']) * (1. - np.exp(-pars['range_t'] / pars['tau'])))
# plot
plt.figure()
plt.plot(pars['range_t'], r, 'b', label=r'$r_{\mathrm{sim}}$(t)', alpha=0.5,
zorder=1)
plt.plot(pars['range_t'], r_ana, 'b--', lw=5, dashes=(2, 2),
label=r'$r_{\mathrm{ana}}$(t)', zorder=2)
plt.plot(pars['range_t'],
F(I_ext, pars['a'], pars['theta']) * np.ones(pars['range_t'].size),
'k--', label=r'$F(I_{\mathrm{ext}})$')
plt.xlabel('t (ms)', fontsize=16.)
plt.ylabel('Activity r(t)', fontsize=16.)
plt.legend(loc='best', fontsize=14.)
plt.show()
_ = widgets.interact(Myplot_E_diffI_difftau, I_ext=(0.0, 10., 1.),
tau=(1., 5., 0.2))
# to_remove explanation
"""
Discussion:
Given the choice of F-I curve (eq 2) and dynamics of the neuron population (eq. 1)
the neurons have two fixed points or steady-state responses irrespective of the input.
- Weak inputs to the neurons eventually result in the activity converging to zero
- Strong inputs to the neurons eventually result in the activity converging to max value
The time constant tau does not affect the steady-state response, but it determines
the time the neurons take to reach their fixed point.
""";
###Output
_____no_output_____
###Markdown
Think!Above, we have numerically solved a system driven by a positive input and that, if $w_{EE} \neq 0$, receives an excitatory recurrent input (**extra challenge: try changing the value of $w_{EE}$ to a positive number and plotting the results of simulate_single**). Yet, $r_E(t)$ either decays to zero or reaches a fixed non-zero value.- Why doesn't the solution of the system "explode" in a finite time? In other words, what guarantees that $r_E$(t) stays finite? - Which parameter would you change in order to increase the maximum value of the response?
###Code
# to_remove explanation
"""
Discussion:
1) As the F-I curve is bounded between zero and one, the system doesn't explode.
The f-curve guarantees this property
2) One way to increase the maximum response is to change the f-I curve. For
example, the ReLU is an unbounded function, and thus will increase the overall maximal
response of the network.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: Fixed points of the single population system
###Code
# @title Video 2: Fixed point
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Ox3ELd1UFyo", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
_____no_output_____
###Markdown
 As you varied the two parameters in the last Interactive Demo, you noticed that, while at first the system output quickly changes, with time, it reaches its maximum/minimum value and does not change anymore. The value eventually reached by the system is called the **steady state** of the system, or the **fixed point**. Essentially, in the steady state the derivative of the activity ($r$) with respect to time is zero, i.e. $\displaystyle \frac{dr}{dt}=0$. We can find the steady state of Equation (1) by setting $\displaystyle{\frac{dr}{dt}=0}$ and solving for $r$:$$-r_{\text{steady}} + F(w\cdot r_{\text{steady}} + I_{\text{ext}};a,\theta) = 0, \qquad (3)$$When it exists, the solution of Equation (3) defines a **fixed point** of the dynamical system in Equation (1). Note that if $F(x)$ is nonlinear, it is not always possible to find an analytical solution, but the solution can be found via numerical simulations, as we will do later.From the Interactive Demo, one could also notice that the value of $\tau$ influences how quickly the activity will converge to the steady state from its initial value. In the specific case of $w=0$, we can also analytically compute the solution of Equation (1) (i.e., the thick blue dashed line) and deduce the role of $\tau$ in determining the convergence to the fixed point: $$\displaystyle{r(t) = \big{[}F(I_{\text{ext}};a,\theta) -r(t=0)\big{]} (1-\text{e}^{-\frac{t}{\tau}})} + r(t=0)$$We can now numerically calculate the fixed point with a root-finding algorithm. Exercise 2: Visualization of the fixed pointsWhen it is not possible to find the solution of Equation (3) analytically, a graphical approach can be taken. To that end, it is useful to plot $\displaystyle{\frac{dr}{dt}}$ as a function of $r$. The values of $r$ for which the plotted function crosses zero on the y axis correspond to fixed points. Here, let us, for example, set $w=5.0$ and $I^{\text{ext}}=0.5$. From Equation (1), you can obtain$$\frac{dr}{dt} = [-r + F(w\cdot r + I^{\text{ext}})]\,/\,\tau $$Then, plot $dr/dt$ as a function of $r$, and check for the presence of fixed points.
###Code
def compute_drdt(r, I_ext, w, a, theta, tau, **other_pars):
"""Given parameters, compute dr/dt as a function of r.
Args:
r (1D array) : Average firing rate of the excitatory population
I_ext, w, a, theta, tau (numbers): Simulation parameters to use
other_pars : Other simulation parameters are unused by this function
Returns
drdt function for each value of r
"""
#########################################################################
# TODO compute drdt and disable the error
raise NotImplementedError("Finish the compute_drdt function")
#########################################################################
# Calculate drdt
drdt = ...
return drdt
# Define a vector of r values and the simulation parameters
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
# Uncomment to test your function
# drdt = compute_drdt(r, **pars)
# plot_dr_r(r, drdt)
# to_remove solution
def compute_drdt(r, I_ext, w, a, theta, tau, **other_pars):
"""Given parameters, compute dr/dt as a function of r.
Args:
r (1D array) : Average firing rate of the excitatory population
I_ext, w, a, theta, tau (numbers): Simulation parameters to use
other_pars : Other simulation parameters are unused by this function
Returns
drdt function for each value of r
"""
# Calculate drdt
drdt = (-r + F(w * r + I_ext, a, theta)) / tau
return drdt
# Define a vector of r values and the simulation parameters
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
drdt = compute_drdt(r, **pars)
with plt.xkcd():
plot_dr_r(r, drdt)
###Output
_____no_output_____
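###Markdown
 The cell below is an added sketch (not part of the original exercise): it reads the approximate fixed points directly off the sampled curve by looking for sign changes of $dr/dt$, as a rough graphical counterpart to the root finder used in the next exercise.
###Code
# Added sketch: approximate fixed points = zero crossings of the sampled drdt.
crossings = np.where(np.diff(np.sign(drdt)) != 0)[0]
print('approximate fixed points near r =', np.round(r[crossings], 3))
###Output
_____no_output_____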
###Markdown
 Exercise 3: Fixed point calculationWe will now find the fixed points numerically. To do so, we need to specify initial values ($r_{\text{guess}}$) for the root-finding algorithm to start from. From the line $\displaystyle{\frac{dr}{dt}}$ plotted above in Exercise 2, initial values can be chosen as a set of values close to where the line crosses zero on the y axis (the real fixed points).The next cell defines three helper functions that we will use:- `my_fp_single(r_guess, **pars)` uses a root-finding algorithm to locate a fixed point near a given initial value- `check_fp_single(x_fp, **pars)` verifies that the values of $r_{\rm fp}$ for which $\displaystyle{\frac{dr}{dt}} = 0$ are the true fixed points- `my_fp_finder(pars, r_guess_vector)` accepts a parameter dictionary and an array of initial values and returns the fixed points found from them, using the above two functions
###Code
# @markdown *Execute this cell to enable the fixed point functions*
def my_fp_single(r_guess, a, theta, w, I_ext, **other_pars):
"""
Calculate the fixed point through drE/dt=0
Args:
r_guess : Initial value used for scipy.optimize function
a, theta, w, I_ext : simulation parameters
Returns:
x_fp : value of fixed point
"""
# define the right hand of E dynamics
def my_WCr(x):
r = x
drdt = (-r + F(w * r + I_ext, a, theta))
y = np.array(drdt)
return y
x0 = np.array(r_guess)
x_fp = opt.root(my_WCr, x0).x.item()
return x_fp
def check_fp_single(x_fp, a, theta, w, I_ext, mytol=1e-4, **other_pars):
"""
Verify |dr/dt| < mytol
Args:
x_fp : value of fixed point
a, theta, w, I_ext: simulation parameters
mytol : tolerance, default as 10^{-4}
Returns :
Whether it is a correct fixed point: True/False
"""
# calculate Equation(3)
y = x_fp - F(w * x_fp + I_ext, a, theta)
# Here we set tolerance as 10^{-4}
return np.abs(y) < mytol
def my_fp_finder(pars, r_guess_vector, mytol=1e-4):
"""
Calculate the fixed point(s) through drE/dt=0
Args:
pars : Parameter dictionary
r_guess_vector : Initial values used for scipy.optimize function
mytol : tolerance for checking fixed point, default as 10^{-4}
Returns:
x_fps : values of fixed points
"""
x_fps = []
for r_guess in r_guess_vector:
x_fp = my_fp_single(r_guess, **pars)
if check_fp_single(x_fp, **pars, mytol=mytol):
x_fps.append(x_fp)
return x_fps
help(my_fp_finder)
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
drdt = compute_drdt(r, **pars)
#############################################################################
# TODO for students:
# Define initial values close to the intersections of drdt and y=0
# (How many initial values? Hint: How many times do the two lines intersect?)
# Calculate the fixed point with these initial values and plot them
#############################################################################
r_guess_vector = [...]
# Uncomment to test your values
# x_fps = my_fp_finder(pars, r_guess_vector)
# plot_dr_r(r, drdt, x_fps)
# to_remove solution
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
drdt = compute_drdt(r, **pars)
r_guess_vector = [0, .4, .9]
x_fps = my_fp_finder(pars, r_guess_vector)
with plt.xkcd():
plot_dr_r(r, drdt, x_fps)
###Output
_____no_output_____
###Markdown
Interactive Demo: fixed points as a function of recurrent and external inputs.You can now explore how the previous plot changes when the recurrent coupling $w$ and the external input $I_{\text{ext}}$ take different values. How does the number of fixed points change?
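If you prefer to do this exploration programmatically, here is a rough sketch (reusing the `my_fp_finder` and `default_pars_single` helpers defined above; the rounding and `np.unique` step merely collapses guesses that converge to the same root):

```python
import numpy as np

for w in np.arange(1., 7.1, 1.):
  pars = default_pars_single(w=w, I_ext=0.5)
  fps = my_fp_finder(pars, [0, .4, .9])
  n_fp = len(np.unique(np.round(fps, 3)))  # distinct fixed points found
  print(f"w = {w:.1f}: {n_fp} fixed point(s)")
```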
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
def plot_intersection_single(w, I_ext):
# set your parameters
pars = default_pars_single(w=w, I_ext=I_ext)
# find fixed points
r_init_vector = [0, .4, .9]
x_fps = my_fp_finder(pars, r_init_vector)
# plot
r = np.linspace(0, 1., 1000)
drdt = (-r + F(w * r + I_ext, pars['a'], pars['theta'])) / pars['tau']
plot_dr_r(r, drdt, x_fps)
_ = widgets.interact(plot_intersection_single, w=(1, 7, 0.2),
I_ext=(0, 3, 0.1))
# to_remove explanation
"""
Discussion:
The fixed points of the single excitatory neuron population are determined by both
recurrent connections w and external input I_ext. In a previous interactive demo
we saw how the system showed two different steady states when w = 0. But when w
does not equal 0, for some range of w the system shows three fixed points (the middle
one being unstable) and the steady state depends on the initial conditions (i.e.,
r at time zero).
More on this will be explained in the next section.
""";
###Output
_____no_output_____
###Markdown
--- SummaryIn this tutorial, we have investigated the dynamics of a rate-based single population of neurons.We learned about:- The effect of the input parameters and the time constant of the network on the dynamics of the population.- How to find the fixed point(s) of the system.Next, we cover two bonus, but important, concepts in dynamical systems analysis and simulation. If you have time left, watch the next video and proceed to solve the exercises. You will learn:- How to determine the stability of a fixed point by linearizing the system.- How to add realistic inputs to our model. --- Bonus 1: Stability of a fixed point
###Code
# @title Video 3: Stability of fixed points
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="KKMlWWU83Jg", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
_____no_output_____
###Markdown
Initial values and trajectoriesHere, let us first set $w=5.0$ and $I_{\text{ext}}=0.5$, and investigate the dynamics of $r(t)$ starting with different initial values $r(0) \equiv r_{\text{init}}$. We will plot the trajectories of $r(t)$ with $r_{\text{init}} = 0.0, 0.1, 0.2,..., 0.9$.
###Code
# @markdown Execute this cell to see the trajectories!
pars = default_pars_single()
pars['w'] = 5.0
pars['I_ext'] = 0.5
plt.figure(figsize=(8, 5))
for ie in range(10):
pars['r_init'] = 0.1 * ie # set the initial value
r = simulate_single(pars) # run the simulation
# plot the activity with given initial
plt.plot(pars['range_t'], r, 'b', alpha=0.1 + 0.1 * ie,
label=r'r$_{\mathrm{init}}$=%.1f' % (0.1 * ie))
plt.xlabel('t (ms)')
plt.title('Two steady states?')
plt.ylabel(r'$r$(t)')
plt.legend(loc=[1.01, -0.06], fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
Interactive Demo: dynamics as a function of the initial valueLet's now set $r_{\rm init}$ to a value of your choice in this demo. How does the solution change? What do you observe?
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
pars = default_pars_single(w=5.0, I_ext=0.5)
def plot_single_diffEinit(r_init):
pars['r_init'] = r_init
r = simulate_single(pars)
plt.figure()
plt.plot(pars['range_t'], r, 'b', zorder=1)
plt.plot(0, r[0], 'bo', alpha=0.7, zorder=2)
plt.xlabel('t (ms)', fontsize=16)
plt.ylabel(r'$r(t)$', fontsize=16)
plt.ylim(0, 1.0)
plt.show()
_ = widgets.interact(plot_single_diffEinit, r_init=(0, 1, 0.02))
# to_remove explanation
"""
Discussion:
To better appreciate what is happening here, you should go back to the previous
interactive demo. Set w = 5 and I_ext = 0.5.
You will find that there are three fixed points of the system for these values of
w and I_ext. Now, choose the initial value in this demo and see in which direction
the system output moves. When r_init is in the vicinity of the leftmost fixed point,
it moves towards the leftmost fixed point. When r_init is in the vicinity of the
rightmost fixed point, it moves towards the rightmost fixed point.
""";
###Output
_____no_output_____
###Markdown
Stability analysis via linearization of the dynamicsJust like Equation $1$ in the case ($w=0$) discussed above, a generic linear system $$\frac{dx}{dt} = \lambda (x - b),$$ has a fixed point for $x=b$. The analytical solution of such a system can be found to be:$$x(t) = b + \big{(} x(0) - b \big{)} \text{e}^{\lambda t}.$$ Now consider a small perturbation of the activity around the fixed point: $x(0) = b + \epsilon$, where $|\epsilon| \ll 1$. Will the perturbation $\epsilon(t)$ grow with time or will it decay to the fixed point? The evolution of the perturbation with time can be written, using the analytical solution for $x(t)$, as: $$\epsilon (t) = x(t) - b = \epsilon \text{e}^{\lambda t}$$- if $\lambda < 0$, $\epsilon(t)$ decays to zero, $x(t)$ will still converge to $b$ and the fixed point is "**stable**".- if $\lambda > 0$, $\epsilon(t)$ grows with time, $x(t)$ will leave the fixed point $b$ exponentially, and the fixed point is, therefore, "**unstable**". Compute the stability of Equation $1$Similar to what we did in the linear system above, in order to determine the stability of a fixed point $r^{*}$ of the excitatory population dynamics, we perturb Equation (1) around $r^{*}$ by $\epsilon$, i.e. $r = r^{*} + \epsilon$. Plugging this into Equation (1), we obtain the equation determining the time evolution of the perturbation $\epsilon(t)$:\begin{align}\tau \frac{d\epsilon}{dt} \approx -\epsilon + w F'(w\cdot r^{*} + I_{\text{ext}};a,\theta) \epsilon \end{align}where $F'(\cdot)$ is the derivative of the transfer function $F(\cdot)$. We can rewrite the above equation as:\begin{align}\frac{d\epsilon}{dt} \approx \frac{\epsilon}{\tau}[-1 + w F'(w\cdot r^* + I_{\text{ext}};a,\theta)] \end{align}That is, as in the linear system above, the value of$$\lambda = [-1+ wF'(w\cdot r^* + I_{\text{ext}};a,\theta)]/\tau \qquad (4)$$determines whether the perturbation will grow or decay to zero, i.e., $\lambda$ defines the stability of the fixed point. This value is called the **eigenvalue** of the dynamical system. Exercise 4: Compute $dF$The derivative of the sigmoid transfer function is:\begin{align} \frac{dF}{dx} & = \frac{d}{dx} (1+\exp\{-a(x-\theta)\})^{-1} \\& = a\exp\{-a(x-\theta)\} (1+\exp\{-a(x-\theta)\})^{-2}. \qquad (5)\end{align}Let's now implement the expression for the derivative $\displaystyle{\frac{dF}{dx}}$ in the following cell and plot it.
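As a small illustration of this argument (a toy example using only NumPy, not part of the exercises): simulate the generic linear system above with Euler steps and check that the perturbation indeed follows $\epsilon\,\text{e}^{\lambda t}$.

```python
import numpy as np

lam, b, eps, dt = -0.5, 1.0, 0.01, 0.001   # a stable case (lambda < 0)
t = np.arange(0, 5, dt)
x = np.empty_like(t)
x[0] = b + eps                             # start slightly off the fixed point
for k in range(len(t) - 1):
  x[k + 1] = x[k] + dt * lam * (x[k] - b)  # Euler step of dx/dt = lambda * (x - b)
print(np.max(np.abs((x - b) - eps * np.exp(lam * t))))  # tiny (discretization error)
```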
###Code
def dF(x, a, theta):
"""
Derivative of the population activation function.
Args:
x : the population input
a : the gain of the function
theta : the threshold of the function
Returns:
dFdx : the derivative dF/dx of the population activation function, evaluated at x
"""
###########################################################################
# TODO for students: compute dFdx ##
raise NotImplementedError("Student excercise: compute the deravitive of F")
###########################################################################
  # Calculate the derivative of the population activation function
dFdx = ...
return dFdx
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
# Uncomment below to test your function
# df = dF(x, pars['a'], pars['theta'])
# plot_dFdt(x, df)
# to_remove solution
def dF(x, a, theta):
"""
Derivative of the population activation function.
Args:
x : the population input
a : the gain of the function
theta : the threshold of the function
Returns:
dFdx : the derivative dF/dx of the population activation function, evaluated at x
"""
  # Calculate the derivative of the population activation function
dFdx = a * np.exp(-a * (x - theta)) * (1 + np.exp(-a * (x - theta)))**-2
return dFdx
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
df = dF(x, pars['a'], pars['theta'])
with plt.xkcd():
plot_dFdt(x, df)
###Output
_____no_output_____
###Markdown
Exercise 5: Compute eigenvaluesAs discussed above, for the case with $w=5.0$ and $I_{\text{ext}}=0.5$, the system displays **three** fixed points. However, when we simulated the dynamics and varied the initial conditions $r_{\rm init}$, we could only obtain **two** steady states. In this exercise, we will now check the stability of each of the three fixed points by calculating the corresponding eigenvalues with the function `eig_single`. Check the sign of each eigenvalue (i.e., stability of each fixed point). How many of the fixed points are stable?Recall that the expression for the eigenvalue at a fixed point $r^*$ is$$\lambda = [-1+ wF'(w\cdot r^* + I_{\text{ext}};a,\theta)]/\tau$$
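Before (or after) computing the eigenvalues analytically, you can also probe stability by brute force. A short sketch, assuming `simulate_single`, `my_fp_finder`, and `default_pars_single` from earlier cells (the indexing assumes the three guesses below converge to the left, middle, and right fixed points, in that order):

```python
pars = default_pars_single(w=5, I_ext=.5, T=50.)
x_fp = my_fp_finder(pars, [0, .4, .9])
pars['r_init'] = x_fp[1] + 1e-3    # nudge the activity slightly off the middle fixed point
r = simulate_single(pars)
print(f"middle fixed point: {x_fp[1]:.3f}, activity at the end: {r[-1]:.3f}")
# the trajectory runs away from x_fp[1], consistent with a positive eigenvalue there
```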
###Code
def eig_single(fp, tau, a, theta, w, I_ext, **other_pars):
"""
Args:
fp : fixed point r_fp
tau, a, theta, w, I_ext : Simulation parameters
Returns:
eig : eigenvalue of the linearized system
"""
#####################################################################
## TODO for students: compute eigenvalue and disable the error
raise NotImplementedError("Student excercise: compute the eigenvalue")
######################################################################
# Compute the eigenvalue
eig = ...
return eig
# Find the eigenvalues for all fixed points of Exercise 2
pars = default_pars_single(w=5, I_ext=.5)
r_guess_vector = [0, .4, .9]
x_fp = my_fp_finder(pars, r_guess_vector)
# Uncomment below lines after completing the eig_single function.
# for i, fp in enumerate(x_fp):
#   eig_fp = eig_single(fp, **pars)
#   print(f'Fixed point{i+1} at {fp:.3f} with Eigenvalue={eig_fp:.3f}')
###Output
_____no_output_____
###Markdown
**SAMPLE OUTPUT**```Fixed point1 at 0.042 with Eigenvalue=-0.583Fixed point2 at 0.447 with Eigenvalue=0.498Fixed point3 at 0.900 with Eigenvalue=-0.626```
###Code
# to_remove solution
def eig_single(fp, tau, a, theta, w, I_ext, **other_pars):
"""
Args:
fp : fixed point r_fp
tau, a, theta, w, I_ext : Simulation parameters
Returns:
eig : eigenvalue of the linearized system
"""
# Compute the eigenvalue
eig = (-1. + w * dF(w * fp + I_ext, a, theta)) / tau
return eig
# Find the eigenvalues for all fixed points of Exercise 2
pars = default_pars_single(w=5, I_ext=.5)
r_guess_vector = [0, .4, .9]
x_fp = my_fp_finder(pars, r_guess_vector)
for i, fp in enumerate(x_fp):
  eig_fp = eig_single(fp, **pars)
  print(f'Fixed point{i+1} at {fp:.3f} with Eigenvalue={eig_fp:.3f}')
###Output
_____no_output_____
###Markdown
Think! Throughout the tutorial, we have assumed $w> 0 $, i.e., we considered a single population of **excitatory** neurons. What do you think will be the behavior of a population of inhibitory neurons, i.e., where $w> 0$ is replaced by $w< 0$?
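If you want a quick numerical hint before reading the discussion below, here is a minimal sketch (reusing the `my_fp_finder` and `default_pars_single` helpers; `w=-5` is just an arbitrary inhibitory value chosen for illustration):

```python
import numpy as np

pars = default_pars_single(w=-5., I_ext=0.5)
fps = my_fp_finder(pars, [0, .4, .9])
print(np.unique(np.round(fps, 4)))  # expect all guesses to land on a single fixed point
```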
###Code
# to_remove explanation
"""
Discussion:
You can check this by going back to the second-to-last interactive demo and setting the
weight to w<0. You will notice that the system has only one fixed point, and that
it lies at (or very close to) zero. For these dynamics, the system will eventually converge
to that value. But try it out.
""";
###Output
_____no_output_____
###Markdown
--- Bonus 2: Noisy input drives the transition between two stable states Ornstein-Uhlenbeck (OU) processAs discussed in several previous tutorials, the OU process is usually used to generate a noisy input into the neuron. The OU input $\eta(t)$ follows: $$\tau_\eta \frac{d}{dt}\eta(t) = -\eta (t) + \sigma_\eta\sqrt{2\tau_\eta}\xi(t)$$Execute the following function `my_OU(pars, sig, myseed=False)` to generate an OU process.
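As a reading aid for the cell below (this is just a restatement of the code, not new material), the stochastic equation above is integrated with an Euler-Maruyama step of the form

$$\eta[k+1] = \eta[k] + \frac{\Delta t}{\tau_\eta}\big(0 - \eta[k]\big) + \sigma_\eta\sqrt{\frac{2\Delta t}{\tau_\eta}}\,\xi[k],$$

where the $\xi[k]$ are independent standard normal samples.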
###Code
# @title OU process `my_OU(pars, sig, myseed=False)`
# @markdown Make sure you execute this cell to visualize the noise!
def my_OU(pars, sig, myseed=False):
"""
A function that generates an Ornstein-Uhlenbeck process
Args:
pars : parameter dictionary
sig : noise amplitude
myseed : random seed. int or boolean
Returns:
I_ou : Ornstein-Uhlenbeck input current
"""
# Retrieve simulation parameters
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
tau_ou = pars['tau_ou'] # [ms]
# set random seed
if myseed:
np.random.seed(seed=myseed)
else:
np.random.seed()
# Initialize
noise = np.random.randn(Lt)
I_ou = np.zeros(Lt)
I_ou[0] = noise[0] * sig
# generate OU
for it in range(Lt - 1):
I_ou[it + 1] = (I_ou[it]
+ dt / tau_ou * (0. - I_ou[it])
+ np.sqrt(2 * dt / tau_ou) * sig * noise[it + 1])
return I_ou
pars = default_pars_single(T=100)
pars['tau_ou'] = 1. # [ms]
sig_ou = 0.1
I_ou = my_OU(pars, sig=sig_ou, myseed=2020)
plt.figure(figsize=(10, 4))
plt.plot(pars['range_t'], I_ou, 'r')
plt.xlabel('t (ms)')
plt.ylabel(r'$I_{\mathrm{OU}}$')
plt.show()
###Output
_____no_output_____
###Markdown
Example: Up-Down transitionIn the presence of two or more fixed points, noisy inputs can drive a transition between the fixed points! Here, we stimulate an E population for 1,000 ms applying OU inputs.
###Code
# @title Simulation of an E population with OU inputs
# @markdown Make sure you execute this cell to spot the Up-Down states!
pars = default_pars_single(T=1000)
pars['w'] = 5.0
sig_ou = 0.7
pars['tau_ou'] = 1. # [ms]
pars['I_ext'] = 0.56 + my_OU(pars, sig=sig_ou, myseed=2020)
r = simulate_single(pars)
plt.figure(figsize=(10, 4))
plt.plot(pars['range_t'], r, 'b', alpha=0.8)
plt.xlabel('t (ms)')
plt.ylabel(r'$r(t)$')
plt.show()
###Output
_____no_output_____
###Markdown
Neuromatch Academy: Week 3, Day 2, Tutorial 1 Neuronal Network Dynamics: Neural Rate Models__Content creators:__ Qinglong Gu, Songtin Li, Arvind Kumar, John Murray, Julijana Gjorgjieva __Content reviewers:__ Maryam Vaziri-Pashkam, Ella Batty, Lorenzo Fontolan, Richard Gao, Spiros Chavlis, Michael Waskom --- Tutorial ObjectivesThe brain is a complex system, not because it is composed of a large number of diverse types of neurons, but mainly because of how neurons are connected to each other. The brain is indeed a network of highly specialized neuronal networks. The activity of a neural network constantly evolves in time. For this reason, neurons can be modeled as dynamical systems. The dynamical system approach is only one of the many modeling approaches that computational neuroscientists have developed (other points of view include information processing, statistical models, etc.). How the dynamics of neuronal networks affect the representation and processing of information in the brain is an open question. However, signatures of altered brain dynamics present in many brain diseases (e.g., in epilepsy or Parkinson's disease) tell us that it is crucial to study network activity dynamics if we want to understand the brain.In this tutorial, we will simulate and study one of the simplest models of biological neuronal networks. Instead of modeling and simulating individual excitatory neurons (e.g., LIF models that you implemented yesterday), we will treat them as a single homogeneous population and approximate their dynamics using a single one-dimensional equation describing the evolution of their average spiking rate in time.In this tutorial, we will learn how to build a firing rate model of a single population of excitatory neurons. **Steps:**- Write the equation for the firing rate dynamics of a 1D excitatory population.- Visualize the response of the population as a function of parameters such as threshold level and gain, using the frequency-current (F-I) curve.- Numerically simulate the dynamics of the excitatory population and find the fixed points of the system. - Investigate the stability of the fixed points by linearizing the dynamics around them. --- Setup
###Code
# Imports
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt # root-finding algorithm
# @title Figure Settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
def plot_fI(x, f):
plt.figure(figsize=(6, 4)) # plot the figure
plt.plot(x, f, 'k')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('F(x)', fontsize=14)
plt.show()
def plot_dr_r(r, drdt, x_fps=None):
plt.figure()
plt.plot(r, drdt, 'k')
plt.plot(r, 0. * r, 'k--')
if x_fps is not None:
plt.plot(x_fps, np.zeros_like(x_fps), "ko", ms=12)
plt.xlabel(r'$r$')
plt.ylabel(r'$\frac{dr}{dt}$', fontsize=20)
plt.ylim(-0.1, 0.1)
def plot_dFdt(x, dFdt):
plt.figure()
plt.plot(x, dFdt, 'r')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('dF(x)', fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
--- Section 1: Neuronal network dynamics
###Code
# @title Video 1: Dynamic networks
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="p848349hPyw", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=p848349hPyw
###Markdown
Section 1.1: Dynamics of a single excitatory populationIndividual neurons respond by spiking. When we average the spikes of neurons in a population, we can define the average firing activity of the population. In this model, we are interested in how the population-averaged firing varies as a function of time and network parameters. Mathematically, we can describe the firing rate dynamic as:\begin{align}\tau \frac{dr}{dt} &= -r + F(w\cdot r + I_{\text{ext}}) \quad\qquad (1)\end{align}$r(t)$ represents the average firing rate of the excitatory population at time $t$, $\tau$ controls the timescale of the evolution of the average firing rate, $w$ denotes the strength (synaptic weight) of the recurrent input to the population, $I_{\text{ext}}$ represents the external input, and the transfer function $F(\cdot)$ (which can be related to f-I curve of individual neurons described in the next sections) represents the population activation function in response to all received inputs.To start building the model, please execute the cell below to initialize the simulation parameters.
###Code
# @markdown *Execute this cell to set default parameters for a single excitatory population model*
def default_pars_single(**kwargs):
pars = {}
# Excitatory parameters
pars['tau'] = 1. # Timescale of the E population [ms]
pars['a'] = 1.2 # Gain of the E population
pars['theta'] = 2.8 # Threshold of the E population
# Connection strength
pars['w'] = 0. # E to E, we first set it to 0
# External input
pars['I_ext'] = 0.
# simulation parameters
pars['T'] = 20. # Total duration of simulation [ms]
pars['dt'] = .1 # Simulation time step [ms]
pars['r_init'] = 0.2 # Initial value of E
# External parameters if any
pars.update(kwargs)
# Vector of discretized time points [ms]
pars['range_t'] = np.arange(0, pars['T'], pars['dt'])
return pars
###Output
_____no_output_____
###Markdown
You can now use:- `pars = default_pars_single()` to get all the parameters, and then you can execute `print(pars)` to check these parameters. - `pars = default_pars_single(T=T_sim, dt=time_step)` to set new simulation time and time step- To update an existing parameter dictionary, use `pars['New_para'] = value`Because `pars` is a dictionary, it can be passed to a function that requires individual parameters as arguments using `my_func(**pars)` syntax. Section 1.2: F-I curvesIn electrophysiology, a neuron is often characterized by its spike rate output in response to input currents. This is often called the **F-I** curve, denoting the output spike frequency (**F**) in response to different injected currents (**I**). We estimated this for an LIF neuron in yesterday's tutorial.The transfer function $F(\cdot)$ in Equation $1$ represents the gain of the population as a function of the total input. The gain is often modeled as a sigmoidal function, i.e., more input drive leads to a nonlinear increase in the population firing rate. The output firing rate will eventually saturate for high input values. A sigmoidal $F(\cdot)$ is parameterized by its gain $a$ and threshold $\theta$.$$ F(x;a,\theta) = \frac{1}{1+\text{e}^{-a(x-\theta)}} - \frac{1}{1+\text{e}^{a\theta}} \quad(2)$$The argument $x$ represents the input to the population. Note that the second term is chosen so that $F(0;a,\theta)=0$.Many other transfer functions (generally monotonic) can be also used. Examples are the rectified linear function $ReLU(x)$ or the hyperbolic tangent $tanh(x)$. Exercise 1: Implement F-I curve Let's first investigate the activation functions before simulating the dynamics of the entire population. In this exercise, you will implement a sigmoidal **F-I** curve or transfer function $F(x)$, with gain $a$ and threshold level $\theta$ as parameters.
###Code
def F(x, a, theta):
"""
Population activation function.
Args:
x (float): the population input
a (float): the gain of the function
theta (float): the threshold of the function
Returns:
float: the population activation response F(x) for input x
"""
#################################################
## TODO for students: compute f = F(x) ##
# Fill out function and remove
raise NotImplementedError("Student excercise: implement the f-I function")
#################################################
# Define the sigmoidal transfer function f = F(x)
f = ...
return f
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
# Uncomment below to test your function
# f = F(x, pars['a'], pars['theta'])
# plot_fI(x, f)
# to_remove solution
def F(x, a, theta):
"""
Population activation function.
Args:
x (float): the population input
a (float): the gain of the function
theta (float): the threshold of the function
Returns:
float: the population activation response F(x) for input x
"""
# Define the sigmoidal transfer function f = F(x)
f = (1 + np.exp(-a * (x - theta)))**-1 - (1 + np.exp(a * theta))**-1
return f
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
# Uncomment below to test your function
f = F(x, pars['a'], pars['theta'])
with plt.xkcd():
plot_fI(x, f)
###Output
_____no_output_____
###Markdown
Interactive Demo: Parameter exploration of F-I curveHere's an interactive demo that shows how the F-I curve changes for different values of the gain and threshold parameters. How do the gain and threshold parameters affect the F-I curve?
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
def interactive_plot_FI(a, theta):
"""
Population activation function.
  Expects:
a : the gain of the function
theta : the threshold of the function
Returns:
    plot the F-I curve with the given parameters
"""
# set the range of input
x = np.arange(0, 10, .1)
plt.figure()
plt.plot(x, F(x, a, theta), 'k')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('F(x)', fontsize=14)
plt.show()
_ = widgets.interact(interactive_plot_FI, a=(0.3, 3, 0.3), theta=(2, 4, 0.2))
# to_remove explanation
"""
Discussion:
For the function we have chosen to model the F-I curve (eq 2),
- a determines the slope (gain) of the rising phase of the F-I curve
- theta determines the input at which the function F(x) reaches its mid-value (0.5).
That is, theta shifts the F-I curve along the horizontal axis.
For the neurons we are using in this tutorial:
- a controls the gain of the neuron population
- theta controls the threshold at which the neuron population starts to respond
""";
###Output
_____no_output_____
###Markdown
Section 1.3: Simulation scheme of E dynamicsBecause $F(\cdot)$ is a nonlinear function, the exact solution of Equation $1$ cannot be determined via analytical methods. Therefore, numerical methods must be used to find the solution. In practice, the derivative on the left-hand side of Equation $1$ can be approximated using the Euler method on a time-grid of stepsize $\Delta t$:\begin{align}&\frac{dr}{dt} \approx \frac{r[k+1]-r[k]}{\Delta t} \end{align}where $r[k] = r(k\Delta t)$. Thus,$$\Delta r[k] = \frac{\Delta t}{\tau}[-r[k] + F(w\cdot r[k] + I_{\text{ext}}[k];a,\theta)]$$Hence, Equation (1) is updated at each time step by:$$r[k+1] = r[k] + \Delta r[k]$$
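As a quick numerical check of this update rule: with the default parameters ($r[0]=0.2$, $\Delta t = 0.1$ ms, $\tau = 1$ ms, $w=0$, $I_{\text{ext}}=0$), the first step gives $\Delta r[0] = \frac{0.1}{1}\,[-0.2 + F(0;a,\theta)] = -0.02$ (recall that $F(0;a,\theta)=0$ by construction), so $r[1] = 0.2 - 0.02 = 0.18$.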
###Code
# @markdown *Execute this cell to enable the single population rate model simulator: `simulate_single`*
def simulate_single(pars):
"""
Simulate an excitatory population of neurons
Args:
pars : Parameter dictionary
Returns:
rE : Activity of excitatory population (array)
Example:
pars = default_pars_single()
r = simulate_single(pars)
"""
# Set parameters
tau, a, theta = pars['tau'], pars['a'], pars['theta']
w = pars['w']
I_ext = pars['I_ext']
r_init = pars['r_init']
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
# Initialize activity
r = np.zeros(Lt)
r[0] = r_init
I_ext = I_ext * np.ones(Lt)
# Update the E activity
for k in range(Lt - 1):
dr = dt / tau * (-r[k] + F(w * r[k] + I_ext[k], a, theta))
r[k+1] = r[k] + dr
return r
help(simulate_single)
###Output
Help on function simulate_single in module __main__:
simulate_single(pars)
Simulate an excitatory population of neurons
Args:
pars : Parameter dictionary
Returns:
rE : Activity of excitatory population (array)
Example:
pars = default_pars_single()
r = simulate_single(pars)
###Markdown
Interactive Demo: Parameter Exploration of single population dynamicsNote that $w=0$, as in the default setting, means no recurrent input to the neuron population in Equation (1). Hence, the dynamics are entirely determined by the external input $I_{\text{ext}}$. Explore these dynamics in this interactive demo.How does $r_{\text{sim}}(t)$ change with different $I_{\text{ext}}$ values? How does it change with different $\tau$ values? Investigate the relationship between $F(I_{\text{ext}}; a, \theta)$ and the steady value of $r(t)$. Note that, $r_{\rm ana}(t)$ denotes the analytical solution - you will learn how this is computed in the next section.
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
# get default parameters
pars = default_pars_single(T=20.)
def Myplot_E_diffI_difftau(I_ext, tau):
# set external input and time constant
pars['I_ext'] = I_ext
pars['tau'] = tau
# simulation
r = simulate_single(pars)
# Analytical Solution
r_ana = (pars['r_init']
+ (F(I_ext, pars['a'], pars['theta'])
- pars['r_init']) * (1. - np.exp(-pars['range_t'] / pars['tau'])))
# plot
plt.figure()
plt.plot(pars['range_t'], r, 'b', label=r'$r_{\mathrm{sim}}$(t)', alpha=0.5,
zorder=1)
plt.plot(pars['range_t'], r_ana, 'b--', lw=5, dashes=(2, 2),
label=r'$r_{\mathrm{ana}}$(t)', zorder=2)
plt.plot(pars['range_t'],
F(I_ext, pars['a'], pars['theta']) * np.ones(pars['range_t'].size),
'k--', label=r'$F(I_{\mathrm{ext}})$')
plt.xlabel('t (ms)', fontsize=16.)
plt.ylabel('Activity r(t)', fontsize=16.)
plt.legend(loc='best', fontsize=14.)
plt.show()
_ = widgets.interact(Myplot_E_diffI_difftau, I_ext=(0.0, 10., 1.),
tau=(1., 5., 0.2))
# to_remove explanation
"""
Discussion:
Given the choice of F-I curve (eq 2) and dynamics of the neuron population (eq. 1)
the neurons have two fixed points or steady-state responses irrespective of the input.
- Weak inputs to the neurons eventually result in the activity converging to zero
- Strong inputs to the neurons eventually result in the activity converging to max value
The time constant tau does not affect the steady-state response, but it determines
the time the neurons take to reach their fixed point.
""";
###Output
_____no_output_____
###Markdown
Think!Above, we have numerically solved a system driven by a positive input and that, if $w_{EE} \neq 0$, receives an excitatory recurrent input (**extra challenge: try changing the value of $w_{EE}$ to a positive number and plotting the results of simulate_single**). Yet, $r_E(t)$ either decays to zero or reaches a fixed non-zero value.- Why doesn't the solution of the system "explode" in a finite time? In other words, what guarantees that $r_E$(t) stays finite? - Which parameter would you change in order to increase the maximum value of the response?
###Code
# to_remove explanation
"""
Discussion:
1) As the F-I curve is bounded between zero and one, the system doesn't explode:
the boundedness of the F-I curve guarantees this property.
2) One way to increase the maximum response is to change the f-I curve. For
example, the ReLU is an unbounded function, and thus will increase the overall maximal
response of the network.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: Fixed points of the single population system
###Code
# @title Video 2: Fixed point
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Ox3ELd1UFyo", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=Ox3ELd1UFyo
###Markdown
As you varied the two parameters in the last Interactive Demo, you noticed that, while at first the system output quickly changes, with time, it reaches its maximum/minimum value and does not change anymore. The value eventually reached by the system is called the **steady state** of the system, or the **fixed point**. Essentially, in the steady states the derivative with respect to time of the activity ($r$) is zero, i.e. $\displaystyle \frac{dr}{dt}=0$. We can find the steady state of Equation (1) by setting $\displaystyle{\frac{dr}{dt}=0}$ and solving for $r$:$$-r_{\text{steady}} + F(w\cdot r_{\text{steady}} + I_{\text{ext}};a,\theta) = 0, \qquad (3)$$When it exists, the solution of Equation (3) defines a **fixed point** of the dynamical system in Equation (1). Note that if $F(x)$ is nonlinear, it is not always possible to find an analytical solution, but the solution can be found via numerical simulations, as we will do later.From the Interactive Demo, one could also notice that the value of $\tau$ influences how quickly the activity will converge to the steady state from its initial value. In the specific case of $w=0$, we can also analytically compute the solution of Equation (1) (i.e., the thick blue dashed line) and deduce the role of $\tau$ in determining the convergence to the fixed point: $$\displaystyle{r(t) = \big{[}F(I_{\text{ext}};a,\theta) -r(t=0)\big{]} (1-\text{e}^{-\frac{t}{\tau}})} + r(t=0)$$ \\We can now numerically calculate the fixed point with a root finding algorithm. Exercise 2: Visualization of the fixed pointsWhen it is not possible to find the solution for Equation (3) analytically, a graphical approach can be taken. To that end, it is useful to plot $\displaystyle{\frac{dr}{dt}}$ as a function of $r$. The values of $r$ for which the plotted function crosses zero on the y axis correspond to fixed points. Here, let us, for example, set $w=5.0$ and $I^{\text{ext}}=0.5$. From Equation (1), you can obtain$$\frac{dr}{dt} = [-r + F(w\cdot r + I^{\text{ext}})]\,/\,\tau $$Then, plot $dr/dt$ as a function of $r$, and check for the presence of fixed points.
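For reference, the root-finding step mentioned above can be as simple as the following minimal sketch (assuming the `F` and `default_pars_single` defined in earlier cells and the `opt` alias for `scipy.optimize` from the setup; the initial guess of 0.9 is chosen near the upper zero-crossing):

```python
pars = default_pars_single(I_ext=0.5, w=5.)

def drdt_rhs(r):
  # right-hand side of Equation (3): equals zero at a fixed point
  return -r + F(pars['w'] * r + pars['I_ext'], pars['a'], pars['theta'])

r_fp = opt.root(drdt_rhs, np.array(0.9)).x
print(r_fp)  # should land near the upper fixed point (about 0.9)
```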
###Code
def compute_drdt(r, I_ext, w, a, theta, tau, **other_pars):
"""Given parameters, compute dr/dt as a function of r.
Args:
r (1D array) : Average firing rate of the excitatory population
I_ext, w, a, theta, tau (numbers): Simulation parameters to use
other_pars : Other simulation parameters are unused by this function
Returns
drdt function for each value of r
"""
#########################################################################
# TODO compute drdt and disable the next line
raise NotImplementedError("Finish the compute_drdt function")
#########################################################################
# Calculate drdt
drdt = ...
return drdt
# Define a vector of r values and the simulation parameters
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
# Uncomment to test your function
# drdt = compute_drdt(r, **pars)
# plot_dr_r(r, drdt)
def compute_drdt(r, I_ext, w, a, theta, tau, **other_pars):
"""Given parameters, compute dr/dt as a function of r.
Args:
r (1D array) : Average firing rate of the excitatory population
I_ext, w, a, theta, tau (numbers): Simulation parameters to use
other_pars : Other simulation parameters are unused by this function
Returns
drdt function for each value of r
"""
# Calculate drdt
  drdt = (-r + F(w * r + I_ext, a, theta)) / tau
return drdt
# Define a vector of r values and the simulation parameters
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
drdt = compute_drdt(r, **pars)
with plt.xkcd():
plot_dr_r(r, drdt)
###Output
_____no_output_____
###Markdown
Exercise 3: Fixed point calculationWe will now find the fixed points numerically. To do so, we need to specify proper initial values ($r_{\text{init}}$). From the line $\displaystyle{\frac{dr}{dt}}$ plotted above in Exercise 2, initial values can be chosen as a set of values close to where the line crosses zero on the y axis (real fixed point).The next cell defines three helper functions that we will use:- `my_fp_single(pars, r_init)` uses a root-finding algorithm to locate a fixed point near a given initial value- `check_fp_single(pars, x_fp)` verifies that the values of $r_{\rm fp}$ for which $\displaystyle{\frac{dr}{dt}} = 0$ are the true fixed points- `my_fp_finder(pars, r_init_vector)` accepts an array of initial values and, using the above two functions, returns the fixed points that it can verify
###Code
# @markdown *Execute this cell to enable the fixed point functions*
def my_fp_single(pars, r_init):
"""
Calculate the fixed point through drE/dt=0
Args:
pars : Parameter dictionary
r_init : Initial value used for scipy.optimize function
Returns:
x_fp : value of fixed point
"""
# get the parameters
a, theta = pars['a'], pars['theta']
w = pars['w']
I_ext = pars['I_ext']
# define the right hand of E dynamics
def my_WCr(x):
r = x
drdt = (-r + F(w * r + I_ext, a, theta))
y = np.array(drdt)
return y
x0 = np.array(r_init)
x_fp = opt.root(my_WCr, x0).x
return x_fp
def check_fp_single(pars, x_fp, mytol=1e-4):
"""
Verify |dr/dt| < mytol
Args:
pars : Parameter dictionary
fp : value of fixed point
mytol : tolerance, default as 10^{-4}
Returns :
Whether it is a correct fixed point: True/False
"""
a, theta = pars['a'], pars['theta']
w = pars['w']
I_ext = pars['I_ext']
# calculate Equation(3)
y = x_fp - F(w * x_fp + I_ext, a, theta)
# Here we set tolerance as 10^{-4}
return np.abs(y) < mytol
def my_fp_finder(pars, r_init_vector, mytol=1e-4):
"""
Calculate the fixed point(s) through drE/dt=0
Args:
pars : Parameter dictionary
r_init_vector : Initial values used for scipy.optimize function
mytol : tolerance for checking fixed point, default as 10^{-4}
Returns:
x_fps : values of fixed points
"""
x_fps = []
correct_fps = []
for r_init in r_init_vector:
x_fp = my_fp_single(pars, r_init)
if check_fp_single(pars, x_fp, mytol):
x_fps.append(x_fp)
return x_fps
help(my_fp_finder)
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
drdt = compute_drdt(r, **pars)
#############################################################################
# TODO for students:
# Define initial values close to the intersections of drdt and y=0
# (How many initial values? Hint: How many times do the two lines intersect?)
# Calculate the fixed point with these initial values and plot them
#############################################################################
r_init_vector = [...]
# Uncomment to test your values
# x_fps = my_fp_finder(pars, r_init_vector)
# plot_dr_r(r, drdt, x_fps)
# to_remove solution
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
drdt = compute_drdt(r, **pars)
r_init_vector = [0, .4, .9]
x_fps = my_fp_finder(pars, r_init_vector)
with plt.xkcd():
plot_dr_r(r, drdt, x_fps)
###Output
_____no_output_____
###Markdown
Interactive Demo: fixed points as a function of recurrent and external inputs.You can now explore how the previous plot changes when the recurrent coupling $w$ and the external input $I_{\text{ext}}$ take different values. How does the number of fixed points change?
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
def plot_intersection_single(w, I_ext):
# set your parameters
pars = default_pars_single(w=w, I_ext=I_ext)
# find fixed points
r_init_vector = [0, .4, .9]
x_fps = my_fp_finder(pars, r_init_vector)
# plot
r = np.linspace(0, 1., 1000)
drdt = (-r + F(w * r + I_ext, pars['a'], pars['theta'])) / pars['tau']
plot_dr_r(r, drdt, x_fps)
_ = widgets.interact(plot_intersection_single, w=(1, 7, 0.2),
I_ext=(0, 3, 0.1))
# to_remove explanation
"""
Discussion:
The fixed points of the single excitatory neuron population are determined by both
recurrent connections w and external input I_ext. In a previous interactive demo
we saw how the system showed two different steady-states when w = 0. But when w
does not equal 0, for some range of w the system shows three fixed points (the middle
one being unstable) and the steady state depends on the initial conditions (i.e.,
r at time zero).
More on this will be explained in the next section.
""";
###Output
_____no_output_____
###Markdown
--- SummaryIn this tutorial, we have investigated the dynamics of a rate-based single population of neurons.We learned about:- The effect of the input parameters and the time constant of the network on the dynamics of the population.- How to find the fixed point(s) of the system.Next, we cover two bonus, but important, concepts in dynamical system analysis and simulation. If you have time left, watch the next video and proceed to solve the exercises. You will learn:- How to determine the stability of a fixed point by linearizing the system.- How to add realistic inputs to our model. --- Bonus 1: Stability of a fixed point
###Code
# @title Video 3: Stability of fixed points
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="KKMlWWU83Jg", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=KKMlWWU83Jg
###Markdown
Initial values and trajectoriesHere, let us first set $w=5.0$ and $I_{\text{ext}}=0.5$, and investigate the dynamics of $r(t)$ starting with different initial values $r(0) \equiv r_{\text{init}}$. We will plot the trajectories of $r(t)$ with $r_{\text{init}} = 0.0, 0.1, 0.2,..., 0.9$.
###Code
# @markdown Execute this cell to see the trajectories!
pars = default_pars_single()
pars['w'] = 5.0
pars['I_ext'] = 0.5
plt.figure(figsize=(8, 5))
for ie in range(10):
pars['r_init'] = 0.1 * ie # set the initial value
r = simulate_single(pars) # run the simulation
# plot the activity with given initial
plt.plot(pars['range_t'], r, 'b', alpha=0.1 + 0.1 * ie,
label=r'r$_{\mathrm{init}}$=%.1f' % (0.1 * ie))
plt.xlabel('t (ms)')
plt.title('Two steady states?')
plt.ylabel(r'$r$(t)')
plt.legend(loc=[1.01, -0.06], fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
Interactive Demo: dynamics as a function of the initial valueLet's now set $r_{\rm init}$ to a value of your choice in this demo. How does the solution change? What do you observe?
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
pars = default_pars_single()
pars['w'] = 5.0
pars['I_ext'] = 0.5
def plot_single_diffEinit(r_init):
pars['r_init'] = r_init
r = simulate_single(pars)
plt.figure()
plt.plot(pars['range_t'], r, 'b', zorder=1)
plt.plot(0, r[0], 'bo', alpha=0.7, zorder=2)
plt.xlabel('t (ms)', fontsize=16)
plt.ylabel(r'$r(t)$', fontsize=16)
plt.ylim(0, 1.0)
plt.show()
_ = widgets.interact(plot_single_diffEinit, r_init=(0, 1, 0.02))
"""
Discussion:
To better appreciate what is happening here, you should go back to the previous
interactive demo. Set w = 5 and I_ext = 0.5.
You will find that there are three fixed points of the system for these values of
w and I_ext. Now, choose the initial value in this demo and see in which direction
the system output moves. When r_init is in the vicinity of the leftmost fixed point,
it moves towards the leftmost fixed point. When r_init is in the vicinity of the
rightmost fixed point, it moves towards the rightmost fixed point.
""";
###Output
_____no_output_____
###Markdown
Stability analysis via linearization of the dynamicsJust like Equation $1$ in the case ($w=0$) discussed above, a generic linear system $$\frac{dx}{dt} = \lambda (x - b),$$ has a fixed point for $x=b$. The analytical solution of such a system can be found to be:$$x(t) = b + \big{(} x(0) - b \big{)} \text{e}^{\lambda t}.$$ Now consider a small perturbation of the activity around the fixed point: $x(0) = b+ \epsilon$, where $|\epsilon| \ll 1$. Will the perturbation $\epsilon(t)$ grow with time or will it decay to the fixed point? The evolution of the perturbation with time can be written, using the analytical solution for $x(t)$, as: $$\epsilon (t) = x(t) - b = \epsilon \text{e}^{\lambda t}$$- if $\lambda < 0$, $\epsilon(t)$ decays to zero, $x(t)$ will still converge to $b$ and the fixed point is "**stable**".- if $\lambda > 0$, $\epsilon(t)$ grows with time, $x(t)$ will leave the fixed point $b$ exponentially, and the fixed point is, therefore, "**unstable**" . Compute the stability of Equation $1$Similar to what we did in the linear system above, in order to determine the stability of a fixed point $r^{*}$ of the excitatory population dynamics, we perturb Equation (1) around $r^{*}$ by $\epsilon$, i.e. $r = r^{*} + \epsilon$. We can plug in Equation (1) and obtain the equation determining the time evolution of the perturbation $\epsilon(t)$:\begin{align}\tau \frac{d\epsilon}{dt} \approx -\epsilon + w F'(w\cdot r^{*} + I_{\text{ext}};a,\theta) \epsilon \end{align}where $F'(\cdot)$ is the derivative of the transfer function $F(\cdot)$. We can rewrite the above equation as:\begin{align}\frac{d\epsilon}{dt} \approx \frac{\epsilon}{\tau }[-1 + w F'(w\cdot r^* + I_{\text{ext}};a,\theta)] \end{align}That is, as in the linear system above, the value of$$\lambda = [-1+ wF'(w\cdot r^* + I_{\text{ext}};a,\theta)]/\tau \qquad (4)$$determines whether the perturbation will grow or decay to zero, i.e., $\lambda$ defines the stability of the fixed point. This value is called the **eigenvalue** of the dynamical system. Exercise 4: Compute $dF$ and EigenvalueThe derivative of the sigmoid transfer function is:\begin{align} \frac{dF}{dx} & = \frac{d}{dx} (1+\exp\{-a(x-\theta)\})^{-1} \\& = a\exp\{-a(x-\theta)\} (1+\exp\{-a(x-\theta)\})^{-2}. \qquad (5)\end{align}Let's now find the expression for the derivative $\displaystyle{\frac{dF}{dx}}$ in the following cell and plot it.
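As a quick sanity check of Equation (5): at the threshold, $x=\theta$, the exponential equals one, so $$\left.\frac{dF}{dx}\right|_{x=\theta} = a \cdot 1 \cdot (1+1)^{-2} = \frac{a}{4},$$ which for the default gain $a=1.2$ gives a maximal slope of $0.3$ at threshold.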
###Code
def dF(x, a, theta):
"""
  Derivative of the population activation function.
Args:
x : the population input
a : the gain of the function
theta : the threshold of the function
Returns:
    dFdx : the derivative of the population activation function evaluated at input x
"""
###########################################################################
# TODO for students: compute dFdx ##
raise NotImplementedError("Student excercise: compute the deravitive of F")
###########################################################################
# Calculate the population activation
dFdx = ...
return dFdx
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
# Uncomment below to test your function
# df = dF(x, pars['a'], pars['theta'])
# plot_dFdt(x, df)
# to_remove solution
def dF(x, a, theta):
"""
  Derivative of the population activation function.
Args:
x : the population input
a : the gain of the function
theta : the threshold of the function
Returns:
    dFdx : the derivative of the population activation function evaluated at input x
"""
# Calculate the population activation
dFdx = a * np.exp(-a * (x - theta)) * (1 + np.exp(-a * (x - theta)))**-2
return dFdx
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
df = dF(x, pars['a'], pars['theta'])
with plt.xkcd():
plot_dFdt(x, df)
###Output
_____no_output_____
###Markdown
Exercise 5: Compute eigenvaluesAs discussed above, for the case with $w=5.0$ and $I_{\text{ext}}=0.5$, the system displays **three** fixed points. However, when we simulated the dynamics and varied the initial conditions $r_{\rm init}$, we could only obtain **two** steady states. In this exercise, we will now check the stability of each of the three fixed points by calculating the corresponding eigenvalues with the function `eig_single`. Check the sign of each eigenvalue (i.e., stability of each fixed point). How many of the fixed points are stable?Note that the expression of the eigenvalue at fixed point $r^*$$$\lambda = [-1+ wF'(w\cdot r^* + I_{\text{ext}};a,\theta)]/\tau$$
###Code
def eig_single(pars, fp):
"""
Args:
pars : Parameter dictionary
fp : fixed point r_fp
Returns:
    eig : eigenvalue of the linearized system
"""
# get the parameters
tau, a, theta = pars['tau'], pars['a'], pars['theta']
w, I_ext = pars['w'], pars['I_ext']
print(tau, a, theta, w, I_ext)
#################################################
## TODO for students: compute eigenvalue ##
  raise NotImplementedError("Student exercise: compute the eigenvalue")
#################################################
# Compute the eigenvalue
eig = ...
return eig
pars = default_pars_single()
pars['w'] = 5.0
pars['I_ext'] = 0.5
# Uncomment below lines after completing the eig_single function.
# Find the eigenvalues for all fixed points of Exercise 2
# x_fp_1 = my_fp_single(pars, 0.).item()
# eig_fp_1 = eig_single(pars, x_fp_1).item()
# print(f'Fixed point1 at {x_fp_1:.3f} with Eigenvalue={eig_fp_1:.3f}')
# x_fp_2 = my_fp_single(pars, 0.4).item()
# eig_fp_2 = eig_single(pars, x_fp_2).item()
# print(f'Fixed point2 at {x_fp_2:.3f} with Eigenvalue={eig_fp_2:.3f}')
# x_fp_3 = my_fp_single(pars, 0.9).item()
# eig_fp_3 = eig_single(pars, x_fp_3).item()
# print(f'Fixed point3 at {x_fp_3:.3f} with Eigenvalue={eig_fp_3:.3f}')
###Output
_____no_output_____
###Markdown
**SAMPLE OUTPUT**```Fixed point1 at 0.042 with Eigenvalue=-0.583Fixed point2 at 0.447 with Eigenvalue=0.498Fixed point3 at 0.900 with Eigenvalue=-0.626```
###Code
# to_remove solution
def eig_single(pars, fp):
"""
Args:
pars : Parameter dictionary
fp : fixed point r_fp
Returns:
    eig : eigenvalue of the linearized system
"""
# get the parameters
tau, a, theta = pars['tau'], pars['a'], pars['theta']
w, I_ext = pars['w'], pars['I_ext']
print(tau, a, theta, w, I_ext)
# Compute the eigenvalue
eig = (-1. + w * dF(w * fp + I_ext, a, theta)) / tau
return eig
pars = default_pars_single()
pars['w'] = 5.0
pars['I_ext'] = 0.5
# Find the eigenvalues for all fixed points of Exercise 2
x_fp_1 = my_fp_single(pars, 0.).item()
eig_fp_1 = eig_single(pars, x_fp_1).item()
print(f'Fixed point1 at {x_fp_1:.3f} with Eigenvalue={eig_fp_1:.3f}')
x_fp_2 = my_fp_single(pars, 0.4).item()
eig_fp_2 = eig_single(pars, x_fp_2).item()
print(f'Fixed point2 at {x_fp_2:.3f} with Eigenvalue={eig_fp_2:.3f}')
x_fp_3 = my_fp_single(pars, 0.9).item()
eig_fp_3 = eig_single(pars, x_fp_3).item()
print(f'Fixed point3 at {x_fp_3:.3f} with Eigenvalue={eig_fp_3:.3f}')
###Output
1.0 1.2 2.8 5.0 0.5
Fixed point1 at 0.042 with Eigenvalue=-0.583
1.0 1.2 2.8 5.0 0.5
Fixed point2 at 0.447 with Eigenvalue=0.498
1.0 1.2 2.8 5.0 0.5
Fixed point3 at 0.900 with Eigenvalue=-0.626
###Markdown
Think! Throughout the tutorial, we have assumed $w > 0$, i.e., we considered a single population of **excitatory** neurons. What do you think will be the behavior of a population of inhibitory neurons, i.e., where $w > 0$ is replaced by $w < 0$?
###Code
# to_remove explanation
"""
Discussion:
You can check this by going back to the second-to-last interactive demo and setting the
weight to w < 0. You will notice that the system has only one fixed point, and that
is at zero. For these dynamics, the system will eventually converge
to zero. But try it out.
""";
###Output
_____no_output_____
###Markdown
--- Bonus 2: Noisy input drives the transition between two stable states Ornstein-Uhlenbeck (OU) processAs discussed in several previous tutorials, the OU process is usually used to generate a noisy input into the neuron. The OU input $\eta(t)$ follows: $$\tau_\eta \frac{d}{dt}\eta(t) = -\eta (t) + \sigma_\eta\sqrt{2\tau_\eta}\xi(t)$$Execute the following function `my_OU(pars, sig, myseed=False)` to generate an OU process.
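In the cell below, this SDE is integrated with a simple Euler–Maruyama step of size $\Delta t$; the update used is $$\eta[k+1] = \eta[k] + \frac{\Delta t}{\tau_\eta}\big(0 - \eta[k]\big) + \sigma_\eta \sqrt{\frac{2\Delta t}{\tau_\eta}}\,\xi[k],$$ where the $\xi[k]$ are independent standard normal samples.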
###Code
# @title OU process `my_OU(pars, sig, myseed=False)`
# @markdown Make sure you execute this cell to visualize the noise!
def my_OU(pars, sig, myseed=False):
"""
  A function that generates an Ornstein-Uhlenbeck process
Args:
pars : parameter dictionary
    sig : noise amplitude
myseed : random seed. int or boolean
Returns:
I : Ornstein-Uhlenbeck input current
"""
# Retrieve simulation parameters
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
tau_ou = pars['tau_ou'] # [ms]
# set random seed
if myseed:
np.random.seed(seed=myseed)
else:
np.random.seed()
# Initialize
noise = np.random.randn(Lt)
I_ou = np.zeros(Lt)
I_ou[0] = noise[0] * sig
# generate OU
for it in range(Lt - 1):
I_ou[it + 1] = (I_ou[it]
+ dt / tau_ou * (0. - I_ou[it])
+ np.sqrt(2 * dt / tau_ou) * sig * noise[it + 1])
return I_ou
pars = default_pars_single(T=100)
pars['tau_ou'] = 1. # [ms]
sig_ou = 0.1
I_ou = my_OU(pars, sig=sig_ou, myseed=2020)
plt.figure(figsize=(10, 4))
plt.plot(pars['range_t'], I_ou, 'r')
plt.xlabel('t (ms)')
plt.ylabel(r'$I_{\mathrm{OU}}$')
plt.show()
###Output
_____no_output_____
###Markdown
Example: Up-Down transitionIn the presence of two or more fixed points, noisy inputs can drive a transition between the fixed points! Here, we stimulate an E population for 1,000 ms applying OU inputs.
###Code
# @title Simulation of an E population with OU inputs
# @markdown Make sure you execute this cell to spot the Up-Down states!
pars = default_pars_single(T=1000)
pars['w'] = 5.0
sig_ou = 0.7
pars['tau_ou'] = 1. # [ms]
pars['I_ext'] = 0.56 + my_OU(pars, sig=sig_ou, myseed=2020)
r = simulate_single(pars)
plt.figure(figsize=(10, 4))
plt.plot(pars['range_t'], r, 'b', alpha=0.8)
plt.xlabel('t (ms)')
plt.ylabel(r'$r(t)$')
plt.show()
###Output
_____no_output_____
###Markdown
Neuromatch Academy: Week 3, Day 2, Tutorial 1 Neuronal Network Dynamics: Neural Rate Models__Content creators:__ Qinglong Gu, Songtin Li, Arvind Kumar, John Murray, Julijana Gjorgjieva __Content reviewers:__ Spiros Chavlis, Lorenzo Fontolan, Richard Gao, Maryam Vaziri-Pashkam,Michael Waskom --- Tutorial ObjectivesThe brain is a complex system, not because it is composed of a large number of diverse types of neurons, but mainly because of how neurons are connected to each other. The brain is indeed a network of highly specialized neuronal networks. The activity of a neural network constantly evolves in time. For this reason, neurons can be modeled as dynamical systems. The dynamical system approach is only one of the many modeling approaches that computational neuroscientists have developed (other points of view include information processing, statistical models, etc.). How the dynamics of neuronal networks affect the representation and processing of information in the brain is an open question. However, signatures of altered brain dynamics present in many brain diseases (e.g., in epilepsy or Parkinson's disease) tell us that it is crucial to study network activity dynamics if we want to understand the brain.In this tutorial, we will simulate and study one of the simplest models of biological neuronal networks. Instead of modeling and simulating individual excitatory neurons (e.g., LIF models that you implemented yesterday), we will treat them as a single homogeneous population and approximate their dynamics using a single one-dimensional equation describing the evolution of their average spiking rate in time.In this tutorial, we will learn how to build a firing rate model of a single population of excitatory neurons. **Steps:**- Write the equation for the firing rate dynamics of a 1D excitatory population.- Visualize the response of the population as a function of parameters such as threshold level and gain, using the frequency-current (F-I) curve.- Numerically simulate the dynamics of the excitatory population and find the fixed points of the system. - Investigate the stability of the fixed points by linearizing the dynamics around them. --- Setup
###Code
# Imports
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt # root-finding algorithm
# @title Figure Settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
def plot_fI(x, f):
plt.figure(figsize=(6, 4)) # plot the figure
plt.plot(x, f, 'k')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('F(x)', fontsize=14)
plt.show()
def plot_dr_r(r, drdt):
plt.figure()
plt.plot(r, drdt, 'k')
plt.plot(r, 0. * r, 'k--')
plt.xlabel(r'$r$')
plt.ylabel(r'$\frac{dr}{dt}$', fontsize=20)
plt.ylim(-0.1, 0.1)
def plot_dFdt(x, dFdt):
plt.figure()
plt.plot(x, dFdt, 'r')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('dF(x)', fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
Section 1: Neuronal network dynamics
###Code
# @title Video 1: Dynamic networks
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="p848349hPyw", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=p848349hPyw
###Markdown
Section 1.1: Dynamics of a single excitatory populationIndividual neurons respond by spiking. When we average the spikes of neurons in a population, we can define the average firing activity of the population. In this model, we are interested in how the population-averaged firing varies as a function of time and network parameters. Mathematically, we can describe the firing rate dynamic as:\begin{align}\tau \frac{dr}{dt} &= -r + F(w\cdot r + I_{\text{ext}}) \quad\qquad (1)\end{align}$r(t)$ represents the average firing rate of the excitatory population at time $t$, $\tau$ controls the timescale of the evolution of the average firing rate, $w$ denotes the strength (synaptic weight) of the recurrent input to the population, $I_{\text{ext}}$ represents the external input, and the transfer function $F(\cdot)$ (which can be related to f-I curve of individual neurons described in the next sections) represents the population activation function in response to all received inputs.To start building the model, please execute the cell below to initialize the simulation parameters.
###Code
# @title Default parameters for a single excitatory population model
def default_pars_single(**kwargs):
pars = {}
# Excitatory parameters
pars['tau'] = 1. # Timescale of the E population [ms]
pars['a'] = 1.2 # Gain of the E population
pars['theta'] = 2.8 # Threshold of the E population
# Connection strength
pars['w'] = 0. # E to E, we first set it to 0
# External input
pars['I_ext'] = 0.
# simulation parameters
pars['T'] = 20. # Total duration of simulation [ms]
pars['dt'] = .1 # Simulation time step [ms]
pars['r_init'] = 0.2 # Initial value of E
# External parameters if any
for k in kwargs:
pars[k] = kwargs[k]
# Vector of discretized time points [ms]
pars['range_t'] = np.arange(0, pars['T'], pars['dt'])
return pars
###Output
_____no_output_____
###Markdown
You can use:- `pars = default_pars_single()` to get all the parameters, and then you can execute `print(pars)` to check these parameters. - `pars = default_pars_single(T=T_sim, dt=time_step)` to set new simulation time and time step- After `pars = default_pars_single()`, use `pars['New_para'] = value` to add an new parameter with its value Section 1.2: F-I curvesIn electrophysiology, a neuron is often characterized by its spike rate output in response to input currents. This is often called the **F-I** curve, denoting the output spike frequency (**F**) in response to different injected currents (**I**). We estimated this for an LIF neuron in yesterday's tutorial.The transfer function $F(\cdot)$ in Equation $1$ represents the gain of the population as a function of the total input. The gain is often modeled as a sigmoidal function, i.e., more input drive leads to a nonlinear increase in the population firing rate. The output firing rate will eventually saturate for high input values. A sigmoidal $F(\cdot)$ is parameterized by its gain $a$ and threshold $\theta$.$$ F(x;a,\theta) = \frac{1}{1+\text{e}^{-a(x-\theta)}} - \frac{1}{1+\text{e}^{a\theta}} \quad(2)$$The argument $x$ represents the input to the population. Note that the second term is chosen so that $F(0;a,\theta)=0$.Many other transfer functions (generally monotonic) can be also used. Examples are the rectified linear function $ReLU(x)$ or the hyperbolic tangent $tanh(x)$. Exercise 1: Implement F-I curve Let's first investigate the activation functions before simulating the dynamics of the entire population. In this exercise, you will implement a sigmoidal **F-I** curve or transfer function $F(x)$, with gain $a$ and threshold level $\theta$ as parameters.
###Code
def F(x, a, theta):
"""
Population activation function.
Args:
x (float): the population input
a (float): the gain of the function
theta (float): the threshold of the function
Returns:
float: the population activation response F(x) for input x
"""
#################################################
## TODO for students: compute f = F(x) ##
# Fill out function and remove
raise NotImplementedError("Student excercise: implement the f-I function")
#################################################
# add the expression of f = F(x)
f = ...
return f
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
# Uncomment below to test your function
# f = F(x, pars['a'], pars['theta'])
# plot_fI(x, f)
# to_remove solution
def F(x, a, theta):
"""
Population activation function.
Args:
x (float): the population input
a (float): the gain of the function
theta (float): the threshold of the function
Returns:
float: the population activation response F(x) for input x
"""
# add the expression of f = F(x)
f = (1 + np.exp(-a * (x - theta)))**-1 - (1 + np.exp(a * theta))**-1
return f
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
# Uncomment below to test your function
f = F(x, pars['a'], pars['theta'])
with plt.xkcd():
plot_fI(x, f)
###Output
_____no_output_____
###Markdown
Interactive Demo: Parameter exploration of F-I curveHere's an interactive demo that shows how the F-I curve changes for different values of the gain and threshold parameters.**Remember to enable the demo by running the cell.**
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
def interactive_plot_FI(a, theta):
"""
Population activation function.
  Expects:
a : the gain of the function
theta : the threshold of the function
Returns:
    plot the F-I curve with the given parameters
"""
# set the range of input
x = np.arange(0, 10, .1)
plt.figure()
plt.plot(x, F(x, a, theta), 'k')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('F(x)', fontsize=14)
plt.show()
_ = widgets.interact(interactive_plot_FI, a=(0.3, 3, 0.3), theta=(2, 4, 0.2))
###Output
_____no_output_____
###Markdown
Section 1.3: Simulation scheme of E dynamicsBecause $F(\cdot)$ is a nonlinear function, the exact solution of Equation $1$ cannot be determined via analytical methods. Therefore, numerical methods must be used to find the solution. In practice, the derivative on the left-hand side of Equation $1$ can be approximated using the Euler method on a time-grid of stepsize $\Delta t$:\begin{align}&\frac{dr}{dt} \approx \frac{r[k+1]-r[k]}{\Delta t} \end{align}where $r[k] = r(k\Delta t)$. Thus,$$\Delta r[k] = \frac{\Delta t}{\tau}[-r[k] + F(w\cdot r[k] + I_{\text{ext}}[k];a,\theta)]$$Hence, Equation (1) is updated at each time step by:$$r[k+1] = r[k] + \Delta r[k]$$**_Please execute the following cell to enable the WC simulator_**
###Code
# @title Single population rate model simulator: `simulate_single`
def simulate_single(pars):
"""
Simulate an excitatory population of neurons
Args:
pars : Parameter dictionary
Returns:
rE : Activity of excitatory population (array)
Example:
pars = default_pars_single()
r = simulate_single(pars)
"""
# Set parameters
tau, a, theta = pars['tau'], pars['a'], pars['theta']
w = pars['w']
I_ext = pars['I_ext']
r_init = pars['r_init']
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
# Initialize activity
r = np.zeros(Lt)
r[0] = r_init
I_ext = I_ext * np.ones(Lt)
# Update the E activity
for k in range(Lt - 1):
dr = dt / tau * (-r[k] + F(w * r[k] + I_ext[k], a, theta))
r[k+1] = r[k] + dr
return r
print(help(simulate_single))
###Output
Help on function simulate_single in module __main__:
simulate_single(pars)
Simulate an excitatory population of neurons
Args:
pars : Parameter dictionary
Returns:
rE : Activity of excitatory population (array)
Example:
pars = default_pars_single()
r = simulate_single(pars)
None
###Markdown
Interactive Demo: Parameter Exploration of single population dynamicsNote that $w=0$, as in the default setting, means no recurrent input to the neuron population in Equation (1). Hence, the dynamics is entirely determined by the external input $I_{\text{ext}}$. Try to explore how $r_{\text{sim}}(t)$ changes with different $I_{\text{ext}}$ and $\tau$ parameter values, and investigate the relationship between $F(I_{\text{ext}}; a, \theta)$ and the steady value of $r(t)$. Note that, $r_{\rm ana}(t)$ denotes the analytical solution.
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
# get default parameters
pars = default_pars_single(T=20.)
def Myplot_E_diffI_difftau(I_ext, tau):
# set external input and time constant
pars['I_ext'] = I_ext
pars['tau'] = tau
# simulation
r = simulate_single(pars)
# Analytical Solution
r_ana = (pars['r_init']
+ (F(I_ext, pars['a'], pars['theta'])
- pars['r_init']) * (1. - np.exp(-pars['range_t'] / pars['tau'])))
# plot
plt.figure()
plt.plot(pars['range_t'], r, 'b', label=r'$r_{\mathrm{sim}}$(t)', alpha=0.5,
zorder=1)
plt.plot(pars['range_t'], r_ana, 'b--', lw=5, dashes=(2, 2),
label=r'$r_{\mathrm{ana}}$(t)', zorder=2)
plt.plot(pars['range_t'],
F(I_ext, pars['a'], pars['theta']) * np.ones(pars['range_t'].size),
'k--', label=r'$F(I_{\mathrm{ext}})$')
plt.xlabel('t (ms)', fontsize=16.)
plt.ylabel('Activity r(t)', fontsize=16.)
plt.legend(loc='best', fontsize=14.)
plt.show()
_ = widgets.interact(Myplot_E_diffI_difftau, I_ext=(0.0, 10., 1.),
tau=(1., 5., 0.2))
###Output
_____no_output_____
###Markdown
Think!Above, we have numerically solved a system driven by a positive input and that, if $w_{EE} \neq 0$, receives an excitatory recurrent input (**try changing the value of $w_{EE}$ to a positive number**). Yet, $r_E(t)$ either decays to zero or reaches a fixed non-zero value.- Why doesn't the solution of the system "explode" in a finite time? In other words, what guarantees that $r_E$(t) stays finite? - Which parameter would you change in order to increase the maximum value of the response? Section 2: Fixed points of the single population system
###Code
# @title Video 2: Fixed point
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Ox3ELd1UFyo", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=Ox3ELd1UFyo
###Markdown
As you varied the two parameters in the last Interactive Demo, you noticed that, while at first the system output quickly changes, with time, it reaches its maximum/minimum value and does not change anymore. The value eventually reached by the system is called the **steady state** of the system, or the **fixed point**. Essentially, in the steady states the derivative with respect to time of the activity ($r$) is zero, i.e. $\displaystyle \frac{dr}{dt}=0$. We can find the steady state of Equation (1) by setting $\displaystyle{\frac{dr}{dt}=0}$ and solving for $r$:$$-r_{\text{steady}} + F(w\cdot r_{\text{steady}} + I_{\text{ext}};a,\theta) = 0, \qquad (3)$$When it exists, the solution of Equation (3) defines a **fixed point** of the dynamical system in Equation (1). Note that if $F(x)$ is nonlinear, it is not always possible to find an analytical solution, but the solution can be found via numerical simulations, as we will do later.From the Interactive Demo, one could also notice that the value of $\tau$ influences how quickly the activity will converge to the steady state from its initial value. In the specific case of $w=0$, we can also analytically compute the solution of Equation (1) (i.e., the thick blue dashed line) and deduce the role of $\tau$ in determining the convergence to the fixed point: $$\displaystyle{r(t) = \big{[}F(I_{\text{ext}};a,\theta) -r(t=0)\big{]} (1-\text{e}^{-\frac{t}{\tau}})} + r(t=0)$$ \\We can now numerically calculate the fixed point with the `scipy.optimize.root` function._(Recall that at the very beginning, we `import scipy.optimize as opt`)_.\\Please execute the cell below to define the functions `my_fp_single` and `check_fp_single`
###Code
# @title Function of calculating the fixed point
# @markdown Make sure you execute this cell to enable the function!
def my_fp_single(pars, r_init):
"""
Calculate the fixed point through drE/dt=0
Args:
pars : Parameter dictionary
r_init : Initial value used for scipy.optimize function
Returns:
x_fp : value of fixed point
"""
# get the parameters
a, theta = pars['a'], pars['theta']
w = pars['w']
I_ext = pars['I_ext']
# define the right hand of E dynamics
def my_WCr(x):
r = x
drdt = (-r + F(w * r + I_ext, a, theta))
y = np.array(drdt)
return y
x0 = np.array(r_init)
x_fp = opt.root(my_WCr, x0).x
return x_fp
print(help(my_fp_single))
def check_fp_single(pars, x_fp, mytol=1e-4):
"""
Verify |dr/dt| < mytol
Args:
pars : Parameter dictionary
fp : value of fixed point
mytol : tolerance, default as 10^{-4}
Returns :
Whether it is a correct fixed point: True/False
"""
a, theta = pars['a'], pars['theta']
w = pars['w']
I_ext = pars['I_ext']
# calculate Equation(3)
y = x_fp - F(w * x_fp + I_ext, a, theta)
# Here we set tolerance as 10^{-4}
return np.abs(y) < mytol
print(help(check_fp_single))
###Output
Help on function my_fp_single in module __main__:
my_fp_single(pars, r_init)
Calculate the fixed point through drE/dt=0
Args:
pars : Parameter dictionary
r_init : Initial value used for scipy.optimize function
Returns:
x_fp : value of fixed point
None
Help on function check_fp_single in module __main__:
check_fp_single(pars, x_fp, mytol=0.0001)
Verify |dr/dt| < mytol
Args:
pars : Parameter dictionary
fp : value of fixed point
mytol : tolerance, default as 10^{-4}
Returns :
Whether it is a correct fixed point: True/False
None
###Markdown
Exercise 2: Visualization of the fixed pointWhen it is not possible to find the solution for Equation (3) analytically, a graphical approach can be taken. To that end, it is useful to plot $\displaystyle{\frac{dr}{dt}}$ as a function of $r$. The values of $r$ for which the plotted function crosses zero on the y axis correspond to fixed points. Here, let us, for example, set $w=5.0$ and $I^{\text{ext}}=0.5$. From Equation (1), you can obtain$$\frac{dr}{dt} = [-r + F(w\cdot r + I^{\text{ext}})]/\tau $$Then, plot $dr/dt$ as a function of $r$, and check for the presence of fixed points. Finally, try to find the fixed points using the previously defined function `my_fp_single(pars, r_init)` with proper initial values ($r_{\text{init}}$). You can use the previously defined function `check_fp_single(pars, x_fp)` to verify that the values of $r_{\rm fp}$ for which $\displaystyle{\frac{dr}{dt}} = 0$ are the true fixed points. From the line $\displaystyle{\frac{dr}{dt}}$ plotted above, the proper initial values can be chosen as the values close to where the line crosses zero on the y axis (real fixed point).
###Code
pars = default_pars_single() # get default parameters
# set your external input and wEE
pars['I_ext'] = 0.5
pars['w'] = 5.0
r = np.linspace(0, 1, 1000) # give the values of r
# Calculate drEdt
# drdt = ...
# Uncomment this to plot the drdt across r
# plot_dr_r(r, drdt)
################################################################
# TODO for students:
# Find the values close to the intersections of drdt and y=0
# as your initial values
# Calculate the fixed points with your initial values, verify them,
# and plot the correct ones
# check if x_fp is the intersection of the lines with the given function
# check_fp_single(pars, x_fp)
# vary different initial values to find the correct fixed point (Should be 3)
# Use blue, red and yellow colors, respectively ('b', 'r', 'y' codenames)
################################################################
# Calculate the first fixed point with your initial value
# x_fp_1 = my_fp_single(pars, ...)
# if check_fp_single(pars, x_fp_1):
# plt.plot(x_fp_1, 0, 'bo', ms=8)
# Calculate the second fixed point with your initial value
# x_fp_2 = my_fp_single(pars, ...)
# if check_fp_single(pars, x_fp_2):
# plt.plot(x_fp_2, 0, 'ro', ms=8)
# Calculate the third fixed point with your initial value
# x_fp_3 = my_fp_single(pars, ...)
# if check_fp_single(pars, x_fp_3):
# plt.plot(x_fp_3, 0, 'yo', ms=8)
# to_remove solution
pars = default_pars_single() # get default parameters
# set your external input and wEE
pars['I_ext'] = 0.5
pars['w'] = 5.0
r = np.linspace(0, 1, 1000) # give the values of r
# Calculate drEdt
drdt = (-r + F(pars['w'] * r + pars['I_ext'],
pars['a'], pars['theta'])) / pars['tau']
with plt.xkcd():
plot_dr_r(r, drdt)
# Calculate the first fixed point with your initial value
x_fp_1 = my_fp_single(pars, 0.)
if check_fp_single(pars, x_fp_1):
plt.plot(x_fp_1, 0, 'bo', ms=8)
# Calculate the second fixed point with your initial value
x_fp_2 = my_fp_single(pars, 0.4)
if check_fp_single(pars, x_fp_2):
plt.plot(x_fp_2, 0, 'ro', ms=8)
# Calculate the third fixed point with your initial value
x_fp_3 = my_fp_single(pars, 0.9)
if check_fp_single(pars, x_fp_3):
plt.plot(x_fp_3, 0, 'yo', ms=8)
plt.show()
###Output
_____no_output_____
###Markdown
Interactive Demo: fixed points as a function of recurrent and external inputs.You can now explore how the previous plot changes when the recurrent coupling $w$ and the external input $I_{\text{ext}}$ take different values.
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
def plot_intersection_single(w, I_ext):
# set your parameters
pars['w'] = w
pars['I_ext'] = I_ext
# note that wEE!=0
if w > 0:
# find fixed point
x_fp_1 = my_fp_single(pars, 0.)
x_fp_2 = my_fp_single(pars, 0.4)
x_fp_3 = my_fp_single(pars, 0.9)
plt.figure()
r = np.linspace(0, 1., 1000)
drdt = (-r + F(w * r + I_ext, pars['a'], pars['theta'])) / pars['tau']
plt.plot(r, drdt, 'k')
plt.plot(r, 0. * r, 'k--')
if check_fp_single(pars, x_fp_1):
plt.plot(x_fp_1, 0, 'bo', ms=8)
if check_fp_single(pars, x_fp_2):
plt.plot(x_fp_2, 0, 'ro', ms=8)
if check_fp_single(pars, x_fp_3):
plt.plot(x_fp_3, 0, 'yo', ms=8)
plt.xlabel(r'$r$', fontsize=14.)
plt.ylabel(r'$\frac{dr}{dt}$', fontsize=20.)
plt.show()
_ = widgets.interact(plot_intersection_single, w=(1, 7, 0.2),
I_ext=(0, 3, 0.1))
###Output
_____no_output_____
###Markdown
--- SummaryIn this tutorial, we have investigated the dynamics of a rate-based single population of neurons.We learned about:- The effect of the input parameters and the time constant of the network on the dynamics of the population.- How to find the fixed point(s) of the system.Next, we cover two bonus, but important, concepts in dynamical system analysis and simulation. If you have time left, watch the next video and proceed to solve the exercises. You will learn:- How to determine the stability of a fixed point by linearizing the system.- How to add realistic inputs to our model. --- Bonus 1: Stability of a fixed point
###Code
# @title Video 3: Stability of fixed points
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="KKMlWWU83Jg", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=KKMlWWU83Jg
###Markdown
Initial values and trajectoriesHere, let us first set $w=5.0$ and $I_{\text{ext}}=0.5$, and investigate the dynamics of $r(t)$ starting with different initial values $r(0) \equiv r_{\text{init}}$. We will plot the trajectories of $r(t)$ with $r_{\text{init}} = 0.0, 0.1, 0.2,..., 0.9$.
###Code
# @title Initial values
# @markdown Make sure you execute this cell to see the trajectories!
pars = default_pars_single()
pars['w'] = 5.0
pars['I_ext'] = 0.5
plt.figure(figsize=(8, 5))
for ie in range(10):
pars['r_init'] = 0.1 * ie # set the initial value
r = simulate_single(pars) # run the simulation
# plot the activity with given initial
plt.plot(pars['range_t'], r, 'b', alpha=0.1 + 0.1 * ie,
label=r'r$_{\mathrm{init}}$=%.1f' % (0.1 * ie))
plt.xlabel('t (ms)')
plt.title('Two steady states?')
plt.ylabel(r'$r$(t)')
plt.legend(loc=[1.01, -0.06], fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
Interactive Demo: dynamics as a function of the initial valueLet's now set $r_{\rm init}$ to a value of your choice in this demo. How does the solution change? What do you observe?
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
pars = default_pars_single()
pars['w'] = 5.0
pars['I_ext'] = 0.5
def plot_single_diffEinit(r_init):
pars['r_init'] = r_init
r = simulate_single(pars)
plt.figure()
plt.plot(pars['range_t'], r, 'b', zorder=1)
plt.plot(0, r[0], 'bo', alpha=0.7, zorder=2)
plt.xlabel('t (ms)', fontsize=16)
plt.ylabel(r'$r(t)$', fontsize=16)
plt.ylim(0, 1.0)
plt.show()
_ = widgets.interact(plot_single_diffEinit, r_init=(0, 1, 0.02))
###Output
_____no_output_____
###Markdown
Stability analysis via linearization of the dynamicsJust like Equation $1$ in the case ($w=0$) discussed above, a generic linear system $$\frac{dx}{dt} = \lambda (x - b),$$ has a fixed point for $x=b$. The analytical solution of such a system can be found to be:$$x(t) = b + \big{(} x(0) - b \big{)} \text{e}^{\lambda t}.$$ Now consider a small perturbation of the activity around the fixed point: $x(0) = b+ \epsilon$, where $|\epsilon| \ll 1$. Will the perturbation $\epsilon(t)$ grow with time or will it decay to the fixed point? The evolution of the perturbation with time can be written, using the analytical solution for $x(t)$, as: $$\epsilon (t) = x(t) - b = \epsilon \text{e}^{\lambda t}$$- if $\lambda < 0$, $\epsilon(t)$ decays to zero, $x(t)$ will still converge to $b$ and the fixed point is "**stable**".- if $\lambda > 0$, $\epsilon(t)$ grows with time, $x(t)$ will leave the fixed point $b$ exponentially, and the fixed point is, therefore, "**unstable**" . Compute the stability of Equation $1$Similar to what we did in the linear system above, in order to determine the stability of a fixed point $r^{*}$ of the excitatory population dynamics, we perturb Equation (1) around $r^{*}$ by $\epsilon$, i.e. $r = r^{*} + \epsilon$. We can plug in Equation (1) and obtain the equation determining the time evolution of the perturbation $\epsilon(t)$:\begin{align}\tau \frac{d\epsilon}{dt} \approx -\epsilon + w F'(w\cdot r^{*} + I_{\text{ext}};a,\theta) \epsilon \end{align}where $F'(\cdot)$ is the derivative of the transfer function $F(\cdot)$. We can rewrite the above equation as:\begin{align}\frac{d\epsilon}{dt} \approx \frac{\epsilon}{\tau }[-1 + w F'(w\cdot r^* + I_{\text{ext}};a,\theta)] \end{align}That is, as in the linear system above, the value of$$\lambda = [-1+ wF'(w\cdot r^* + I_{\text{ext}};a,\theta)]/\tau \qquad (4)$$determines whether the perturbation will grow or decay to zero, i.e., $\lambda$ defines the stability of the fixed point. This value is called the **eigenvalue** of the dynamical system. Exercise 3: Compute $dF$ and EigenvalueThe derivative of the sigmoid transfer function is:\begin{align} \frac{dF}{dx} & = \frac{d}{dx} (1+\exp\{-a(x-\theta)\})^{-1} \\& = a\exp\{-a(x-\theta)\} (1+\exp\{-a(x-\theta)\})^{-2}. \qquad (5)\end{align}Let's now find the expression for the derivative $\displaystyle{\frac{dF}{dx}}$ in the following cell and plot it.
###Code
def dF(x, a, theta):
"""
  Derivative of the population activation function.
Args:
x : the population input
a : the gain of the function
theta : the threshold of the function
Returns:
    dFdx : the derivative of the population activation function evaluated at input x
"""
#################################################
# TODO for students: compute dFdx ##
  raise NotImplementedError("Student exercise: compute the derivative of F")
#################################################
# Calculate the population activation
dFdx = ...
return dFdx
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
# Uncomment below to test your function
# df = dF(x, pars['a'], pars['theta'])
# plot_dFdt(x, df)
# to_remove solution
def dF(x, a, theta):
"""
  Derivative of the population activation function.
Args:
x : the population input
a : the gain of the function
theta : the threshold of the function
Returns:
    dFdx : the derivative of the population activation function evaluated at input x
"""
# Calculate the population activation
dFdx = a * np.exp(-a * (x - theta)) * (1 + np.exp(-a * (x - theta)))**-2
return dFdx
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
df = dF(x, pars['a'], pars['theta'])
with plt.xkcd():
plot_dFdt(x, df)
###Output
_____no_output_____
###Markdown
Exercise 4: Compute eigenvaluesAs discussed above, for the case with $w=5.0$ and $I_{\text{ext}}=0.5$, the system displays **three** fixed points. However, when we simulated the dynamics and varied the initial conditions $r_{\rm init}$, we could only obtain **two** steady states. In this exercise, we will now check the stability of each of the three fixed points by calculating the corresponding eigenvalues with the function `eig_single`. Check the sign of each eigenvalue (i.e., stability of each fixed point). How many of the fixed points are stable?Note that the expression of the eigenvalue at fixed point $r^*$$$\lambda = [-1+ wF'(w\cdot r^* + I_{\text{ext}};a,\theta)]/\tau$$
###Code
def eig_single(pars, fp):
"""
Args:
pars : Parameter dictionary
fp : fixed point r_fp
Returns:
    eig : eigenvalue of the linearized system
"""
# get the parameters
tau, a, theta = pars['tau'], pars['a'], pars['theta']
w, I_ext = pars['w'], pars['I_ext']
print(tau, a, theta, w, I_ext)
#################################################
## TODO for students: compute eigenvalue ##
  raise NotImplementedError("Student exercise: compute the eigenvalue")
#################################################
# Compute the eigenvalue
eig = ...
return eig
pars = default_pars_single()
pars['w'] = 5.0
pars['I_ext'] = 0.5
# Uncomment below lines after completing the eig_single function.
# Find the eigenvalues for all fixed points of Exercise 2
# x_fp_1 = my_fp_single(pars, 0.).item()
# eig_fp_1 = eig_single(pars, x_fp_1).item()
# print(f'Fixed point1 at {x_fp_1:.3f} with Eigenvalue={eig_fp_1:.3f}')
# x_fp_2 = my_fp_single(pars, 0.4).item()
# eig_fp_2 = eig_single(pars, x_fp_2).item()
# print(f'Fixed point2 at {x_fp_2:.3f} with Eigenvalue={eig_fp_2:.3f}')
# x_fp_3 = my_fp_single(pars, 0.9).item()
# eig_fp_3 = eig_single(pars, x_fp_3).item()
# print(f'Fixed point3 at {x_fp_3:.3f} with Eigenvalue={eig_fp_3:.3f}')
###Output
_____no_output_____
###Markdown
**SAMPLE OUTPUT**```Fixed point1 at 0.042 with Eigenvalue=-0.583Fixed point2 at 0.447 with Eigenvalue=0.498Fixed point3 at 0.900 with Eigenvalue=-0.626```
###Code
# to_remove solution
def eig_single(pars, fp):
"""
Args:
pars : Parameter dictionary
fp : fixed point r_fp
Returns:
    eig : eigenvalue of the linearized system
"""
# get the parameters
tau, a, theta = pars['tau'], pars['a'], pars['theta']
w, I_ext = pars['w'], pars['I_ext']
print(tau, a, theta, w, I_ext)
# Compute the eigenvalue
eig = (-1. + w * dF(w * fp + I_ext, a, theta)) / tau
return eig
pars = default_pars_single()
pars['w'] = 5.0
pars['I_ext'] = 0.5
# Find the eigenvalues for all fixed points of Exercise 2
x_fp_1 = my_fp_single(pars, 0.).item()
eig_fp_1 = eig_single(pars, x_fp_1).item()
print(f'Fixed point1 at {x_fp_1:.3f} with Eigenvalue={eig_fp_1:.3f}')
x_fp_2 = my_fp_single(pars, 0.4).item()
eig_fp_2 = eig_single(pars, x_fp_2).item()
print(f'Fixed point2 at {x_fp_2:.3f} with Eigenvalue={eig_fp_2:.3f}')
x_fp_3 = my_fp_single(pars, 0.9).item()
eig_fp_3 = eig_single(pars, x_fp_3).item()
print(f'Fixed point3 at {x_fp_3:.3f} with Eigenvalue={eig_fp_3:.3f}')
###Output
1.0 1.2 2.8 5.0 0.5
Fixed point1 at 0.042 with Eigenvalue=-0.583
1.0 1.2 2.8 5.0 0.5
Fixed point2 at 0.447 with Eigenvalue=0.498
1.0 1.2 2.8 5.0 0.5
Fixed point3 at 0.900 with Eigenvalue=-0.626
###Markdown
Think! Throughout the tutorial, we have assumed $w> 0 $, i.e., we considered a single population of **excitatory** neurons. What do you think will be the behavior of a population of inhibitory neurons, i.e., where $w> 0$ is replaced by $w< 0$? --- Bonus 2: Noisy input drives the transition between two stable states Ornstein-Uhlenbeck (OU) processAs discussed in several previous tutorials, the OU process is usually used to generate a noisy input into the neuron. The OU input $\eta(t)$ follows: $$\tau_\eta \frac{d}{dt}\eta(t) = -\eta (t) + \sigma_\eta\sqrt{2\tau_\eta}\xi(t)$$Execute the following function `my_OU(pars, sig, myseed=False)` to generate an OU process.
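Discretized with the same Euler step used in the helper below (time step $\Delta t = $ `dt`), the update reads $\eta[k+1] = \eta[k] + \frac{\Delta t}{\tau_\eta}\big(0-\eta[k]\big) + \sigma_\eta\sqrt{\frac{2\Delta t}{\tau_\eta}}\,\xi[k+1]$, with $\xi[k+1]$ a standard normal sample; this is exactly the line-by-line update implemented in `my_OU`.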
###Code
# @title OU process `my_OU(pars, sig, myseed=False)`
# @markdown Make sure you execute this cell to visualize the noise!
def my_OU(pars, sig, myseed=False):
"""
  A function that generates an Ornstein-Uhlenbeck process
Args:
pars : parameter dictionary
    sig : noise amplitude
myseed : random seed. int or boolean
Returns:
I : Ornstein-Uhlenbeck input current
"""
# Retrieve simulation parameters
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
tau_ou = pars['tau_ou'] # [ms]
# set random seed
if myseed:
np.random.seed(seed=myseed)
else:
np.random.seed()
# Initialize
noise = np.random.randn(Lt)
I_ou = np.zeros(Lt)
I_ou[0] = noise[0] * sig
# generate OU
for it in range(Lt - 1):
I_ou[it + 1] = (I_ou[it]
+ dt / tau_ou * (0. - I_ou[it])
+ np.sqrt(2 * dt / tau_ou) * sig * noise[it + 1])
return I_ou
pars = default_pars_single(T=100)
pars['tau_ou'] = 1. # [ms]
sig_ou = 0.1
I_ou = my_OU(pars, sig=sig_ou, myseed=2020)
plt.figure(figsize=(10, 4))
plt.plot(pars['range_t'], I_ou, 'r')
plt.xlabel('t (ms)')
plt.ylabel(r'$I_{\mathrm{OU}}$')
plt.show()
###Output
_____no_output_____
###Markdown
Example: Up-Down transitionIn the presence of two or more fixed points, noisy inputs can drive a transition between the fixed points! Here, we stimulate an E population for 1,000 ms applying OU inputs.
###Code
# @title Simulation of an E population with OU inputs
# @markdown Make sure you execute this cell to spot the Up-Down states!
pars = default_pars_single(T=1000)
pars['w'] = 5.0
sig_ou = 0.7
pars['tau_ou'] = 1. # [ms]
pars['I_ext'] = 0.56 + my_OU(pars, sig=sig_ou, myseed=2020)
r = simulate_single(pars)
plt.figure(figsize=(10, 4))
plt.plot(pars['range_t'], r, 'b', alpha=0.8)
plt.xlabel('t (ms)')
plt.ylabel(r'$r(t)$')
plt.show()
###Output
_____no_output_____
###Markdown
Neuromatch Academy: Week 3, Day 2, Tutorial 1 Neuronal Network Dynamics: Neural Rate Models BackgroundThe brain is a complex system, not because it is composed of a large number of diverse types of neurons, but mainly because of how neurons are connected to each other. The brain is a very large network of densely interconnected neurons. The activity of neurons is constantly evolving in time. For this reason, neurons can be modeled as dynamical systems. The dynamical system approach is only one of the many modeling approaches that computational neuroscientists have developed (other points of views include information processing, network science, and statistical models). How the dynamics of neuronal networks affect the representation and processing of information in the brain is an open question. However, signatures of altered brain dynamics present in many brain diseases (e.g., in epilepsy or Parkinson's disease) tell us that it is crucial to study neuronal dynamics if we want to understand the brain.In this tutorial, we will simulate and study one of the simplest models of biological neuronal networks. Instead of modeling and simulating individual excitatory neurons (e.g., LIF models that you implemented yesterday), we will treat them as a single homogeneous population and approximate their dynamics using a single one-dimensional equation describing the evolution of their average spiking rate in time. ObjectivesIn this tutorial we will learn how to build a firing rate model of a single population of excitatory neurons. Steps:- Write the equation for the firing rate dynamics of a 1D excitatory population.- Visualize the response of the population as a function of parameters such as threshold level and gain, using the frequency-current (F-I) curve.- Numerically simulate the dynamics of the excitatory population and find the fixed points of the system. - Investigate the stability of the fixed points by linearizing the dynamics around them. Setup
###Code
# Imports
import matplotlib.pyplot as plt # import matplotlib
import numpy as np # import numpy
import scipy.optimize as opt # import root-finding algorithm
import ipywidgets as widgets # interactive display
#@title Figure Settings
%matplotlib inline
fig_w, fig_h = 6, 4
my_fontsize = 16
my_params = {'axes.labelsize': my_fontsize,
'axes.titlesize': my_fontsize,
'figure.figsize': [fig_w, fig_h],
'font.size': my_fontsize,
'legend.fontsize': my_fontsize-4,
'lines.markersize': 8.,
'lines.linewidth': 2.,
'xtick.labelsize': my_fontsize-2,
'ytick.labelsize': my_fontsize-2}
plt.rcParams.update(my_params)
# @title Helper functions
def plot_fI(x, f):
plt.figure(figsize=(6,4)) # plot the figure
plt.plot(x, f, 'k')
plt.xlabel('x (a.u.)', fontsize=14.)
plt.ylabel('F(x)', fontsize=14.)
plt.show()
#@title Helper functions
def plot_dE_E(E, dEdt):
plt.figure()
  plt.plot(E, dEdt, 'k')
  plt.plot(E, 0.*E, 'k--')
plt.xlabel('E activity')
plt.ylabel(r'$\frac{dE}{dt}$', fontsize=20)
plt.ylim(-0.1, 0.1)
def plot_dFdt(x,dFdt):
plt.figure()
plt.plot(x, dFdt, 'r')
plt.xlabel('x (a.u.)', fontsize=14.)
plt.ylabel('dF(x)', fontsize=14.)
plt.show()
###Output
_____no_output_____
###Markdown
Neuronal network dynamics
###Code
#@title Video: Dynamic networks
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="ZSsAaeaG9ZM", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=ZSsAaeaG9ZM
###Markdown
Dynamics of a single excitatory populationIndividual neurons respond by spiking. When we average the spikes of neurons in a population, we can define the average firing activity of the population. In this model, we are interested in how the population-averaged firing varies as a function of different network parameters.\begin{align}\tau_E \frac{dE}{dt} &= -E + F(w_{EE}E + I^{\text{ext}}_E) \quad\qquad (1)\end{align}$E(t)$ represents the average firing rate of the excitatory population at time $t$, $\tau_E$ controls the timescale of the evolution of the average firing rate, $w_{EE}$ denotes the strength (synaptic weight) of the recurrent excitatory input to the population, $I^{\text{ext}}_E$ represents the external input, and the transfer function $F(\cdot)$ (which can be related to f-I curve of individual neurons described in the next sections) represents the population activation function in response to all received inputs.To start building the model, please execute the cell below to initialize the simulation parameters.
###Code
#@title Default parameters for a single excitatory population model
def default_parsE( **kwargs):
pars = {}
### Excitatory parameters ###
pars['tau_E'] = 1. # Timescale of the E population [ms]
pars['a_E'] = 1.2 # Gain of the E population
pars['theta_E'] = 2.8 # Threshold of the E population
### Connection strength ###
pars['wEE'] = 0. # E to E, we first set it to 0
### External input ###
pars['I_ext_E'] = 0.
### simulation parameters ###
pars['T'] = 20. # Total duration of simulation [ms]
pars['dt'] = .1 # Simulation time step [ms]
pars['E_init'] = 0.2 # Initial value of E
### External parameters if any ###
for k in kwargs:
pars[k] = kwargs[k]
pars['range_t'] = np.arange(0, pars['T'], pars['dt']) # Vector of discretized time points [ms]
return pars
###Output
_____no_output_____
###Markdown
You can use:- `pars = default_parsE()` to get all the parameters, and then you can execute `print(pars)` to check these parameters. - `pars = default_parsE(T=T_sim, dt=time_step)` to set new simulation time and time step- After `pars = default_parsE()`, use `pars['New_para'] = value` to add a new parameter with its value F-I curvesIn electrophysiology, a neuron is often characterized by its spike rate output in response to input currents. This is often called the **F-I** curve, denoting the spike frequency (**F**) in response to different injected currents (**I**). We estimated this for an LIF neuron in yesterday's tutorial.The transfer function $F(\cdot)$ in Equation (1) represents the gain of the population as a function of the total input. The gain is often modeled as a sigmoidal function, i.e., more input drive leads to a nonlinear increase in the population firing rate. The output firing rate will eventually saturate for high input values. A sigmoidal $F(\cdot)$ is parameterized by its gain $a$ and threshold $\theta$.$$ F(x;a,\theta) = \frac{1}{1+\text{e}^{-a(x-\theta)}} - \frac{1}{1+\text{e}^{a\theta}} \quad(2)$$The argument $x$ represents the input to the population. Note that the second term is chosen so that $F(0;a,\theta)=0$.Many other transfer functions (generally monotonic) can also be used. Examples are the rectified linear function $ReLU(x)$ or the hyperbolic tangent $tanh(x)$. Exercise 1: Implement F-I curve Let's first investigate the activation functions before simulating the dynamics of the entire population. In this exercise, you will implement a sigmoidal **F-I** curve or transfer function $F(x)$, with gain $a$ and threshold level $\theta$ as parameters.
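As a quick check of the normalization term in Equation $2$ before you implement it: at $x=0$ the two terms coincide, $F(0;a,\theta) = \frac{1}{1+\text{e}^{a\theta}} - \frac{1}{1+\text{e}^{a\theta}} = 0$, which is exactly why the second term is subtracted.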
###Code
# Exercise 1
def F(x,a,theta):
"""
Population activation function.
Args:
x (float): the population input
a (float): the gain of the function
theta (float): the threshold of the function
Returns:
float: the population activation response F(x) for input x
"""
#################################################################################
## TODO for students: compute f = F(x), remove the NotImplementedError once done#
#################################################################################
# the exponential function: np.exp(.)
# f = ...
  raise NotImplementedError("Student exercise: implement the f-I function")
return f
# Uncomment these lines when you've filled the function, then run the cell again
# to plot the f-I curve.
pars = default_parsE() # get default parameters
# print(pars) # print out pars to get familiar with parameters
x = np.arange(0,10,.1) # set the range of input
# Uncomment this when you fill the exercise, and call the function
# plot_fI(x, F(x,pars['a_E'],pars['theta_E']))
# to_remove solution
def F(x,a,theta):
"""
Population activation function.
Args:
x : the population input
a : the gain of the function
theta : the threshold of the function
Returns:
the population activation response F(x) for input x
"""
# add the expression of f = F(x)
f = (1+np.exp(-a*(x-theta)))**-1 - (1+np.exp(a*theta))**-1
return f
pars = default_parsE() # get default parameters
x = np.arange(0,10,.1) # set the range of input
with plt.xkcd():
plot_fI(x, F(x,pars['a_E'],pars['theta_E']))
###Output
findfont: Font family ['xkcd', 'xkcd Script', 'Humor Sans', 'Comic Neue', 'Comic Sans MS'] not found. Falling back to DejaVu Sans.
###Markdown
Interactive Demo: Parameter exploration of F-I curveHere's an interactive demo that shows how the F-I curve is changing for different values of the gain and threshold parameters.**Remember to enable the demo by running the cell.**
###Code
#@title F-I curve Explorer
def interactive_plot_FI(a, theta):
'''
Population activation function.
  Expects:
a : the gain of the function
theta : the threshold of the function
Returns:
    plot the F-I curve with given parameters
'''
# set the range of input
x = np.arange(0,10,.1)
plt.figure()
plt.plot(x, F(x, a, theta), 'k')
plt.xlabel('x (a.u.)', fontsize=14.)
plt.ylabel('F(x)', fontsize=14.)
plt.show()
_ = widgets.interact(interactive_plot_FI, a = (0.3, 3., 0.3), \
theta = (2., 4., 0.2))
###Output
_____no_output_____
###Markdown
Simulation scheme of E dynamicsBecause $F(\cdot)$ is a nonlinear function, the exact solution of Equation $1$ cannot be determined via analytical methods. Therefore, numerical methods must be used to find the solution. In practice, the derivative on the left-hand side of Equation (1) can be approximated using the Euler method on a time-grid of stepsize $\Delta t$:\begin{align}&\frac{dE}{dt} \approx \frac{E[k+1]-E[k]}{\Delta t} \end{align}where $E[k] = E(k\Delta t)$. Thus,$$\Delta E[k] = \frac{\Delta t}{\tau_E}[-E[k] + F(w_{EE}E[k] + I^{\text{ext}}_E[k];a_E,\theta_E)]$$Hence, Equation (1) is updated at each time step by:$$E[k+1] = E[k] + \Delta E[k]$$**_Please execute the following cell to enable the WC simulator_**
###Code
#@title E population simulator: `simulate_E`
def simulate_E(pars):
"""
Simulate an excitatory population of neurons
Args:
pars : Parameter dictionary
Returns:
E : Activity of excitatory population (array)
"""
# Set parameters
tau_E, a_E, theta_E = pars['tau_E'], pars['a_E'], pars['theta_E']
wEE = pars['wEE']
I_ext_E = pars['I_ext_E']
E_init = pars['E_init']
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
# Initialize activity
E = np.zeros(Lt)
E[0] = E_init
I_ext_E = I_ext_E*np.ones(Lt)
# Update the E activity
for k in range(Lt-1):
dE = dt/tau_E * (-E[k] + F(wEE*E[k]+I_ext_E[k], a_E, theta_E))
E[k+1] = E[k] + dE
return E
print(help(simulate_E))
###Output
Help on function simulate_E in module __main__:
simulate_E(pars)
Simulate an excitatory population of neurons
Args:
pars : Parameter dictionary
Returns:
E : Activity of excitatory population (array)
None
###Markdown
Interactive Demo: Parameter Exploration of single population dynamicsNote that $w_{EE}=0$, as in the default setting, means no recurrent input to the excitatory population in Equation (1). Hence, the dynamics is entirely determined by the external input $I_{E}^{\text{ext}}$. Try to explore how $E_{sim}(t)$ changes with different $I_{E}^{\text{ext}}$ and $\tau_E$ parameter values, and investigate the relationship between $F(I_{E}^{\text{ext}}; a_E, \theta_E)$ and the steady value of E. Note that, $E_{ana}(t)$ denotes the analytical solution.
###Code
#@title Mean-field model Explorer
# get default parameters
pars = default_parsE(T=20.)
def Myplot_E_diffI_difftau(I_ext, tau_E):
# set external input and time constant
pars['I_ext_E'] = I_ext
pars['tau_E'] = tau_E
# simulation
E = simulate_E(pars)
# Analytical Solution
E_ana = pars['E_init'] + (F(I_ext,pars['a_E'],pars['theta_E'])-pars['E_init'])*\
(1.-np.exp(-pars['range_t']/pars['tau_E']))
# plot
plt.figure()
plt.plot(pars['range_t'], E, 'b', label=r'$E_{\mathrm{sim}}$(t)', alpha=0.5, zorder=1)
plt.plot(pars['range_t'], E_ana, 'b--', lw=5, dashes=(2,2),\
label=r'$E_{\mathrm{ana}}$(t)', zorder=2)
plt.plot(pars['range_t'], F(I_ext,pars['a_E'],pars['theta_E'])\
*np.ones(pars['range_t'].size), 'k--', label=r'$F(I_E^{\mathrm{ext}})$')
plt.xlabel('t (ms)', fontsize=16.)
plt.ylabel('E activity', fontsize=16.)
plt.legend(loc='best', fontsize=14.)
plt.show()
_ = widgets.interact(Myplot_E_diffI_difftau, I_ext = (0.0, 10., 1.),\
tau_E = (1., 5., 0.2))
###Output
_____no_output_____
###Markdown
Think!Above, we have numerically solved a system driven by a positive input and that, if $w_{EE} \neq 0$, receives an excitatory recurrent input (**try changing the value of $w_{EE}$ to a positive number**). Yet, $E(t)$ either decays to zero or reaches a fixed non-zero value.- Why doesn't the solution of the system "explode" in a finite time? In other words, what guarantees that E(t) stays finite? - Which parameter would you change in order to increase the maximum value of the response? Fixed points of the E system
###Code
#@title Video: Fixed point
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="B31fX6V0PZ4", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=B31fX6V0PZ4
###Markdown
As you varied the two parameters in the last Interactive Demo, you noticed that, while at first the system output quickly changes, with time, it reaches its maximum/minimum value and does not change anymore. The value eventually reached by the system is called the **steady state** of the system, or the **fixed point**. Essentially, in the steady states the derivative with respect to time of the activity ($E$) is zero, i.e. $\frac{dE}{dt}=0$. We can find the steady state of Equation $1$ by setting $\displaystyle{\frac{dE}{dt}=0}$ and solving for $E$:$$E_{\text{steady}} = F(w_{EE}E_{\text{steady}} + I^{\text{ext}}_E;a_E,\theta_E), \qquad (3)$$When it exists, the solution of Equation $3$ defines a **fixed point** of the dynamics which satisfies $\displaystyle{\frac{dE}{dt}=0}$ (and determines the steady state of the system). Notice that the right-hand side of the last equation depends itself on $E_{\text{steady}}$. If $F(x)$ is nonlinear, it is not always possible to find an analytical solution; the solution can instead be found via numerical simulations, as we will do later.From the Interactive Demo one could also notice that the value of $\tau_E$ influences how quickly the activity will converge to the steady state from its initial value. In the specific case of $w_{EE}=0$, we can also compute the analytical solution of Equation $1$ (i.e., the thick blue dashed line) and deduce the role of $\tau_E$ in determining the convergence to the fixed point: $$\displaystyle{E(t) = \big{[}F(I^{\text{ext}}_E;a_E,\theta_E) -E(t=0)\big{]} (1-\text{e}^{-\frac{t}{\tau_E}})} + E(t=0)$$ \\We can now numerically calculate the fixed point with the `scipy.optimize.root` function._(note that at the very beginning, we `import scipy.optimize as opt` )_.\\Please execute the cell below to define the functions `my_fpE`, `check_fpE`, and `plot_fpE`
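Before applying it to the rate model, here is a minimal sketch of how `opt.root` behaves on a toy scalar function; the cubic and the initial guesses below are purely illustrative and unrelated to the model:
```python
# Toy use of scipy.optimize.root: g(x) = x^3 - x has zeros at -1, 0 and 1
import numpy as np
import scipy.optimize as opt

def g(x):
  return x**3 - x

for guess in [-0.8, 0.2, 1.3]:
  sol = opt.root(g, np.array([guess]))  # the root returned depends on the initial guess
  print('guess %+.1f -> root %+.3f' % (guess, sol.x.item()))
```
As with the fixed points of the rate model, which zero the algorithm converges to depends on the initial value it starts from.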
###Code
#@title Function of calculating the fixed point
def my_fpE(pars, E_init):
# get the parameters
a_E, theta_E = pars['a_E'], pars['theta_E']
wEE = pars['wEE']
I_ext_E = pars['I_ext_E']
# define the right hand of E dynamics
def my_WCr(x):
E = x[0]
dEdt=(-E + F(wEE*E+I_ext_E,a_E,theta_E))
y = np.array(dEdt)
return y
x0 = np.array(E_init)
x_fp = opt.root(my_WCr, x0).x
return x_fp
def check_fpE(pars, x_fp):
a_E, theta_E = pars['a_E'], pars['theta_E']
wEE = pars['wEE']
I_ext_E = pars['I_ext_E']
# calculate Equation(3)
y = x_fp- F(wEE*x_fp+I_ext_E, a_E, theta_E)
return np.abs(y)<1e-4
def plot_fpE(pars, x_fp, mycolor):
wEE = pars['wEE']
I_ext_E = pars['I_ext_E']
plt.plot(wEE*x_fp+I_ext_E, x_fp, 'o', color=mycolor)
###Output
_____no_output_____
###Markdown
Exercise 2: Visualization of the fixed pointWhen no analytical solution of Equation $3$ can be found, it is often useful to plot $\displaystyle{\frac{dE}{dt}}$ as a function of $E$. The values of $E$ for which the plotted function crosses zero on the y axis correspond to fixed points. Here, let us, for example, set $w_{EE}=5.0$ and $I^{\text{ext}}_E=0.5$. Define $\displaystyle{\frac{dE}{dt}}$ using Equation $1$, plot the result, and check for the presence of fixed points. We will now try to find the fixed points using the previously defined function `my_fpE(pars, E_init)` with different initial values ($E_{\text{init}}$). Use the previously defined function `check_fpE(pars, x_fp)` to verify that the values of $E$ for which $\displaystyle{\frac{dE}{dt}} = 0$ are the true fixed points.
###Code
# Exercise 2
pars = default_parsE() # get default parameters
# set your external input and wEE
pars['I_ext_E'] = 0.5
pars['wEE'] = 5.0
E_grid = np.linspace(0, 1., 1000)# give E_grid
#figure, line (E, dEdt)
###############################
## TODO for students: #
## Calculate dEdt = -E + F(.) #
## Then plot the lines #
###############################
# Calculate dEdt
# dEdt = ...
# Uncomment this to plot the dEdt across E
# plot_dE_E(E_grid, dEdt)
# Add fixed point
#####################################################
## TODO for students: #
# Calculate the fixed point with your initial value #
# verify your fixed point and plot the correct ones #
#####################################################
# Calculate the fixed point with your initial value
x_fp_1 = my_fpE(pars, 1)
#check if x_fp is the intersection of the lines with the given function check_fpE(pars, x_fp)
#vary different initial values to find the correct fixed point (Should be 3)
# Use blue, red and yellow colors, respectively ('b', 'r', 'y' codenames)
# if check_fpE(pars, x_fp_1):
# plt.plot(x_fp_1, 0, 'bo', ms=8)
# Replicate the code above (lines 35-36) for all fixed points.
# to_remove solution
pars = default_parsE() # get default parameters
#set your external input and wEE
pars['I_ext_E'] = 0.5
pars['wEE'] = 5.0
# give E_grid
E_grid = np.linspace(0, 1., 1000)
# Calculate dEdt
dEdt = -E_grid + F(pars['wEE']*E_grid+pars['I_ext_E'], pars['a_E'], pars['theta_E'])
with plt.xkcd():
plot_dE_E(E_grid, dEdt)
#Calculate the fixed point with your initial value
x_fp_1 = my_fpE(pars, 0.)
if check_fpE(pars, x_fp_1):
plt.plot(x_fp_1, 0, 'bo', ms=8)
x_fp_2 = my_fpE(pars, 0.4)
if check_fpE(pars, x_fp_2):
plt.plot(x_fp_2, 0, 'ro', ms=8)
x_fp_3 = my_fpE(pars, 0.9)
if check_fpE(pars, x_fp_3):
plt.plot(x_fp_3, 0, 'yo', ms=8)
plt.show()
###Output
findfont: Font family ['xkcd', 'xkcd Script', 'Humor Sans', 'Comic Neue', 'Comic Sans MS'] not found. Falling back to DejaVu Sans.
###Markdown
Interactive Demo: fixed points as a function of recurrent and external inputs.You can now explore how the previous plot changes when the recurrent coupling $w_{\text{EE}}$ and the external input $I_E^{\text{ext}}$ take different values.
###Code
#@title Fixed point Explorer
def plot_intersection_E(wEE, I_ext_E):
#set your parameters
pars['wEE'] = wEE
pars['I_ext_E'] = I_ext_E
#note that wEE !=0
if wEE>0:
# find fixed point
x_fp_1 = my_fpE(pars, 0.)
x_fp_2 = my_fpE(pars, 0.4)
x_fp_3 = my_fpE(pars, 0.9)
plt.figure()
E_grid = np.linspace(0, 1., 1000)
dEdt = -E_grid + F(wEE*E_grid+I_ext_E, pars['a_E'], pars['theta_E'])
plt.plot(E_grid, dEdt, 'k')
plt.plot(E_grid, 0.*E_grid, 'k--')
if check_fpE(pars, x_fp_1):
plt.plot(x_fp_1, 0, 'bo', ms=8)
if check_fpE(pars, x_fp_2):
plt.plot(x_fp_2, 0, 'bo', ms=8)
if check_fpE(pars, x_fp_3):
plt.plot(x_fp_3, 0, 'bo', ms=8)
plt.xlabel('E activity', fontsize=14.)
plt.ylabel(r'$\frac{dE}{dt}$', fontsize=18.)
plt.show()
_ = widgets.interact(plot_intersection_E, wEE = (1., 7., 0.2), \
I_ext_E = (0., 3., 0.1))
###Output
_____no_output_____
###Markdown
SummaryIn this tutorial, we have investigated the dynamics of a rate-based single excitatory population of neurons.We learned about:- The effect of the input parameters and the time constant of the network on the dynamics of the population.- How to find the fixed point(s) of the system.Next, we have two Bonus, but important concepts in dynamical system analysis and simulation. If you have time left, watch the next video and proceed to solve the exercises. You will learn:- How to determine the stability of a fixed point by linearizing the system.- How to add realistic inputs to our model. Bonus 1: Stability of a fixed point
###Code
#@title Video: Stability of fixed points
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="nvxxf59w2EA", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=nvxxf59w2EA
###Markdown
Initial values and trajectoriesHere, let us first set $w_{EE}=5.0$ and $I^{\text{ext}}_E=0.5$, and investigate the dynamics of $E(t)$ starting with different initial values $E(0) \equiv E_{\text{init}}$. We will plot the trajectories of $E(t)$ with $E_{\text{init}} = 0.0, 0.1, 0.2,..., 0.9$.
###Code
#@title Initial values
pars = default_parsE()
pars['wEE'] = 5.0
pars['I_ext_E'] = 0.5
plt.figure(figsize=(10,6))
for ie in range(10):
pars['E_init'] = 0.1*ie # set the initial value
E = simulate_E(pars) # run the simulation
# plot the activity with given initial
plt.plot(pars['range_t'], E, 'b', alpha=0.1 + 0.1*ie, label= r'E$_{\mathrm{init}}$=%.1f' % (0.1*ie))
plt.xlabel('t (ms)')
plt.title('Two steady states?')
plt.ylabel('E(t)')
plt.legend(loc=[0.72, 0.13], fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
Interactive Demo: dynamics as a function of the initial value.Let's now set $E_{init}$ to a value of your choice in this demo. How does the solution change? What do you observe?
###Code
#@title Initial value Explorer
pars = default_parsE()
pars['wEE'] = 5.0
pars['I_ext_E'] = 0.5
def plot_E_diffEinit(E_init):
pars['E_init'] = E_init
E = simulate_E(pars)
plt.figure()
plt.plot(pars['range_t'], E, 'b', label='E(t)')
plt.xlabel('t (ms)', fontsize=16.)
plt.ylabel('E activity', fontsize=16.)
plt.show()
_ = widgets.interact(plot_E_diffEinit, E_init = (0., 1., 0.02))
###Output
_____no_output_____
###Markdown
Stability analysis via linearization of the dynamicsJust like Equation $1$ in the case ($w_{EE}=0$) discussed above, a generic linear system $$\frac{dx}{dt} = \lambda (x - b),$$ has a fixed point for $x=b$. The analytical solution of such a system can be found to be:$$x(t) = b + \big{(} x(0) - b \big{)} \text{e}^{\lambda t}.$$ Now consider a small perturbation of the activity around the fixed point: $x(0) = b+ \epsilon$, where $|\epsilon| \ll 1$. Will the perturbation $\epsilon(t)$ grow with time or will it decay to the fixed point? The evolution of the perturbation with time can be written, using the analytical solution for $x(t)$, as: $$\epsilon (t) = x(t) - b = \epsilon \text{e}^{\lambda t}$$- if $\lambda < 0$, $\epsilon(t)$ decays to zero, $x(t)$ will still converge to $b$ and the fixed point is "**stable**".- if $\lambda > 0$, $\epsilon(t)$ grows with time, $x(t)$ will leave the fixed point $b$ exponentially and the fixed point is, therefore, "**unstable**" . Compute the stability of Equation (1)Similar to what we did in the linear system above, in order to determine the stability of a fixed point $E_{\rm fp}$ of the excitatory population dynamics, we perturb Equation $1$ around $E_{\rm fp}$ by $\epsilon$, i.e. $E = E_{\rm fp} + \epsilon$. We can plug in Equation $1$ and obtain the equation determining the time evolution of the perturbation $\epsilon(t)$:\begin{align}\tau_E \frac{d\epsilon}{dt} \approx -\epsilon + w_{EE} F'(w_{EE}E_{\text{fp}} + I^{\text{ext}}_E;a_E,\theta_E) \epsilon \end{align}where $F'(\cdot)$ is the derivative of the transfer function $F(\cdot)$. We can rewrite the above equation as:\begin{align}\frac{d\epsilon}{dt} \approx \frac{\epsilon}{\tau_E }[-1 + w_{EE} F'(w_{EE}E_{\text{fp}} + I^{\text{ext}}_E;a_E,\theta_E)] \end{align}That is, as in the linear system above, the value of $\lambda = [-1+ w_{EE}F'(w_{EE}E_{\text{fp}} + I^{\text{ext}}_E;a_E,\theta_E)]/\tau_E$ determines whether the perturbation will grow or decay to zero, i.e., $\lambda$ defines the stability of the fixed point. This value is called the **eigenvalue** of the dynamical system. Exercise 4: Compute $dF$ and EigenvalueThe derivative of the sigmoid transfer function is:\begin{align} \frac{dF}{dx} & = \frac{d}{dx} (1+\exp\{-a(x-\theta)\})^{-1} \\& = a\exp\{-a(x-\theta)\} (1+\exp\{-a(x-\theta)\})^{-2}. \end{align}Let's now find the expression for the derivative $\displaystyle{\frac{dF}{dx}}$ in the following cell and plot it.
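Before computing the derivative, here is a small numerical sketch of the statement above, using arbitrary toy values of $\lambda$ (not derived from the model parameters):
```python
# Toy illustration: a perturbation eps0 * exp(lambda * t) decays for lambda < 0 and grows for lambda > 0
import numpy as np

t = np.linspace(0, 5, 6)
eps0 = 0.01
for lam in (-1.0, 1.0):
  print('lambda = %+.1f :' % lam, np.round(eps0 * np.exp(lam * t), 4))
```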
###Code
# Exercise 4
def dF(x,a,theta):
"""
Population activation function.
Args:
x : the population input
a : the gain of the function
theta : the threshold of the function
Returns:
dFdx : the population activation response F(x) for input x
"""
#####################################################################
## TODO for students: compute dFdx, then remove NotImplementedError #
#####################################################################
# dFdx = ...
  raise NotImplementedError("Student exercise: compute the derivative of F(x)")
return dFdx
pars = default_parsE() # get default parameters
x = np.arange(0,10,.1) # set the range of input
# Uncomment below lines after completing the dF function
# plot_dFdt(x,dF(x,pars['a_E'],pars['theta_E']))
# to_remove solution
def dF(x,a,theta):
"""
Population activation function.
Args:
x : the population input
a : the gain of the function
theta : the threshold of the function
Returns:
dFdx : the population activation response F(x) for input x
"""
dFdx = a*np.exp(-a*(x-theta))*(1+np.exp(-a*(x-theta)))**-2
return dFdx
# get default parameters
pars = default_parsE()
# set the range of input
x = np.arange(0,10,.1)
# plot figure
with plt.xkcd():
plot_dFdt(x,dF(x,pars['a_E'],pars['theta_E']))
###Output
_____no_output_____
###Markdown
Exercise 5: Compute eigenvalues As discussed above, for the case with $w_{EE}=5.0$ and $I^{\text{ext}}_E=0.5$, the system displays **3** fixed points. However, when we simulated the dynamics and varied the initial conditions $E_{\rm init}$, we could only obtain **two** steady states. In this exercise, we will now check the stability of each of the $3$ fixed points by calculating the corresponding eigenvalues with the function `eig_E` that you will define in the next cell. Check the sign of each eigenvalue (i.e., stability of each fixed point). How many of the fixed points are stable?
###Code
# Exercise 5
pars = default_parsE()
pars['wEE'] = 5.0
pars['I_ext_E'] = 0.5
def eig_E(pars, fp):
"""
Args:
pars : Parameter dictionary
fp : fixed point E
Returns:
    eig : eigenvalue of the linearized system
"""
#get the parameters
tau_E, a_E, theta_E = pars['tau_E'], pars['a_E'], pars['theta_E']
wEE, I_ext_E = pars['wEE'], pars['I_ext_E']
# fixed point
E = fp
#######################################################################
## TODO for students: compute eigenvalue, remove NotImplementedError #
#######################################################################
# eig = ...
  raise NotImplementedError("Student exercise: compute the eigenvalue")
return eig
# Uncomment below lines after completing the eig_E function.
# x_fp_1 = my_fpE(pars, 0.)
# eig_E1 = eig_E(pars, x_fp_1)
# print('Fixed point1=%.3f, Eigenvalue=%.3f' % (x_fp_1, eig_E1))
# Continue by finding the eigenvalues for all fixed points of Exercise 2
# to_remove solution
pars = default_parsE()
pars['wEE'] = 5.0
pars['I_ext_E'] = 0.5
def eig_E(pars, fp):
"""
Args:
pars : Parameter dictionary
fp : fixed point E
Returns:
    eig : eigenvalue of the linearized system
"""
#get the parameters
tau_E, a_E, theta_E = pars['tau_E'], pars['a_E'], pars['theta_E']
wEE, I_ext_E = pars['wEE'], pars['I_ext_E']
# fixed point
E = fp
eig = (-1. + wEE*dF(wEE*E + I_ext_E, a_E, theta_E)) / tau_E
return eig
# Uncomment below lines after completing the eigE function
x_fp_1 = my_fpE(pars, 0.)
eig_E1 = eig_E(pars, x_fp_1)
print('Fixed point1=%.3f, Eigenvalue=%.3f' % (x_fp_1, eig_E1))
# Continue by finding the eigenvalues for all fixed points of Exercise 2
x_fp_2 = my_fpE(pars, 0.4)
eig_E2 = eig_E(pars, x_fp_2)
print('Fixed point2=%.3f, Eigenvalue=%.3f' % (x_fp_2, eig_E2))
x_fp_3 = my_fpE(pars, 0.9)
eig_E3 = eig_E(pars, x_fp_3)
print('Fixed point3=%.3f, Eigenvalue=%.3f' % (x_fp_3, eig_E3))
###Output
Fixed point1=0.042, Eigenvalue=-0.583
Fixed point2=0.447, Eigenvalue=0.498
Fixed point3=0.900, Eigenvalue=-0.626
###Markdown
Think! Throughout the tutorial, we have assumed $w_{\rm EE}> 0 $, i.e., we considered a single population of **excitatory** neurons. What do you think will be the behavior of a population of inhibitory neurons, i.e., where $w_{\rm EE}> 0$ is replaced by $w_{\rm II}< 0$? Bonus 2: Noisy input drives transition between two stable states Ornstein-Uhlenbeck (OU) processAs discussed in several previous tutorials, the OU process is usually used to generate a noisy input into the neuron. The OU input $\eta(t)$ follows: $$\tau_\eta \frac{d}{dt}\eta(t) = -\eta (t) + \sigma_\eta\sqrt{2\tau_\eta}\xi(t)$$Execute the following function `my_OU(pars, sig, myseed=False)` to generate an OU process.
###Code
#@title OU process `my_OU(pars, sig, myseed=False)`
def my_OU(pars, sig, myseed=False):
"""
  A function that generates an Ornstein-Uhlenbeck process
Args:
pars : parameter dictionary
    sig : noise amplitude
myseed : random seed. int or boolean
Returns:
I : Ornstein-Uhlenbeck input current
"""
# Retrieve simulation parameters
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
tau_ou = pars['tau_ou'] # [ms]
# set random seed
if myseed:
np.random.seed(seed=myseed)
else:
np.random.seed()
# Initialize
noise = np.random.randn(Lt)
I = np.zeros(Lt)
I[0] = noise[0] * sig
#generate OU
for it in range(Lt-1):
I[it+1] = I[it] + dt/tau_ou*(0.-I[it]) + np.sqrt(2.*dt/tau_ou) * sig * noise[it+1]
return I
pars = default_parsE(T=100)
pars['tau_ou'] = 1. #[ms]
sig_ou = 0.1
I_ou = my_OU(pars, sig=sig_ou, myseed=1998)
plt.figure(figsize=(10, 4))
plt.plot(pars['range_t'], I_ou, 'b')
plt.xlabel('Time (ms)')
plt.ylabel(r'$I_{\mathrm{OU}}$');
###Output
_____no_output_____
###Markdown
Bonus Example: Up-Down transitionIn the presence of two or more fixed points, noisy inputs can drive a transition between the fixed points! Here, we stimulate an E population for 1,000 ms applying OU inputs.
###Code
#@title Simulation of an E population with OU inputs
pars = default_parsE(T = 1000)
pars['wEE'] = 5.0
sig_ou = 0.7
pars['tau_ou'] = 1. #[ms]
pars['I_ext_E'] = 0.56 + my_OU(pars, sig=sig_ou, myseed=2020)
E = simulate_E(pars)
plt.figure(figsize=(10, 4))
plt.plot(pars['range_t'], E, 'r', alpha=0.8)
plt.xlabel('t (ms)')
plt.ylabel('E activity')
plt.show()
###Output
_____no_output_____
###Markdown
Neuromatch Academy: Week 3, Day 2, Tutorial 1 Neuronal Network Dynamics: Neural Rate Models__Content creators:__ Qinglong Gu, Songtin Li, Arvind Kumar, John Murray, Julijana Gjorgjieva __Content reviewers:__ Maryam Vaziri-Pashkam, Ella Batty, Lorenzo Fontolan, Richard Gao, Spiros Chavlis, Michael Waskom --- Tutorial ObjectivesThe brain is a complex system, not because it is composed of a large number of diverse types of neurons, but mainly because of how neurons are connected to each other. The brain is indeed a network of highly specialized neuronal networks. The activity of a neural network constantly evolves in time. For this reason, neurons can be modeled as dynamical systems. The dynamical system approach is only one of the many modeling approaches that computational neuroscientists have developed (other points of view include information processing, statistical models, etc.). How the dynamics of neuronal networks affect the representation and processing of information in the brain is an open question. However, signatures of altered brain dynamics present in many brain diseases (e.g., in epilepsy or Parkinson's disease) tell us that it is crucial to study network activity dynamics if we want to understand the brain.In this tutorial, we will simulate and study one of the simplest models of biological neuronal networks. Instead of modeling and simulating individual excitatory neurons (e.g., LIF models that you implemented yesterday), we will treat them as a single homogeneous population and approximate their dynamics using a single one-dimensional equation describing the evolution of their average spiking rate in time.In this tutorial, we will learn how to build a firing rate model of a single population of excitatory neurons. **Steps:**- Write the equation for the firing rate dynamics of a 1D excitatory population.- Visualize the response of the population as a function of parameters such as threshold level and gain, using the frequency-current (F-I) curve.- Numerically simulate the dynamics of the excitatory population and find the fixed points of the system. - Investigate the stability of the fixed points by linearizing the dynamics around them. --- Setup
###Code
# Imports
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt # root-finding algorithm
# @title Figure Settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
def plot_fI(x, f):
plt.figure(figsize=(6, 4)) # plot the figure
plt.plot(x, f, 'k')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('F(x)', fontsize=14)
plt.show()
def plot_dr_r(r, drdt, x_fps=None):
plt.figure()
plt.plot(r, drdt, 'k')
plt.plot(r, 0. * r, 'k--')
if x_fps is not None:
plt.plot(x_fps, np.zeros_like(x_fps), "ko", ms=12)
plt.xlabel(r'$r$')
plt.ylabel(r'$\frac{dr}{dt}$', fontsize=20)
plt.ylim(-0.1, 0.1)
def plot_dFdt(x, dFdt):
plt.figure()
plt.plot(x, dFdt, 'r')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('dF(x)', fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
--- Section 1: Neuronal network dynamics
###Code
# @title Video 1: Dynamic networks
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="p848349hPyw", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=p848349hPyw
###Markdown
Section 1.1: Dynamics of a single excitatory populationIndividual neurons respond by spiking. When we average the spikes of neurons in a population, we can define the average firing activity of the population. In this model, we are interested in how the population-averaged firing varies as a function of time and network parameters. Mathematically, we can describe the firing rate dynamic as:\begin{align}\tau \frac{dr}{dt} &= -r + F(w\cdot r + I_{\text{ext}}) \quad\qquad (1)\end{align}$r(t)$ represents the average firing rate of the excitatory population at time $t$, $\tau$ controls the timescale of the evolution of the average firing rate, $w$ denotes the strength (synaptic weight) of the recurrent input to the population, $I_{\text{ext}}$ represents the external input, and the transfer function $F(\cdot)$ (which can be related to f-I curve of individual neurons described in the next sections) represents the population activation function in response to all received inputs.To start building the model, please execute the cell below to initialize the simulation parameters.
###Code
# @markdown *Execute this cell to set default parameters for a single excitatory population model*
def default_pars_single(**kwargs):
pars = {}
# Excitatory parameters
pars['tau'] = 1. # Timescale of the E population [ms]
pars['a'] = 1.2 # Gain of the E population
pars['theta'] = 2.8 # Threshold of the E population
# Connection strength
pars['w'] = 0. # E to E, we first set it to 0
# External input
pars['I_ext'] = 0.
# simulation parameters
pars['T'] = 20. # Total duration of simulation [ms]
pars['dt'] = .1 # Simulation time step [ms]
pars['r_init'] = 0.2 # Initial value of E
# External parameters if any
pars.update(kwargs)
# Vector of discretized time points [ms]
pars['range_t'] = np.arange(0, pars['T'], pars['dt'])
return pars
###Output
_____no_output_____
###Markdown
You can now use:- `pars = default_pars_single()` to get all the parameters, and then you can execute `print(pars)` to check these parameters. - `pars = default_pars_single(T=T_sim, dt=time_step)` to set new simulation time and time step- To update an existing parameter dictionary, use `pars['New_para'] = value`Because `pars` is a dictionary, it can be passed to a function that requires individual parameters as arguments using `my_func(**pars)` syntax. Section 1.2: F-I curvesIn electrophysiology, a neuron is often characterized by its spike rate output in response to input currents. This is often called the **F-I** curve, denoting the output spike frequency (**F**) in response to different injected currents (**I**). We estimated this for an LIF neuron in yesterday's tutorial.The transfer function $F(\cdot)$ in Equation $1$ represents the gain of the population as a function of the total input. The gain is often modeled as a sigmoidal function, i.e., more input drive leads to a nonlinear increase in the population firing rate. The output firing rate will eventually saturate for high input values. A sigmoidal $F(\cdot)$ is parameterized by its gain $a$ and threshold $\theta$.$$ F(x;a,\theta) = \frac{1}{1+\text{e}^{-a(x-\theta)}} - \frac{1}{1+\text{e}^{a\theta}} \quad(2)$$The argument $x$ represents the input to the population. Note that the second term is chosen so that $F(0;a,\theta)=0$.Many other transfer functions (generally monotonic) can be also used. Examples are the rectified linear function $ReLU(x)$ or the hyperbolic tangent $tanh(x)$. Exercise 1: Implement F-I curve Let's first investigate the activation functions before simulating the dynamics of the entire population. In this exercise, you will implement a sigmoidal **F-I** curve or transfer function $F(x)$, with gain $a$ and threshold level $\theta$ as parameters.
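For example, here is a minimal sketch of the `**pars` unpacking mentioned above; the function `my_func` is made up purely for illustration:
```python
# Hypothetical function, only to illustrate keyword unpacking with **pars
def my_func(tau, a, theta, **other_pars):
  # tau, a and theta are matched by keyword; every other entry of pars lands in other_pars
  return tau * a + theta

pars = default_pars_single()
print(my_func(**pars))  # same as my_func(tau=pars['tau'], a=pars['a'], theta=pars['theta'], ...)
```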
###Code
def F(x, a, theta):
"""
Population activation function.
Args:
x (float): the population input
a (float): the gain of the function
theta (float): the threshold of the function
Returns:
float: the population activation response F(x) for input x
"""
# Define the sigmoidal transfer function f = F(x)
f = (1 + np.exp(-a * (x - theta)))**-1 - (1 + np.exp(a * theta))**-1
return f
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
# Uncomment below to test your function
f = F(x, pars['a'], pars['theta'])
plot_fI(x, f)
###Output
_____no_output_____
###Markdown
Interactive Demo: Parameter exploration of F-I curveHere's an interactive demo that shows how the F-I curve changes for different values of the gain and threshold parameters. How do the gain and threshold parameters affect the F-I curve?
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
def interactive_plot_FI(a, theta):
"""
Population activation function.
  Expects:
a : the gain of the function
theta : the threshold of the function
Returns:
    plot the F-I curve with given parameters
"""
# set the range of input
x = np.arange(0, 10, .1)
plt.figure()
plt.plot(x, F(x, a, theta), 'k')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('F(x)', fontsize=14)
plt.show()
_ = widgets.interact(interactive_plot_FI, a=(0.3, 3, 0.3), theta=(2, 4, 0.2))
"""
Discussion:
For the function we have chosen to model the F-I curve (eq 2),
- a determines the slope (gain) of the rising phase of the F-I curve
- theta determines the input at which the function F(x) reaches its mid-value (0.5).
That is, theta shifts the F-I curve along the horizontal axis.
For our neurons we are using in this tutorial:
- a controls the gain of the neuron population
- theta controls the threshold at which the neuron population starts to respond
""";
###Output
_____no_output_____
###Markdown
Section 1.3: Simulation scheme of E dynamicsBecause $F(\cdot)$ is a nonlinear function, the exact solution of Equation $1$ cannot be determined via analytical methods. Therefore, numerical methods must be used to find the solution. In practice, the derivative on the left-hand side of Equation $1$ can be approximated using the Euler method on a time-grid of stepsize $\Delta t$:\begin{align}&\frac{dr}{dt} \approx \frac{r[k+1]-r[k]}{\Delta t} \end{align}where $r[k] = r(k\Delta t)$. Thus,$$\Delta r[k] = \frac{\Delta t}{\tau}[-r[k] + F(w\cdot r[k] + I_{\text{ext}}[k];a,\theta)]$$Hence, Equation (1) is updated at each time step by:$$r[k+1] = r[k] + \Delta r[k]$$
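As a minimal sketch of the forward-Euler idea on a toy linear ODE $dx/dt = -x/\tau$ (not the population model itself, just the update rule):
```python
# Forward-Euler on dx/dt = -x/tau, a toy example of the update scheme above
import numpy as np

tau, dt = 1.0, 0.1
t = np.arange(0, 5, dt)
x = np.zeros(t.size)
x[0] = 1.0
for k in range(t.size - 1):
  x[k + 1] = x[k] + dt / tau * (-x[k])  # x[k+1] = x[k] + dt * f(x[k])
print(np.max(np.abs(x - np.exp(-t))))  # small discretization error relative to the exact solution exp(-t)
```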
###Code
# @markdown *Execute this cell to enable the single population rate model simulator: `simulate_single`*
def simulate_single(pars):
"""
Simulate an excitatory population of neurons
Args:
pars : Parameter dictionary
Returns:
rE : Activity of excitatory population (array)
Example:
pars = default_pars_single()
r = simulate_single(pars)
"""
# Set parameters
tau, a, theta = pars['tau'], pars['a'], pars['theta']
w = pars['w']
I_ext = pars['I_ext']
r_init = pars['r_init']
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
# Initialize activity
r = np.zeros(Lt)
r[0] = r_init
I_ext = I_ext * np.ones(Lt)
# Update the E activity
for k in range(Lt - 1):
dr = dt / tau * (-r[k] + F(w * r[k] + I_ext[k], a, theta))
r[k+1] = r[k] + dr
return r
help(simulate_single)
###Output
Help on function simulate_single in module __main__:
simulate_single(pars)
Simulate an excitatory population of neurons
Args:
pars : Parameter dictionary
Returns:
rE : Activity of excitatory population (array)
Example:
pars = default_pars_single()
r = simulate_single(pars)
###Markdown
Interactive Demo: Parameter Exploration of single population dynamicsNote that $w=0$, as in the default setting, means no recurrent input to the neuron population in Equation (1). Hence, the dynamics are entirely determined by the external input $I_{\text{ext}}$. Explore these dynamics in this interactive demo.How does $r_{\text{sim}}(t)$ change with different $I_{\text{ext}}$ values? How does it change with different $\tau$ values? Investigate the relationship between $F(I_{\text{ext}}; a, \theta)$ and the steady value of $r(t)$. Note that, $r_{\rm ana}(t)$ denotes the analytical solution - you will learn how this is computed in the next section.
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
# get default parameters
pars = default_pars_single(T=20.)
def Myplot_E_diffI_difftau(I_ext, tau):
# set external input and time constant
pars['I_ext'] = I_ext
pars['tau'] = tau
# simulation
r = simulate_single(pars)
# Analytical Solution
r_ana = (pars['r_init']
+ (F(I_ext, pars['a'], pars['theta'])
- pars['r_init']) * (1. - np.exp(-pars['range_t'] / pars['tau'])))
# plot
plt.figure()
plt.plot(pars['range_t'], r, 'b', label=r'$r_{\mathrm{sim}}$(t)', alpha=0.5,
zorder=1)
plt.plot(pars['range_t'], r_ana, 'b--', lw=5, dashes=(2, 2),
label=r'$r_{\mathrm{ana}}$(t)', zorder=2)
plt.plot(pars['range_t'],
F(I_ext, pars['a'], pars['theta']) * np.ones(pars['range_t'].size),
'k--', label=r'$F(I_{\mathrm{ext}})$')
plt.xlabel('t (ms)', fontsize=16.)
plt.ylabel('Activity r(t)', fontsize=16.)
plt.legend(loc='best', fontsize=14.)
plt.show()
_ = widgets.interact(Myplot_E_diffI_difftau, I_ext=(0.0, 10., 1.),
tau=(1., 5., 0.2))
# exponential increase/decay in activity with time-scale determined by time constant tau
"""
Discussion:
Given the choice of F-I curve (eq 2) and dynamics of the neuron population (eq. 1)
the neurons have two fixed points or steady-state responses irrespective of the input.
- Weak inputs to the neurons eventually result in the activity converging to zero
- Strong inputs to the neurons eventually result in the activity converging to max value
The time constant tau, does not affect the steady-state response but it determines
the time the neurons take to reach to their fixed point.
""";
###Output
_____no_output_____
###Markdown
Think!Above, we have numerically solved a system driven by a positive input and that, if $w \neq 0$, receives an excitatory recurrent input (**extra challenge: try changing the value of $w$ to a positive number and plotting the results of simulate_single**). Yet, $r(t)$ either decays to zero or reaches a fixed non-zero value.- Why doesn't the solution of the system "explode" in a finite time? In other words, what guarantees that $r(t)$ stays finite? - Which parameter would you change in order to increase the maximum value of the response?
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
# get default parameters
pars = default_pars_single(T=20.)
def Myplot_E_diffI_difftau(I_ext, tau, w):
# set external input and time constant
pars['I_ext'] = I_ext
pars['tau'] = tau
pars['w'] = w
# simulation
r = simulate_single(pars)
# Analytical Solution
r_ana = (pars['r_init']
+ (F(I_ext, pars['a'], pars['theta'])
- pars['r_init']) * (1. - np.exp(-pars['range_t'] / pars['tau'])))
# plot
plt.figure()
plt.plot(pars['range_t'], r, 'b', label=r'$r_{\mathrm{sim}}$(t)', alpha=0.5,
zorder=1)
plt.plot(pars['range_t'], r_ana, 'b--', lw=5, dashes=(2, 2),
label=r'$r_{\mathrm{ana}}$(t)', zorder=2)
plt.plot(pars['range_t'],
F(I_ext, pars['a'], pars['theta']) * np.ones(pars['range_t'].size),
'k--', label=r'$F(I_{\mathrm{ext}})$')
plt.xlabel('t (ms)', fontsize=16.)
plt.ylabel('Activity r(t)', fontsize=16.)
plt.legend(loc='best', fontsize=14.)
plt.show()
_ = widgets.interact(Myplot_E_diffI_difftau, I_ext=(0.0, 10., 1.),
tau=(1., 5., 0.2), w=(0.0, 1, 0.05))
"""
Discussion:
1) As the F-I curve is bounded between zero and one, the system doesn't explode.
The f-curve guarantees this property
2) One way to increase the maximum response is to change the f-I curve. For
example, the ReLU is an unbounded function, and thus will increase the overall maximal
response of the network.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: Fixed points of the single population system
###Code
# @title Video 2: Fixed point
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Ox3ELd1UFyo", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=Ox3ELd1UFyo
###Markdown
As you varied the two parameters in the last Interactive Demo, you noticed that, while at first the system output quickly changes, with time, it reaches its maximum/minimum value and does not change anymore. The value eventually reached by the system is called the **steady state** of the system, or the **fixed point**. Essentially, in the steady states the derivative with respect to time of the activity ($r$) is zero, i.e. $\displaystyle \frac{dr}{dt}=0$. We can find the steady state of Equation (1) by setting $\displaystyle{\frac{dr}{dt}=0}$ and solving for $r$:$$-r_{\text{steady}} + F(w\cdot r_{\text{steady}} + I_{\text{ext}};a,\theta) = 0, \qquad (3)$$When it exists, the solution of Equation (3) defines a **fixed point** of the dynamical system in Equation (1). Note that if $F(x)$ is nonlinear, it is not always possible to find an analytical solution, but the solution can be found via numerical simulations, as we will do later.From the Interactive Demo, one could also notice that the value of $\tau$ influences how quickly the activity will converge to the steady state from its initial value. In the specific case of $w=0$, we can also analytically compute the solution of Equation (1) (i.e., the thick blue dashed line) and deduce the role of $\tau$ in determining the convergence to the fixed point: $$\displaystyle{r(t) = \big{[}F(I_{\text{ext}};a,\theta) -r(t=0)\big{]} (1-\text{e}^{-\frac{t}{\tau}})} + r(t=0)$$ \\We can now numerically calculate the fixed point with a root finding algorithm. Exercise 2: Visualization of the fixed pointsWhen it is not possible to find the solution for Equation (3) analytically, a graphical approach can be taken. To that end, it is useful to plot $\displaystyle{\frac{dr}{dt}}$ as a function of $r$. The values of $r$ for which the plotted function crosses zero on the y axis correspond to fixed points. Here, let us, for example, set $w=5.0$ and $I_{\text{ext}}=0.5$. From Equation (1), you can obtain$$\frac{dr}{dt} = [-r + F(w\cdot r + I_{\text{ext}})]\,/\,\tau $$Then, plot the $dr/dt$ as a function of $r$, and check for the presence of fixed points.
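(One more remark before the exercise: as a consistency check, the analytical expression above does satisfy Equation (1) when $w=0$, since differentiating it gives $\tau\frac{dr}{dt} = \big[F(I_{\text{ext}};a,\theta)-r(t=0)\big]\text{e}^{-t/\tau}$, and substituting $r(t)$ into $-r + F(I_{\text{ext}};a,\theta)$ yields the same quantity.)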
###Code
def compute_drdt(r, I_ext, w, a, theta, tau, **other_pars):
"""Given parameters, compute dr/dt as a function of r.
Args:
r (1D array) : Average firing rate of the excitatory population
I_ext, w, a, theta, tau (numbers): Simulation parameters to use
other_pars : Other simulation parameters are unused by this function
Returns
drdt function for each value of r
"""
# Calculate drdt
drdt = (-r + F(w * r + I_ext, a, theta)) / tau
return drdt
# Define a vector of r values and the simulation parameters
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
drdt = compute_drdt(r, **pars)
plot_dr_r(r, drdt)
###Output
_____no_output_____
###Markdown
Exercise 3: Fixed point calculationWe will now find the fixed points numerically. To do so, we need to specify initial values ($r_{\text{guess}}$) for the root-finding algorithm to start from. From the line $\displaystyle{\frac{dr}{dt}}$ plotted above in Exercise 2, initial values can be chosen as a set of values close to where the line crosses zero on the y axis (real fixed point).The next cell defines three helper functions that we will use:- `my_fp_single(r_guess, **pars)` uses a root-finding algorithm to locate a fixed point near a given initial value- `check_fp_single(x_fp, **pars)`, verifies that the values of $r_{\rm fp}$ for which $\displaystyle{\frac{dr}{dt}} = 0$ are the true fixed points- `my_fp_finder(r_guess_vector, **pars)` accepts an array of initial values and finds the same number of fixed points, using the above two functions
###Code
# @markdown *Execute this cell to enable the fixed point functions*
def my_fp_single(r_guess, a, theta, w, I_ext, **other_pars):
"""
Calculate the fixed point through drE/dt=0
Args:
r_guess : Initial value used for scipy.optimize function
a, theta, w, I_ext : simulation parameters
Returns:
x_fp : value of fixed point
"""
# define the right hand of E dynamics
def my_WCr(x):
r = x
drdt = (-r + F(w * r + I_ext, a, theta))
y = np.array(drdt)
return y
x0 = np.array(r_guess)
x_fp = opt.root(my_WCr, x0).x.item()
return x_fp
def check_fp_single(x_fp, a, theta, w, I_ext, mytol=1e-4, **other_pars):
"""
Verify |dr/dt| < mytol
Args:
fp : value of fixed point
a, theta, w, I_ext: simulation parameters
mytol : tolerance, default as 10^{-4}
Returns :
Whether it is a correct fixed point: True/False
"""
# calculate Equation(3)
y = x_fp - F(w * x_fp + I_ext, a, theta)
# Here we set tolerance as 10^{-4}
return np.abs(y) < mytol
def my_fp_finder(pars, r_guess_vector, mytol=1e-4):
"""
Calculate the fixed point(s) through drE/dt=0
Args:
pars : Parameter dictionary
r_guess_vector : Initial values used for scipy.optimize function
mytol : tolerance for checking fixed point, default as 10^{-4}
Returns:
x_fps : values of fixed points
"""
x_fps = []
correct_fps = []
for r_guess in r_guess_vector:
x_fp = my_fp_single(r_guess, **pars)
if check_fp_single(x_fp, **pars, mytol=mytol):
x_fps.append(x_fp)
return x_fps
help(my_fp_finder)
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
drdt = compute_drdt(r, **pars)
r_guess_vector = [0, .4, .9]
x_fps = my_fp_finder(pars, r_guess_vector)
print('%.3f '*3 % tuple(x_fps))
plot_dr_r(r, drdt, x_fps)
###Output
0.042 0.447 0.900
###Markdown
Interactive Demo: fixed points as a function of recurrent and external inputs.You can now explore how the previous plot changes when the recurrent coupling $w$ and the external input $I_{\text{ext}}$ take different values. How does the number of fixed points change?
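If you prefer a non-interactive answer, the short sketch below scans a few values of $w$ at fixed $I_{\text{ext}}=0.5$ and counts how many distinct fixed points `my_fp_finder` reports; the list of $w$ values and the guess grid are arbitrary choices, not part of the original demo.

```python
# Sketch: count distinct fixed points as w varies, with I_ext held at 0.5.
import numpy as np

guesses = np.linspace(0, 1, 11)         # a denser grid of initial guesses
for w in [0., 2., 4., 5., 6., 7.]:
  pars_w = default_pars_single(w=w, I_ext=0.5)
  fps = my_fp_finder(pars_w, guesses)
  n_unique = np.unique(np.round(fps, 3)).size
  print(f"w = {w:.1f}: {n_unique} fixed point(s) found")
```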
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
def plot_intersection_single(w, I_ext):
# set your parameters
pars = default_pars_single(w=w, I_ext=I_ext)
# find fixed points
r_init_vector = [0, .4, .9]
x_fps = my_fp_finder(pars, r_init_vector)
print('%.3f '*len(x_fps) % tuple(x_fps))
# plot
r = np.linspace(0, 1., 1000)
drdt = (-r + F(w * r + I_ext, pars['a'], pars['theta'])) / pars['tau']
plot_dr_r(r, drdt, x_fps)
_ = widgets.interact(plot_intersection_single, w=(0, 7, 0.2),
I_ext=(0, 3, 0.1))
"""
Discussion:
The fixed points of the single excitatory neuron population are determined by both
recurrent connections w and external input I_ext. In a previous interactive demo
we saw how the system showed two different steady-states when w = 0. But when w
does not equal 0, for some range of w the system shows three fixed points (the middle
one being unstable), and the steady state depends on the initial conditions (i.e.,
r at time zero).
More on this will be explained in the next section.
""";
###Output
_____no_output_____
###Markdown
--- SummaryIn this tutorial, we have investigated the dynamics of a rate-based single population of neurons.We learned about:- The effect of the input parameters and the time constant of the network on the dynamics of the population.- How to find the fixed point(s) of the system.Next, we have two bonus (but important) concepts in dynamical systems analysis and simulation. If you have time left, watch the next video and proceed to solve the exercises. You will learn:- How to determine the stability of a fixed point by linearizing the system.- How to add realistic inputs to our model. --- Bonus 1: Stability of a fixed point
###Code
# @title Video 3: Stability of fixed points
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="KKMlWWU83Jg", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=KKMlWWU83Jg
###Markdown
Initial values and trajectoriesHere, let us first set $w=5.0$ and $I_{\text{ext}}=0.5$, and investigate the dynamics of $r(t)$ starting with different initial values $r(0) \equiv r_{\text{init}}$. We will plot the trajectories of $r(t)$ with $r_{\text{init}} = 0.0, 0.1, 0.2,..., 0.9$.
###Code
# @markdown Execute this cell to see the trajectories!
pars = default_pars_single()
pars['w'] = 5.0
pars['I_ext'] = 0.5
plt.figure(figsize=(8, 5))
for ie in range(10):
pars['r_init'] = 0.1 * ie # set the initial value
r = simulate_single(pars) # run the simulation
# plot the activity with given initial
plt.plot(pars['range_t'], r, 'b', alpha=0.1 + 0.1 * ie,
label=r'r$_{\mathrm{init}}$=%.1f' % (0.1 * ie))
plt.xlabel('t (ms)')
plt.title('Two steady states?')
plt.ylabel(r'$r$(t)')
plt.legend(loc=[1.01, -0.06], fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
Interactive Demo: dynamics as a function of the initial valueLet's now set $r_{\rm init}$ to a value of your choice in this demo. How does the solution change? What do you observe?
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
pars = default_pars_single(w=5.0, I_ext=0.5)
def plot_single_diffEinit(r_init):
pars['r_init'] = r_init
r = simulate_single(pars)
plt.figure()
plt.plot(pars['range_t'], r, 'b', zorder=1)
plt.plot(0, r[0], 'bo', alpha=0.7, zorder=2)
plt.xlabel('t (ms)', fontsize=16)
plt.ylabel(r'$r(t)$', fontsize=16)
plt.ylim(0, 1.0)
plt.show()
_ = widgets.interact(plot_single_diffEinit, r_init=(0.1, 1, 0.02))
"""
Discussion:
To better appreciate what is happening here, you should go back to the previous
interactive demo. Set w = 5 and I_ext = 0.5.
You will find that there are three fixed points of the system for these values of
w and I_ext. Now, choose the initial value in this demo and see in which direction
the system output moves. When r_init is in the vicinity of the leftmost fixed point,
it moves towards the leftmost fixed point. When r_init is in the vicinity of the
rightmost fixed point, it moves towards the rightmost fixed point.
""";
###Output
_____no_output_____
###Markdown
Stability analysis via linearization of the dynamicsJust like Equation $1$ in the case ($w=0$) discussed above, a generic linear system $$\frac{dx}{dt} = \lambda (x - b),$$ has a fixed point for $x=b$. The analytical solution of such a system can be found to be:$$x(t) = b + \big{(} x(0) - b \big{)} \text{e}^{\lambda t}.$$ Now consider a small perturbation of the activity around the fixed point: $x(0) = b+ \epsilon$, where $|\epsilon| \ll 1$. Will the perturbation $\epsilon(t)$ grow with time or will it decay to the fixed point? The evolution of the perturbation with time can be written, using the analytical solution for $x(t)$, as: $$\epsilon (t) = x(t) - b = \epsilon \text{e}^{\lambda t}$$- if $\lambda < 0$, $\epsilon(t)$ decays to zero, $x(t)$ will still converge to $b$ and the fixed point is "**stable**".- if $\lambda > 0$, $\epsilon(t)$ grows with time, $x(t)$ will leave the fixed point $b$ exponentially, and the fixed point is, therefore, "**unstable**" . Compute the stability of Equation $1$Similar to what we did in the linear system above, in order to determine the stability of a fixed point $r^{*}$ of the excitatory population dynamics, we perturb Equation (1) around $r^{*}$ by $\epsilon$, i.e. $r = r^{*} + \epsilon$. We can plug in Equation (1) and obtain the equation determining the time evolution of the perturbation $\epsilon(t)$:\begin{align}\tau \frac{d\epsilon}{dt} \approx -\epsilon + w F'(w\cdot r^{*} + I_{\text{ext}};a,\theta) \epsilon \end{align}where $F'(\cdot)$ is the derivative of the transfer function $F(\cdot)$. We can rewrite the above equation as:\begin{align}\frac{d\epsilon}{dt} \approx \frac{\epsilon}{\tau }[-1 + w F'(w\cdot r^* + I_{\text{ext}};a,\theta)] \end{align}That is, as in the linear system above, the value of$$\lambda = [-1+ wF'(w\cdot r^* + I_{\text{ext}};a,\theta)]/\tau \qquad (4)$$determines whether the perturbation will grow or decay to zero, i.e., $\lambda$ defines the stability of the fixed point. This value is called the **eigenvalue** of the dynamical system. Exercise 4: Compute $dF$The derivative of the sigmoid transfer function is:\begin{align} \frac{dF}{dx} & = \frac{d}{dx} (1+\exp\{-a(x-\theta)\})^{-1} \\& = a\exp\{-a(x-\theta)\} (1+\exp\{-a(x-\theta)\})^{-2}. \qquad (5)\end{align}Let's now find the expression for the derivative $\displaystyle{\frac{dF}{dx}}$ in the following cell and plot it.
###Code
def dF(x, a, theta):
"""
  Derivative of the population activation function.
Args:
x : the population input
a : the gain of the function
theta : the threshold of the function
Returns:
    dFdx : the derivative of the population activation function at x
"""
# Calculate the population activation
dFdx = a * np.exp(-a * (x - theta)) * (1 + np.exp(-a * (x - theta)))**-2
return dFdx
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
df = dF(x, pars['a'], pars['theta'])
with plt.xkcd():
plot_dFdt(x, df)
###Output
_____no_output_____
###Markdown
Exercise 5: Compute eigenvaluesAs discussed above, for the case with $w=5.0$ and $I_{\text{ext}}=0.5$, the system displays **three** fixed points. However, when we simulated the dynamics and varied the initial conditions $r_{\rm init}$, we could only obtain **two** steady states. In this exercise, we will now check the stability of each of the three fixed points by calculating the corresponding eigenvalues with the function `eig_single`. Check the sign of each eigenvalue (i.e., stability of each fixed point). How many of the fixed points are stable?Note that the expression of the eigenvalue at fixed point $r^*$$$\lambda = [-1+ wF'(w\cdot r^* + I_{\text{ext}};a,\theta)]/\tau$$ **SAMPLE OUTPUT**```Fixed point1 at 0.042 with Eigenvalue=-0.583Fixed point2 at 0.447 with Eigenvalue=0.498Fixed point3 at 0.900 with Eigenvalue=-0.626```
###Code
def eig_single(fp, tau, a, theta, w, I_ext, **other_pars):
"""
Args:
fp : fixed point r_fp
tau, a, theta, w, I_ext : Simulation parameters
Returns:
    eig : eigenvalue of the linearized system
"""
# Compute the eigenvalue
eig = (-1. + w * dF(w * fp + I_ext, a, theta)) / tau
return eig
# Find the eigenvalues for all fixed points of Exercise 2
pars = default_pars_single(w=5, I_ext=.5)
r_guess_vector = [0, .4, .9]
x_fp = my_fp_finder(pars, r_guess_vector)
for i, fp in enumerate(x_fp):
  eig_fp = eig_single(fp, **pars)
  print(f'Fixed point{i + 1} at {fp:.3f} with Eigenvalue={eig_fp:.3f}')
###Output
Fixed point1 at 0.042 with Eigenvalue=-0.583
Fixed point2 at 0.447 with Eigenvalue=0.498
Fixed point3 at 0.900 with Eigenvalue=-0.626
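A hedged way to connect these numbers back to the simulations: perturb each fixed point by a small amount (the size `1e-3` is an arbitrary choice) and let `simulate_single` run; stable fixed points reabsorb the perturbation, the unstable one does not.

```python
# Sketch: check each eigenvalue's sign by simulation -- perturb r* slightly and see whether
# the trajectory returns to r* (stable, lambda < 0) or runs away (unstable, lambda > 0).
eps = 1e-3
pars = default_pars_single(w=5., I_ext=0.5, T=20.)
for fp in my_fp_finder(pars, [0, .4, .9]):
  lam = eig_single(fp, **pars)
  pars['r_init'] = fp + eps              # small perturbation around the fixed point
  r = simulate_single(pars)
  print(f"r* = {fp:.3f}, lambda = {lam:+.3f}, |r(T) - r*| = {abs(r[-1] - fp):.4f}")
```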
###Markdown
Think! Throughout the tutorial, we have assumed $w> 0 $, i.e., we considered a single population of **excitatory** neurons. What do you think will be the behavior of a population of inhibitory neurons, i.e., where $w> 0$ is replaced by $w< 0$?
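Before checking the discussion below, here is a quick numerical test (a sketch reusing `default_pars_single` and `simulate_single`; the value w = -5 and the initial conditions are arbitrary choices).

```python
# Sketch: with inhibitory recurrence (w < 0), all initial conditions settle to the same
# low-activity fixed point.
pars = default_pars_single(w=-5., I_ext=0.5, T=50.)
for r0 in (0.0, 0.5, 1.0):
  pars['r_init'] = r0
  r = simulate_single(pars)
  print(f"r_init = {r0:.1f} -> r(T) = {r[-1]:.3f}")
```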
###Code
"""
Discussion:
You can check this by going back to the second-to-last interactive demo and setting the
weight to w < 0. You will notice that the system has only one fixed point and that
is at zero value. For this particular dynamics, the system will eventually converge
to zero. But try it out.
""";
###Output
_____no_output_____
###Markdown
--- Bonus 2: Noisy input drives the transition between two stable states Ornstein-Uhlenbeck (OU) processAs discussed in several previous tutorials, the OU process is usually used to generate a noisy input into the neuron. The OU input $\eta(t)$ follows: $$\tau_\eta \frac{d}{dt}\eta(t) = -\eta (t) + \sigma_\eta\sqrt{2\tau_\eta}\xi(t)$$Execute the following function `my_OU(pars, sig, myseed=False)` to generate an OU process.
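As a small sanity check (a sketch you can run after executing the next cell, which defines `my_OU`): with this discretisation, the stationary standard deviation of the OU input should be close to `sig`; the long duration and burn-in below are arbitrary choices.

```python
# Sketch: the long-run standard deviation of the OU input should be roughly sig.
import numpy as np

pars_chk = default_pars_single(T=5000)   # long simulation for a stable estimate
pars_chk['tau_ou'] = 1.                  # [ms]
I_chk = my_OU(pars_chk, sig=0.1, myseed=1)
print(np.std(I_chk[1000:]))              # expected to be roughly 0.1
```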
###Code
# @title OU process `my_OU(pars, sig, myseed=False)`
# @markdown Make sure you execute this cell to visualize the noise!
def my_OU(pars, sig, myseed=False):
"""
  A function that generates an Ornstein-Uhlenbeck process
Args:
pars : parameter dictionary
    sig : noise amplitude
myseed : random seed. int or boolean
Returns:
I : Ornstein-Uhlenbeck input current
"""
# Retrieve simulation parameters
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
tau_ou = pars['tau_ou'] # [ms]
# set random seed
if myseed:
np.random.seed(seed=myseed)
else:
np.random.seed()
# Initialize
noise = np.random.randn(Lt)
I_ou = np.zeros(Lt)
I_ou[0] = noise[0] * sig
# generate OU
for it in range(Lt - 1):
I_ou[it + 1] = (I_ou[it]
+ dt / tau_ou * (0. - I_ou[it])
+ np.sqrt(2 * dt / tau_ou) * sig * noise[it + 1])
return I_ou
pars = default_pars_single(T=100)
pars['tau_ou'] = 1. # [ms]
sig_ou = 0.1
I_ou = my_OU(pars, sig=sig_ou, myseed=2020)
plt.figure(figsize=(10, 4))
plt.plot(pars['range_t'], I_ou, 'r')
plt.xlabel('t (ms)')
plt.ylabel(r'$I_{\mathrm{OU}}$')
plt.show()
###Output
_____no_output_____
###Markdown
Example: Up-Down transitionIn the presence of two or more fixed points, noisy inputs can drive a transition between the fixed points! Here, we stimulate an E population for 1,000 ms applying OU inputs.
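After running the next cell, you can quantify the switching with the rough sketch below; the threshold of 0.45 is an assumption, chosen near the unstable middle fixed point for $w=5$ and $I_{\text{ext}}\approx 0.5$.

```python
# Sketch: label each time step as Up or Down by thresholding r(t), then count transitions.
import numpy as np

up = r > 0.45                               # boolean Up/Down label per time step
n_switches = np.sum(up[1:] != up[:-1])      # state changes between consecutive steps
print(f"fraction of time in Up state: {up.mean():.2f}, transitions: {n_switches}")
```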
###Code
# @title Simulation of an E population with OU inputs
# @markdown Make sure you execute this cell to spot the Up-Down states!
pars = default_pars_single(T=1000)
pars['w'] = 5.0
sig_ou = 0.7
pars['tau_ou'] = 1. # [ms]
pars['I_ext'] = 0.56 + my_OU(pars, sig=sig_ou, myseed=2020)
r = simulate_single(pars)
plt.figure(figsize=(10, 4))
plt.plot(pars['range_t'], r, 'b', alpha=0.8)
plt.xlabel('t (ms)')
plt.ylabel(r'$r(t)$')
plt.show()
###Output
_____no_output_____
###Markdown
Neuromatch Academy: Week 3, Day 2, Tutorial 1 Neuronal Network Dynamics: Neural Rate Models__Content creators:__ Qinglong Gu, Songtin Li, Arvind Kumar, John Murray, Julijana Gjorgjieva __Content reviewers:__ Maryam Vaziri-Pashkam, Ella Batty, Lorenzo Fontolan, Richard Gao, Spiros Chavlis, Michael Waskom --- Tutorial ObjectivesThe brain is a complex system, not because it is composed of a large number of diverse types of neurons, but mainly because of how neurons are connected to each other. The brain is indeed a network of highly specialized neuronal networks. The activity of a neural network constantly evolves in time. For this reason, neurons can be modeled as dynamical systems. The dynamical system approach is only one of the many modeling approaches that computational neuroscientists have developed (other points of view include information processing, statistical models, etc.). How the dynamics of neuronal networks affect the representation and processing of information in the brain is an open question. However, signatures of altered brain dynamics present in many brain diseases (e.g., in epilepsy or Parkinson's disease) tell us that it is crucial to study network activity dynamics if we want to understand the brain.In this tutorial, we will simulate and study one of the simplest models of biological neuronal networks. Instead of modeling and simulating individual excitatory neurons (e.g., LIF models that you implemented yesterday), we will treat them as a single homogeneous population and approximate their dynamics using a single one-dimensional equation describing the evolution of their average spiking rate in time.In this tutorial, we will learn how to build a firing rate model of a single population of excitatory neurons. **Steps:**- Write the equation for the firing rate dynamics of a 1D excitatory population.- Visualize the response of the population as a function of parameters such as threshold level and gain, using the frequency-current (F-I) curve.- Numerically simulate the dynamics of the excitatory population and find the fixed points of the system. - Investigate the stability of the fixed points by linearizing the dynamics around them. --- Setup
###Code
# Imports
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt # root-finding algorithm
# @title Figure Settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
def plot_fI(x, f):
plt.figure(figsize=(6, 4)) # plot the figure
plt.plot(x, f, 'k')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('F(x)', fontsize=14)
plt.show()
def plot_dr_r(r, drdt, x_fps=None):
plt.figure()
plt.plot(r, drdt, 'k')
plt.plot(r, 0. * r, 'k--')
if x_fps is not None:
plt.plot(x_fps, np.zeros_like(x_fps), "ko", ms=12)
plt.xlabel(r'$r$')
plt.ylabel(r'$\frac{dr}{dt}$', fontsize=20)
plt.ylim(-0.1, 0.1)
def plot_dFdt(x, dFdt):
plt.figure()
plt.plot(x, dFdt, 'r')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('dF(x)', fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
--- Section 1: Neuronal network dynamics
###Code
# @title Video 1: Dynamic networks
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="p848349hPyw", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
_____no_output_____
###Markdown
Section 1.1: Dynamics of a single excitatory populationIndividual neurons respond by spiking. When we average the spikes of neurons in a population, we can define the average firing activity of the population. In this model, we are interested in how the population-averaged firing varies as a function of time and network parameters. Mathematically, we can describe the firing rate dynamic as:\begin{align}\tau \frac{dr}{dt} &= -r + F(w\cdot r + I_{\text{ext}}) \quad\qquad (1)\end{align}$r(t)$ represents the average firing rate of the excitatory population at time $t$, $\tau$ controls the timescale of the evolution of the average firing rate, $w$ denotes the strength (synaptic weight) of the recurrent input to the population, $I_{\text{ext}}$ represents the external input, and the transfer function $F(\cdot)$ (which can be related to f-I curve of individual neurons described in the next sections) represents the population activation function in response to all received inputs.To start building the model, please execute the cell below to initialize the simulation parameters.
###Code
# @markdown *Execute this cell to set default parameters for a single excitatory population model*
def default_pars_single(**kwargs):
pars = {}
# Excitatory parameters
pars['tau'] = 1. # Timescale of the E population [ms]
pars['a'] = 1.2 # Gain of the E population
pars['theta'] = 2.8 # Threshold of the E population
# Connection strength
pars['w'] = 0. # E to E, we first set it to 0
# External input
pars['I_ext'] = 0.
# simulation parameters
pars['T'] = 20. # Total duration of simulation [ms]
pars['dt'] = .1 # Simulation time step [ms]
pars['r_init'] = 0.2 # Initial value of E
# External parameters if any
pars.update(kwargs)
# Vector of discretized time points [ms]
pars['range_t'] = np.arange(0, pars['T'], pars['dt'])
return pars
###Output
_____no_output_____
###Markdown
You can now use:- `pars = default_pars_single()` to get all the parameters, and then you can execute `print(pars)` to check these parameters. - `pars = default_pars_single(T=T_sim, dt=time_step)` to set new simulation time and time step- To update an existing parameter dictionary, use `pars['New_para'] = value`Because `pars` is a dictionary, it can be passed to a function that requires individual parameters as arguments using `my_func(**pars)` syntax. Section 1.2: F-I curvesIn electrophysiology, a neuron is often characterized by its spike rate output in response to input currents. This is often called the **F-I** curve, denoting the output spike frequency (**F**) in response to different injected currents (**I**). We estimated this for an LIF neuron in yesterday's tutorial.The transfer function $F(\cdot)$ in Equation $1$ represents the gain of the population as a function of the total input. The gain is often modeled as a sigmoidal function, i.e., more input drive leads to a nonlinear increase in the population firing rate. The output firing rate will eventually saturate for high input values. A sigmoidal $F(\cdot)$ is parameterized by its gain $a$ and threshold $\theta$.$$ F(x;a,\theta) = \frac{1}{1+\text{e}^{-a(x-\theta)}} - \frac{1}{1+\text{e}^{a\theta}} \quad(2)$$The argument $x$ represents the input to the population. Note that the second term is chosen so that $F(0;a,\theta)=0$.Many other transfer functions (generally monotonic) can be also used. Examples are the rectified linear function $ReLU(x)$ or the hyperbolic tangent $tanh(x)$. Exercise 1: Implement F-I curve Let's first investigate the activation functions before simulating the dynamics of the entire population. In this exercise, you will implement a sigmoidal **F-I** curve or transfer function $F(x)$, with gain $a$ and threshold level $\theta$ as parameters.
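Before implementing $F$, here is a small illustration of the `my_func(**pars)` pattern mentioned above; `leak_term` is a hypothetical helper used only for this example, not part of the tutorial.

```python
# Sketch: how the parameter dictionary is unpacked into keyword arguments.
def leak_term(r, tau, **other_pars):   # extra dictionary entries land in **other_pars
  return -r / tau

pars = default_pars_single(T=10., dt=0.05)
print(pars['T'], pars['dt'])           # the overridden simulation parameters
print(leak_term(0.5, **pars))          # tau is picked out of the dictionary automatically
```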
###Code
def F(x, a, theta):
"""
Population activation function.
Args:
x (float): the population input
a (float): the gain of the function
theta (float): the threshold of the function
Returns:
float: the population activation response F(x) for input x
"""
#################################################
## TODO for students: compute f = F(x) ##
# Fill out function and remove
raise NotImplementedError("Student excercise: implement the f-I function")
#################################################
# Define the sigmoidal transfer function f = F(x)
f = ...
return f
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
# Uncomment below to test your function
# f = F(x, pars['a'], pars['theta'])
# plot_fI(x, f)
# to_remove solution
def F(x, a, theta):
"""
Population activation function.
Args:
x (float): the population input
a (float): the gain of the function
theta (float): the threshold of the function
Returns:
float: the population activation response F(x) for input x
"""
# Define the sigmoidal transfer function f = F(x)
f = (1 + np.exp(-a * (x - theta)))**-1 - (1 + np.exp(a * theta))**-1
return f
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
# Uncomment below to test your function
f = F(x, pars['a'], pars['theta'])
with plt.xkcd():
plot_fI(x, f)
###Output
_____no_output_____
###Markdown
Interactive Demo: Parameter exploration of F-I curveHere's an interactive demo that shows how the F-I curve changes for different values of the gain and threshold parameters. How do the gain and threshold parameters affect the F-I curve?
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
def interactive_plot_FI(a, theta):
"""
  Plot the F-I curve for the given gain and threshold.
  Args:
    a : the gain of the function
    theta : the threshold of the function
  Returns:
    A plot of the F-I curve with the given parameters
"""
# set the range of input
x = np.arange(0, 10, .1)
plt.figure()
plt.plot(x, F(x, a, theta), 'k')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('F(x)', fontsize=14)
plt.show()
_ = widgets.interact(interactive_plot_FI, a=(0.3, 3, 0.3), theta=(2, 4, 0.2))
# to_remove explanation
"""
Discussion:
For the function we have chosen to model the F-I curve (eq 2),
- a determines the slope (gain) of the rising phase of the F-I curve
- theta determines the input at which the function F(x) reaches its mid-value (0.5).
That is, theta shifts the F-I curve along the horizontal axis.
For our neurons we are using in this tutorial:
- a controls the gain of the neuron population
- theta controls the threshold at which the neuron population starts to respond
""";
###Output
_____no_output_____
###Markdown
Section 1.3: Simulation scheme of E dynamicsBecause $F(\cdot)$ is a nonlinear function, the exact solution of Equation $1$ can not be determined via analytical methods. Therefore, numerical methods must be used to find the solution. In practice, the derivative on the left-hand side of Equation $1$ can be approximated using the Euler method on a time-grid of stepsize $\Delta t$:\begin{align}&\frac{dr}{dt} \approx \frac{r[k+1]-r[k]}{\Delta t} \end{align}where $r[k] = r(k\Delta t)$. Thus,$$\Delta r[k] = \frac{\Delta t}{\tau}[-r[k] + F(w\cdot r[k] + I_{\text{ext}}[k];a,\theta)]$$Hence, Equation (1) is updated at each time step by:$$r[k+1] = r[k] + \Delta r[k]$$
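To see the update rule in isolation, here is a generic sketch (not the tutorial's simulator) applied to a linear equation whose exact solution is known; the constants are arbitrary.

```python
# Sketch: forward-Euler update x[k+1] = x[k] + dt/tau * (-x[k] + c) for tau*dx/dt = -x + c.
import numpy as np

tau, c, dt, T = 1.0, 2.0, 0.1, 5.0
t = np.arange(0, T, dt)
x = np.zeros_like(t)
for k in range(t.size - 1):
  x[k + 1] = x[k] + dt / tau * (-x[k] + c)
x_exact = c * (1 - np.exp(-t / tau))       # analytic solution with x(0) = 0
print(np.max(np.abs(x - x_exact)))         # small discretisation error
```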
###Code
# @markdown *Execute this cell to enable the single population rate model simulator: `simulate_single`*
def simulate_single(pars):
"""
Simulate an excitatory population of neurons
Args:
pars : Parameter dictionary
Returns:
rE : Activity of excitatory population (array)
Example:
pars = default_pars_single()
r = simulate_single(pars)
"""
# Set parameters
tau, a, theta = pars['tau'], pars['a'], pars['theta']
w = pars['w']
I_ext = pars['I_ext']
r_init = pars['r_init']
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
# Initialize activity
r = np.zeros(Lt)
r[0] = r_init
I_ext = I_ext * np.ones(Lt)
# Update the E activity
for k in range(Lt - 1):
dr = dt / tau * (-r[k] + F(w * r[k] + I_ext[k], a, theta))
r[k+1] = r[k] + dr
return r
help(simulate_single)
###Output
_____no_output_____
###Markdown
Interactive Demo: Parameter Exploration of single population dynamicsNote that $w=0$, as in the default setting, means no recurrent input to the neuron population in Equation (1). Hence, the dynamics are entirely determined by the external input $I_{\text{ext}}$. Explore these dynamics in this interactive demo.How does $r_{\text{sim}}(t)$ change with different $I_{\text{ext}}$ values? How does it change with different $\tau$ values? Investigate the relationship between $F(I_{\text{ext}}; a, \theta)$ and the steady value of $r(t)$. Note that, $r_{\rm ana}(t)$ denotes the analytical solution - you will learn how this is computed in the next section.
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
# get default parameters
pars = default_pars_single(T=20.)
def Myplot_E_diffI_difftau(I_ext, tau):
# set external input and time constant
pars['I_ext'] = I_ext
pars['tau'] = tau
# simulation
r = simulate_single(pars)
# Analytical Solution
r_ana = (pars['r_init']
+ (F(I_ext, pars['a'], pars['theta'])
- pars['r_init']) * (1. - np.exp(-pars['range_t'] / pars['tau'])))
# plot
plt.figure()
plt.plot(pars['range_t'], r, 'b', label=r'$r_{\mathrm{sim}}$(t)', alpha=0.5,
zorder=1)
plt.plot(pars['range_t'], r_ana, 'b--', lw=5, dashes=(2, 2),
label=r'$r_{\mathrm{ana}}$(t)', zorder=2)
plt.plot(pars['range_t'],
F(I_ext, pars['a'], pars['theta']) * np.ones(pars['range_t'].size),
'k--', label=r'$F(I_{\mathrm{ext}})$')
plt.xlabel('t (ms)', fontsize=16.)
plt.ylabel('Activity r(t)', fontsize=16.)
plt.legend(loc='best', fontsize=14.)
plt.show()
_ = widgets.interact(Myplot_E_diffI_difftau, I_ext=(0.0, 10., 1.),
tau=(1., 5., 0.2))
# to_remove explanation
"""
Discussion:
Given the choice of F-I curve (eq 2) and dynamics of the neuron population (eq. 1)
the neurons have two fixed points or steady-state responses irrespective of the input.
- Weak inputs to the neurons eventually result in the activity converging to zero
- Strong inputs to the neurons eventually result in the activity converging to max value
The time constant tau, does not affect the steady-state response but it determines
the time the neurons take to reach to their fixed point.
""";
###Output
_____no_output_____
###Markdown
Think!Above, we have numerically solved a system driven by a positive input. Yet, $r_E(t)$ either decays to zero or reaches a fixed non-zero value.- Why doesn't the solution of the system "explode" in a finite time? In other words, what guarantees that $r_E$(t) stays finite? - Which parameter would you change in order to increase the maximum value of the response?
###Code
# to_remove explanation
"""
Discussion:
1) As the F-I curve is bounded between zero and one, the system doesn't explode.
The F-I curve guarantees this property.
2) One way to increase the maximum response is to change the f-I curve. For
example, the ReLU is an unbounded function, and thus will increase the overall maximal
response of the network.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: Fixed points of the single population system
###Code
# @title Video 2: Fixed point
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Ox3ELd1UFyo", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
_____no_output_____
###Markdown
As you varied the two parameters in the last Interactive Demo, you noticed that, while at first the system output quickly changes, with time, it reaches its maximum/minimum value and does not change anymore. The value eventually reached by the system is called the **steady state** of the system, or the **fixed point**. Essentially, at a steady state the derivative of the activity ($r$) with respect to time is zero, i.e. $\displaystyle \frac{dr}{dt}=0$. We can find the steady state of Equation (1) by setting $\displaystyle{\frac{dr}{dt}=0}$ and solving for $r$:$$-r_{\text{steady}} + F(w\cdot r_{\text{steady}} + I_{\text{ext}};a,\theta) = 0, \qquad (3)$$When it exists, the solution of Equation (3) defines a **fixed point** of the dynamical system in Equation (1). Note that if $F(x)$ is nonlinear, it is not always possible to find an analytical solution, but the solution can be found via numerical simulations, as we will do later. From the Interactive Demo, one could also notice that the value of $\tau$ influences how quickly the activity converges to the steady state from its initial value. In the specific case of $w=0$, we can also analytically compute the solution of Equation (1) (i.e., the thick blue dashed line) and deduce the role of $\tau$ in determining the convergence to the fixed point: $$\displaystyle{r(t) = \big{[}F(I_{\text{ext}};a,\theta) -r(t=0)\big{]} (1-\text{e}^{-\frac{t}{\tau}})} + r(t=0)$$ We can now numerically calculate the fixed point with a root-finding algorithm. Exercise 2: Visualization of the fixed pointsWhen it is not possible to solve Equation (3) analytically, a graphical approach can be taken. To that end, it is useful to plot $\displaystyle{\frac{dr}{dt}}$ as a function of $r$. The values of $r$ for which the plotted function crosses zero on the y axis correspond to fixed points. Here, let us, for example, set $w=5.0$ and $I_{\text{ext}}=0.5$. From Equation (1), you can obtain$$\frac{dr}{dt} = [-r + F(w\cdot r + I_{\text{ext}})]\,/\,\tau $$Then, plot $dr/dt$ as a function of $r$, and check for the presence of fixed points.
###Code
def compute_drdt(r, I_ext, w, a, theta, tau, **other_pars):
"""Given parameters, compute dr/dt as a function of r.
Args:
r (1D array) : Average firing rate of the excitatory population
I_ext, w, a, theta, tau (numbers): Simulation parameters to use
other_pars : Other simulation parameters are unused by this function
Returns
drdt function for each value of r
"""
#########################################################################
# TODO compute drdt and disable the error
raise NotImplementedError("Finish the compute_drdt function")
#########################################################################
# Calculate drdt
drdt = ...
return drdt
# Define a vector of r values and the simulation parameters
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
# Uncomment to test your function
# drdt = compute_drdt(r, **pars)
# plot_dr_r(r, drdt)
# to_remove solution
def compute_drdt(r, I_ext, w, a, theta, tau, **other_pars):
"""Given parameters, compute dr/dt as a function of r.
Args:
r (1D array) : Average firing rate of the excitatory population
I_ext, w, a, theta, tau (numbers): Simulation parameters to use
other_pars : Other simulation parameters are unused by this function
Returns
drdt function for each value of r
"""
# Calculate drdt
drdt = (-r + F(w * r + I_ext, a, theta)) / tau
return drdt
# Define a vector of r values and the simulation parameters
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
drdt = compute_drdt(r, **pars)
with plt.xkcd():
plot_dr_r(r, drdt)
###Output
_____no_output_____
###Markdown
Exercise 3: Fixed point calculationWe will now find the fixed points numerically. To do so, we need to specify initial values ($r_{\text{guess}}$) for the root-finding algorithm to start from. From the line $\displaystyle{\frac{dr}{dt}}$ plotted above in Exercise 2, initial values can be chosen as a set of values close to where the line crosses zero on the y axis (i.e., close to the true fixed points).The next cell defines three helper functions that we will use:- `my_fp_single(r_guess, **pars)` uses a root-finding algorithm to locate a fixed point near a given initial value- `check_fp_single(x_fp, **pars)` verifies that the values of $r_{\rm fp}$ for which $\displaystyle{\frac{dr}{dt}} = 0$ are true fixed points- `my_fp_finder(r_guess_vector, **pars)` accepts an array of initial values and returns the corresponding fixed points, using the above two functions
###Code
# @markdown *Execute this cell to enable the fixed point functions*
def my_fp_single(r_guess, a, theta, w, I_ext, **other_pars):
"""
Calculate the fixed point through drE/dt=0
Args:
r_guess : Initial value used for scipy.optimize function
a, theta, w, I_ext : simulation parameters
Returns:
x_fp : value of fixed point
"""
  # define the right-hand side of the E dynamics
def my_WCr(x):
r = x
drdt = (-r + F(w * r + I_ext, a, theta))
y = np.array(drdt)
return y
x0 = np.array(r_guess)
x_fp = opt.root(my_WCr, x0).x.item()
return x_fp
def check_fp_single(x_fp, a, theta, w, I_ext, mytol=1e-4, **other_pars):
"""
Verify |dr/dt| < mytol
Args:
fp : value of fixed point
a, theta, w, I_ext: simulation parameters
mytol : tolerance, default as 10^{-4}
Returns :
Whether it is a correct fixed point: True/False
"""
# calculate Equation(3)
y = x_fp - F(w * x_fp + I_ext, a, theta)
# Here we set tolerance as 10^{-4}
return np.abs(y) < mytol
def my_fp_finder(pars, r_guess_vector, mytol=1e-4):
"""
Calculate the fixed point(s) through drE/dt=0
Args:
pars : Parameter dictionary
r_guess_vector : Initial values used for scipy.optimize function
mytol : tolerance for checking fixed point, default as 10^{-4}
Returns:
x_fps : values of fixed points
"""
x_fps = []
correct_fps = []
for r_guess in r_guess_vector:
x_fp = my_fp_single(r_guess, **pars)
if check_fp_single(x_fp, **pars, mytol=mytol):
x_fps.append(x_fp)
return x_fps
help(my_fp_finder)
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
drdt = compute_drdt(r, **pars)
#############################################################################
# TODO for students:
# Define initial values close to the intersections of drdt and y=0
# (How many initial values? Hint: How many times do the two lines intersect?)
# Calculate the fixed point with these initial values and plot them
#############################################################################
r_guess_vector = [...]
# Uncomment to test your values
# x_fps = my_fp_finder(pars, r_guess_vector)
# plot_dr_r(r, drdt, x_fps)
# to_remove solution
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
drdt = compute_drdt(r, **pars)
r_guess_vector = [0, .4, .9]
x_fps = my_fp_finder(pars, r_guess_vector)
with plt.xkcd():
plot_dr_r(r, drdt, x_fps)
###Output
_____no_output_____
###Markdown
Interactive Demo: fixed points as a function of recurrent and external inputs.You can now explore how the previous plot changes when the recurrent coupling $w$ and the external input $I_{\text{ext}}$ take different values. How does the number of fixed points change?
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
def plot_intersection_single(w, I_ext):
# set your parameters
pars = default_pars_single(w=w, I_ext=I_ext)
# find fixed points
r_init_vector = [0, .4, .9]
x_fps = my_fp_finder(pars, r_init_vector)
# plot
r = np.linspace(0, 1., 1000)
drdt = (-r + F(w * r + I_ext, pars['a'], pars['theta'])) / pars['tau']
plot_dr_r(r, drdt, x_fps)
_ = widgets.interact(plot_intersection_single, w=(1, 7, 0.2),
I_ext=(0, 3, 0.1))
# to_remove explanation
"""
Discussion:
The fixed points of the single excitatory neuron population are determined by both
recurrent connections w and external input I_ext. In a previous interactive demo
we saw how the system showed two different steady-states when w = 0. But when w
does not equal 0, for some range of w the system shows three fixed points (the middle
one being unstable), and the steady state depends on the initial conditions (i.e.,
r at time zero).
More on this will be explained in the next section.
""";
###Output
_____no_output_____
###Markdown
--- SummaryIn this tutorial, we have investigated the dynamics of a rate-based single population of neurons.We learned about:- The effect of the input parameters and the time constant of the network on the dynamics of the population.- How to find the fixed point(s) of the system.Next, we have two bonus (but important) concepts in dynamical systems analysis and simulation. If you have time left, watch the next video and proceed to solve the exercises. You will learn:- How to determine the stability of a fixed point by linearizing the system.- How to add realistic inputs to our model. --- Bonus 1: Stability of a fixed point
###Code
# @title Video 3: Stability of fixed points
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="KKMlWWU83Jg", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
_____no_output_____
###Markdown
Initial values and trajectoriesHere, let us first set $w=5.0$ and $I_{\text{ext}}=0.5$, and investigate the dynamics of $r(t)$ starting with different initial values $r(0) \equiv r_{\text{init}}$. We will plot the trajectories of $r(t)$ with $r_{\text{init}} = 0.0, 0.1, 0.2,..., 0.9$.
###Code
# @markdown Execute this cell to see the trajectories!
pars = default_pars_single()
pars['w'] = 5.0
pars['I_ext'] = 0.5
plt.figure(figsize=(8, 5))
for ie in range(10):
pars['r_init'] = 0.1 * ie # set the initial value
r = simulate_single(pars) # run the simulation
# plot the activity with given initial
plt.plot(pars['range_t'], r, 'b', alpha=0.1 + 0.1 * ie,
label=r'r$_{\mathrm{init}}$=%.1f' % (0.1 * ie))
plt.xlabel('t (ms)')
plt.title('Two steady states?')
plt.ylabel(r'$r$(t)')
plt.legend(loc=[1.01, -0.06], fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
Interactive Demo: dynamics as a function of the initial valueLet's now set $r_{\rm init}$ to a value of your choice in this demo. How does the solution change? What do you observe?
###Code
# @title
# @markdown Make sure you execute this cell to enable the widget!
pars = default_pars_single(w=5.0, I_ext=0.5)
def plot_single_diffEinit(r_init):
pars['r_init'] = r_init
r = simulate_single(pars)
plt.figure()
plt.plot(pars['range_t'], r, 'b', zorder=1)
plt.plot(0, r[0], 'bo', alpha=0.7, zorder=2)
plt.xlabel('t (ms)', fontsize=16)
plt.ylabel(r'$r(t)$', fontsize=16)
plt.ylim(0, 1.0)
plt.show()
_ = widgets.interact(plot_single_diffEinit, r_init=(0, 1, 0.02))
# to_remove explanation
"""
Discussion:
To better appreciate what is happening here, you should go back to the previous
interactive demo. Set w = 5 and I_ext = 0.5.
You will find that there are three fixed points of the system for these values of
w and I_ext. Now, choose the initial value in this demo and see in which direction
the system output moves. When r_init is in the vicinity of the leftmost fixed point,
it moves towards the leftmost fixed point. When r_init is in the vicinity of the
rightmost fixed point, it moves towards the rightmost fixed point.
""";
###Output
_____no_output_____
###Markdown
Stability analysis via linearization of the dynamicsJust like Equation $1$ in the case ($w=0$) discussed above, a generic linear system $$\frac{dx}{dt} = \lambda (x - b),$$ has a fixed point for $x=b$. The analytical solution of such a system can be found to be:$$x(t) = b + \big{(} x(0) - b \big{)} \text{e}^{\lambda t}.$$ Now consider a small perturbation of the activity around the fixed point: $x(0) = b+ \epsilon$, where $|\epsilon| \ll 1$. Will the perturbation $\epsilon(t)$ grow with time or will it decay to the fixed point? The evolution of the perturbation with time can be written, using the analytical solution for $x(t)$, as: $$\epsilon (t) = x(t) - b = \epsilon \text{e}^{\lambda t}$$- if $\lambda < 0$, $\epsilon(t)$ decays to zero, $x(t)$ will still converge to $b$ and the fixed point is "**stable**".- if $\lambda > 0$, $\epsilon(t)$ grows with time, $x(t)$ will leave the fixed point $b$ exponentially, and the fixed point is, therefore, "**unstable**" . Compute the stability of Equation $1$Similar to what we did in the linear system above, in order to determine the stability of a fixed point $r^{*}$ of the excitatory population dynamics, we perturb Equation (1) around $r^{*}$ by $\epsilon$, i.e. $r = r^{*} + \epsilon$. We can plug in Equation (1) and obtain the equation determining the time evolution of the perturbation $\epsilon(t)$:\begin{align}\tau \frac{d\epsilon}{dt} \approx -\epsilon + w F'(w\cdot r^{*} + I_{\text{ext}};a,\theta) \epsilon \end{align}where $F'(\cdot)$ is the derivative of the transfer function $F(\cdot)$. We can rewrite the above equation as:\begin{align}\frac{d\epsilon}{dt} \approx \frac{\epsilon}{\tau }[-1 + w F'(w\cdot r^* + I_{\text{ext}};a,\theta)] \end{align}That is, as in the linear system above, the value of$$\lambda = [-1+ wF'(w\cdot r^* + I_{\text{ext}};a,\theta)]/\tau \qquad (4)$$determines whether the perturbation will grow or decay to zero, i.e., $\lambda$ defines the stability of the fixed point. This value is called the **eigenvalue** of the dynamical system. Exercise 4: Compute $dF$The derivative of the sigmoid transfer function is:\begin{align} \frac{dF}{dx} & = \frac{d}{dx} (1+\exp\{-a(x-\theta)\})^{-1} \\& = a\exp\{-a(x-\theta)\} (1+\exp\{-a(x-\theta)\})^{-2}. \qquad (5)\end{align}Let's now find the expression for the derivative $\displaystyle{\frac{dF}{dx}}$ in the following cell and plot it.
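If you want to double-check the algebra in Equation (5) before implementing it, the optional sketch below (assuming the SymPy package is available in your environment) differentiates the $x$-dependent part of Equation (2); the constant term does not depend on $x$, so it contributes nothing to the derivative.

```python
# Optional sketch: symbolic check of Eq. (5) with SymPy (assumed to be installed).
import sympy as sp

x, a, theta = sp.symbols('x a theta', positive=True)
F_sigmoid = 1 / (1 + sp.exp(-a * (x - theta)))   # x-dependent part of Eq. (2)
print(sp.diff(F_sigmoid, x))                     # algebraically equivalent to Eq. (5)
```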
###Code
def dF(x, a, theta):
"""
  Derivative of the population activation function.
Args:
x : the population input
a : the gain of the function
theta : the threshold of the function
Returns:
    dFdx : the derivative of the population activation function at x
"""
###########################################################################
# TODO for students: compute dFdx ##
raise NotImplementedError("Student excercise: compute the deravitive of F")
###########################################################################
# Calculate the population activation
dFdx = ...
return dFdx
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
# Uncomment below to test your function
# df = dF(x, pars['a'], pars['theta'])
# plot_dFdt(x, df)
# to_remove solution
def dF(x, a, theta):
"""
  Derivative of the population activation function.
Args:
x : the population input
a : the gain of the function
theta : the threshold of the function
Returns:
    dFdx : the derivative of the population activation function at x
"""
# Calculate the population activation
dFdx = a * np.exp(-a * (x - theta)) * (1 + np.exp(-a * (x - theta)))**-2
return dFdx
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
df = dF(x, pars['a'], pars['theta'])
with plt.xkcd():
plot_dFdt(x, df)
###Output
_____no_output_____
###Markdown
Exercise 5: Compute eigenvaluesAs discussed above, for the case with $w=5.0$ and $I_{\text{ext}}=0.5$, the system displays **three** fixed points. However, when we simulated the dynamics and varied the initial conditions $r_{\rm init}$, we could only obtain **two** steady states. In this exercise, we will now check the stability of each of the three fixed points by calculating the corresponding eigenvalues with the function `eig_single`. Check the sign of each eigenvalue (i.e., stability of each fixed point). How many of the fixed points are stable?Note that the expression of the eigenvalue at fixed point $r^*$$$\lambda = [-1+ wF'(w\cdot r^* + I_{\text{ext}};a,\theta)]/\tau$$
###Code
def eig_single(fp, tau, a, theta, w, I_ext, **other_pars):
"""
Args:
fp : fixed point r_fp
tau, a, theta, w, I_ext : Simulation parameters
Returns:
    eig : eigenvalue of the linearized system
"""
#####################################################################
## TODO for students: compute eigenvalue and disable the error
raise NotImplementedError("Student excercise: compute the eigenvalue")
######################################################################
# Compute the eigenvalue
eig = ...
return eig
# Find the eigenvalues for all fixed points of Exercise 2
pars = default_pars_single(w=5, I_ext=.5)
r_guess_vector = [0, .4, .9]
x_fp = my_fp_finder(pars, r_guess_vector)
# Uncomment below lines after completing the eig_single function.
# for fp in x_fp:
# eig_fp = eig_single(fp, **pars)
# print(f'Fixed point1 at {fp:.3f} with Eigenvalue={eig_fp:.3f}')
###Output
_____no_output_____
###Markdown
**SAMPLE OUTPUT**```Fixed point1 at 0.042 with Eigenvalue=-0.583Fixed point2 at 0.447 with Eigenvalue=0.498Fixed point3 at 0.900 with Eigenvalue=-0.626```
###Code
# to_remove solution
def eig_single(fp, tau, a, theta, w, I_ext, **other_pars):
"""
Args:
fp : fixed point r_fp
tau, a, theta, w, I_ext : Simulation parameters
Returns:
    eig : eigenvalue of the linearized system
"""
# Compute the eigenvalue
eig = (-1. + w * dF(w * fp + I_ext, a, theta)) / tau
return eig
# Find the eigenvalues for all fixed points of Exercise 2
pars = default_pars_single(w=5, I_ext=.5)
r_guess_vector = [0, .4, .9]
x_fp = my_fp_finder(pars, r_guess_vector)
for i, fp in enumerate(x_fp):
  eig_fp = eig_single(fp, **pars)
  print(f'Fixed point{i + 1} at {fp:.3f} with Eigenvalue={eig_fp:.3f}')
###Output
_____no_output_____
###Markdown
Think! Throughout the tutorial, we have assumed $w> 0 $, i.e., we considered a single population of **excitatory** neurons. What do you think will be the behavior of a population of inhibitory neurons, i.e., where $w> 0$ is replaced by $w< 0$?
###Code
# to_remove explanation
"""
Discussion:
You can check this by going back to the second-to-last interactive demo and setting the
weight to w < 0. You will notice that the system has only one fixed point and that
is at zero value. For this particular dynamics, the system will eventually converge
to zero. But try it out.
""";
###Output
_____no_output_____
###Markdown
--- Bonus 2: Noisy input drives the transition between two stable states Ornstein-Uhlenbeck (OU) processAs discussed in several previous tutorials, the OU process is usually used to generate a noisy input into the neuron. The OU input $\eta(t)$ follows: $$\tau_\eta \frac{d}{dt}\eta(t) = -\eta (t) + \sigma_\eta\sqrt{2\tau_\eta}\xi(t)$$Execute the following function `my_OU(pars, sig, myseed=False)` to generate an OU process.
###Code
# @title OU process `my_OU(pars, sig, myseed=False)`
# @markdown Make sure you execute this cell to visualize the noise!
def my_OU(pars, sig, myseed=False):
"""
  A function that generates an Ornstein-Uhlenbeck process
Args:
pars : parameter dictionary
    sig : noise amplitude
myseed : random seed. int or boolean
Returns:
I : Ornstein-Uhlenbeck input current
"""
# Retrieve simulation parameters
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
tau_ou = pars['tau_ou'] # [ms]
# set random seed
if myseed:
np.random.seed(seed=myseed)
else:
np.random.seed()
# Initialize
noise = np.random.randn(Lt)
I_ou = np.zeros(Lt)
I_ou[0] = noise[0] * sig
# generate OU
for it in range(Lt - 1):
I_ou[it + 1] = (I_ou[it]
+ dt / tau_ou * (0. - I_ou[it])
+ np.sqrt(2 * dt / tau_ou) * sig * noise[it + 1])
return I_ou
pars = default_pars_single(T=100)
pars['tau_ou'] = 1. # [ms]
sig_ou = 0.1
I_ou = my_OU(pars, sig=sig_ou, myseed=2020)
plt.figure(figsize=(10, 4))
plt.plot(pars['range_t'], I_ou, 'r')
plt.xlabel('t (ms)')
plt.ylabel(r'$I_{\mathrm{OU}}$')
plt.show()
###Output
_____no_output_____
###Markdown
Example: Up-Down transitionIn the presence of two or more fixed points, noisy inputs can drive a transition between the fixed points! Here, we stimulate an E population for 1,000 ms applying OU inputs.
###Code
# @title Simulation of an E population with OU inputs
# @markdown Make sure you execute this cell to spot the Up-Down states!
pars = default_pars_single(T=1000)
pars['w'] = 5.0
sig_ou = 0.7
pars['tau_ou'] = 1. # [ms]
pars['I_ext'] = 0.56 + my_OU(pars, sig=sig_ou, myseed=2020)
r = simulate_single(pars)
plt.figure(figsize=(10, 4))
plt.plot(pars['range_t'], r, 'b', alpha=0.8)
plt.xlabel('t (ms)')
plt.ylabel(r'$r(t)$')
plt.show()
###Output
_____no_output_____ |
CourseContent/Notebooks/Mapping/9 - Himalaya region maps and images.ipynb | ###Markdown
Figures 2a, 2b and 3 from my SIAM News publication in 2015: http://www.moresi.info/posts/Computational-Challenges-SIAM-NEWS/Caption, Figure 2 - _One of the most dramatic departures from plate-like deformation on Earth occurs where the Indian subcontinent is colliding with the Eurasian continent. The map on the left is a satellite image with the flow lines from the plate motion vector field drawn in red. On the right is the same region showing 50 years of earthquake data for events larger than magnitude 4.5, colored by depth and superimposed on the strain rate._Caption, Figure 3 - _A low-angle view of a numerical model of continental collision using the Underworld particle-in-cell finite element code. **The map (1) shows the how to interpret the model in terms of the India-Eurasia collision.** In the movie, the (Indian) indentor heads towards the viewer and crumples the crust into a mountain belt in the foreground. In the background, the crust escapes away from the viewer pulled by the subduction zone in the background. Snapshots from the movie: (2), pre-collision and (3), late in the collision._
###Code
%%sh
# This will run notebook 0 to download all the required data for this example.
# Should be pretty quick since most data will be cached and skipped / copied.
runipy '0 - Preliminaries.ipynb' --quiet
%pylab inline
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
from cartopy.io import PostprocessedRasterSource, LocatedImage
from cartopy.io import srtm
from cartopy.io.srtm import SRTM3Source
import cartopy.feature as cfeature
from osgeo import gdal
from osgeo import gdal_array
import scipy.ndimage
import scipy.misc
# The colormap routine creates enormous arrays in intermediary calculations. This is
# a way to avoid memory errors: process to RGB (int8) in advance
def apply_colormap_to_image(rawimage, colormap, norm):
greyimage = norm(rawimage)
rgbimage = np.empty((greyimage.shape[0], greyimage.shape[1] , 4), dtype=uint8)
for i in range(0, greyimage.shape[0]):
rgbimage[i,:,:] = colormap(greyimage[i,:]) * 256
rgbimage2 = rgbimage[:,:,0:3]
return rgbimage2
base_projection = ccrs.PlateCarree()
global_extent = [-180.0, 180.0, -90.0, 90.0]
etopo1 = gdal.Open("Resources/color_etopo1_ice_low.tif")
etopo_img = etopo1.ReadAsArray().transpose(1,2,0)
del(etopo1)
# Height field only ...
etopoH = gdal.Open("Resources/ETOPO1_Ice_c_geotiff.tif")
etopoH_img = etopoH.ReadAsArray()[::2,::2].astype(numpy.float16)
del(etopoH)
colormap = plt.get_cmap('Greys_r')
norm = matplotlib.colors.Normalize(vmin=-5000, vmax=7500)
etopoH_img_grey = apply_colormap_to_image(etopoH_img, colormap, norm)
strainrate_extent=[-180,180,-68,80]
strainrate = numpy.loadtxt("Resources/sec_invariant_strain_0.2.dat")
strainrate_data = strainrate.reshape(741,1800,3) # I had to look at the data to work this out !
globalrelief = gdal.Open("Resources/HYP_50M_SR_W/HYP_50M_SR_W.tif")
globalrelief_img = globalrelief.ReadAsArray().transpose(1,2,0)
del(globalrelief)
globalbathym = gdal.Open("Resources/OB_50M/OB_50M.tif")
globalbathym_img = globalbathym.ReadAsArray().transpose(1,2,0)
del(globalbathym)
print "etopoH_img - ", etopoH_img.shape
print "globalrelief_img - ", globalrelief_img.shape
## If the shapes are different then see the cell below for a way to fix it.
blended_img = np.empty_like(globalrelief_img)
blended_img[...,0] = np.where( etopoH_img < 0.0, globalbathym_img[...,0], globalrelief_img[...,0] )
blended_img[...,1] = np.where( etopoH_img < 0.0, globalbathym_img[...,1], globalrelief_img[...,1] )
blended_img[...,2] = np.where( etopoH_img < 0.0, globalbathym_img[...,2], globalrelief_img[...,2] )
# Clean up ... we'll just keep the int8 rgb versions for plotting
del(globalbathym_img)
del(globalrelief_img)
del(etopoH_img)
# Do this if the relief / bathym sizes don't match the etopo data (to make the blended image)
# The datasets we downloaded can be manipulated trivially without the need for this and I have
# commented it all out so you can run all cells without reprocessing the data files.
"""
import scipy.ndimage
import scipy.misc
etopoH = gdal.Open("Resources/ETOPO1_Ice_g_geotiff.tif")
etopoH_img = etopoH.ReadAsArray()
print
etopoH_transform = etopoH.GetGeoTransform()
globalrelief_transform = globalrelief.GetGeoTransform()
# Resize to match globalrelief ... this resize is int only ??
globaletopoH = scipy.misc.imresize(etopoH_img, globalrelief_img.shape, mode='F')
## How to turn this array back into the appropriate geotiff
from osgeo import gdal
from osgeo import osr
# data exists in 'ary' with values range 0 - 255
# Uncomment the next line if ary[0][0] is upper-left corner
#ary = numpy.flipup(ary)
Ny, Nx = globaletopoH.shape
driver = gdal.GetDriverByName("GTiff")
# Final argument is optional but will produce much smaller output file
ds = driver.Create('output.tif', Nx, Ny, 1, gdal.GDT_Float64, ['COMPRESS=LZW'])
# this assumes the projection is Geographic lat/lon WGS 84
srs = osr.SpatialReference()
srs.ImportFromEPSG(4326)
ds.SetProjection(srs.ExportToWkt())
ds.SetGeoTransform( globalrelief_transform ) # define GeoTransform tuple
ds.GetRasterBand(1).WriteArray(globaletopoH)
ds = None
"""
pass
# img_heights = etopo_img.reshape(-1)
base_projection = ccrs.PlateCarree()
global_extent = [ -180, 180, -90, 90 ]
coastline = cfeature.NaturalEarthFeature('physical', 'coastline', '50m',
edgecolor=(0.0,0.0,0.0),
facecolor="none")
rivers = cfeature.NaturalEarthFeature('physical', 'rivers_lake_centerlines', '50m',
edgecolor='Blue', facecolor="none")
lakes = cfeature.NaturalEarthFeature('physical', 'lakes', '50m',
edgecolor="blue", facecolor="blue")
ocean = cfeature.NaturalEarthFeature('physical', 'ocean', '50m',
edgecolor="green",
facecolor="blue")
graticules_5 = cfeature.NaturalEarthFeature('physical', 'graticules_5', '10m',
edgecolor="black", facecolor=None)
from obspy.core import event
from obspy.fdsn import Client
from obspy import UTCDateTime
client = Client("IRIS")
himalaya_extent = [65, 110, 5, 45 ]
starttime = UTCDateTime("1965-01-01")
endtime = UTCDateTime("2016-01-01")
cat = client.get_events(starttime=starttime, endtime=endtime,
minlongitude=himalaya_extent[0],
maxlongitude=himalaya_extent[1],
minlatitude=himalaya_extent[2],
maxlatitude=himalaya_extent[3],
minmagnitude=4.5, catalog="ISC")
print cat.count(), " events in catalogue"
# Unpack the obspy data into a plottable array
event_count = cat.count()
eq_origins = np.zeros((event_count, 5))
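# columns: longitude, latitude, depth (m), magnitude, year of the event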
for ev, event in enumerate(cat.events):
eq_origins[ev,0] = dict(event.origins[0])['longitude']
eq_origins[ev,1] = dict(event.origins[0])['latitude']
eq_origins[ev,2] = dict(event.origins[0])['depth']
eq_origins[ev,3] = dict(event.magnitudes[0])['mag']
eq_origins[ev,4] = (dict(event.origins[0])['time']).date.year
from netCDF4 import Dataset
# rootgrp = Dataset("Resources/velocity_EU.nc", "r", format="NETCDF4")
rootgrp = Dataset("Resources/velocity_EU.nc", "r", format="NETCDF4")
ve = rootgrp.variables["ve"]
vn = rootgrp.variables["vn"]
lonv = rootgrp.variables["lon"]
latv = rootgrp.variables["lat"]
lons = lonv[::1]
lats = latv[::1]
llX, llY = np.meshgrid(lons,lats)
#llX = llX.reshape(-1)
#llY = llY.reshape(-1)
Veast = (np.array(ve[::1,::1]).T)
Vnorth = (np.array(vn[::1,::1]).T)
Vorientation = np.arctan2(Vnorth,Veast)
Vspeed = np.sqrt(Veast**2 + Vnorth**2)
## Figure 2a is a land / ocean image with coastlines and rivers over the top.
## The red / grey lines are streamlines of the plate motion data which show trajectories in
## a way which is not as intrusive as a bunch of arrows.
from matplotlib.transforms import offset_copy
import cartopy.crs as ccrs
import cartopy.io.img_tiles as cimgt
import gdal
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import matplotlib.ticker as mticker
# Map / Image Tile machinery
map_quest_aerial = cimgt.MapQuestOpenAerial()
mapbox_satellite = cimgt.MapboxTiles(map_id='mapbox.satellite',
access_token='pk.eyJ1IjoibG91aXNtb3Jlc2kiLCJhIjoiY2lqM2liajRnMDA1d3Zia254c3d0aWNzOCJ9.FO_VUuxm9tHhzlffiKwcig')
# Choose one of the services above. I used map_tiles = mapbox_satellite for the published
# plot, but this does require registration here: https://www.mapbox.com/help/create-api-access-token/
# The map_quest image does not seem to be registered correctly with the coastline so I am probably doing something
# wrong. But the mapbox one looks perfectly fine !
map_tiles = map_quest_aerial
fig = plt.figure(figsize=(12, 12), facecolor="none")
# ax = plt.axes(projection=ccrs.PlateCarree(), extent=himalaya_extent)
# Create a GeoAxes in the tile's projection.
ax = plt.axes(projection=map_tiles.crs)
# Limit the extent of the map to a small longitude/latitude range.
ax.set_extent(himalaya_extent)
# Add the MapQuest data at zoom level 8.
ax.streamplot(lons, lats, Veast, Vnorth, linewidth=0.25, color='black',
cmap=cm.gray_r, density=5.0, transform=ccrs.PlateCarree(), zorder=0, arrowstyle='-')
ax.add_image(map_tiles, 6, alpha=0.85, zorder=2)
streamlines = ax.streamplot(lons, lats, Veast, Vnorth, linewidth=1+Vspeed*0.05, color='#883300', cmap=cm.Reds_r,
transform=ccrs.PlateCarree(), zorder=4)
streamlines.lines.set_alpha(0.5)
ax.add_feature(coastline, linewidth=1.5, edgecolor="White", zorder=10)
ax.add_feature(rivers, linewidth=1.0, edgecolor="#0077FF", zorder=13)
ax.add_feature(rivers, linewidth=3.0, edgecolor="#002299", zorder=12, alpha=0.5)
ax.add_feature(lakes, linewidth=0, edgecolor="Blue", facecolor="#4477FF", zorder=11, alpha=0.5)
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,
linewidth=0.5, color='#222222', alpha=1.0, linestyle=':' )
gl.xlabels_top = False
gl.ylabels_right = False
# gl.xlines = False
# gl.xlines = False
gl.xlocator = mticker.FixedLocator(np.arange(65,110,5))
gl.ylocator = mticker.FixedLocator(np.arange(5,45,5))
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
gl.xlabel_style = {'size': 18, 'color': 'black'}
gl.ylabel_style = {'size': 18, 'color': 'black'}
fig.savefig("HimalayaRivers.png", dpi=300)
## This is figure 2b ... greyscale topography and bathymetry with strain rate contours and
## earthquake hypocentres plotted on top
import gdal
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import matplotlib.ticker as mticker
fig = plt.figure(figsize=(12, 12), facecolor="none")
ax = plt.axes(projection=ccrs.PlateCarree(), extent=himalaya_extent)
ax.imshow(etopoH_img_grey, transform=ccrs.PlateCarree(), origin="upper",
alpha=1.0, extent=global_extent, interpolation="spline16", zorder=1)
mappable2 = ax.contourf(strainrate_data[:,:,0], strainrate_data[:,:,1], strainrate_data[:,:,2],
levels=[ 25, 50, 75 , 100, 150, 200, 250, 300 ], linestyle=None, vmin=5.0, vmax=300,
transform=base_projection, cmap=cm.OrRd_r, alpha=0.95, linewidth=2.0,
extent=strainrate_extent, extend="max", zorder=12)
# plt.colorbar(mappable=mappable2)
ax.add_feature(coastline, linewidth=1.5, edgecolor="Black", zorder=10)
# ax.add_feature(rivers, linewidth=1, edgecolor="Blue", zorder=12)
# ax.add_feature(lakes, linewidth=1, edgecolor="Blue", zorder=13, alpha=0.25)
# ax.add_feature(graticules_5, linewidth=0.5, linestyle=":", edgecolor="gray", zorder=4, alpha=0.75)
# ax.add_feature(ocean, facecolor=(0.4,0.4,0.6), edgecolor=(0.0,0.0,0.0), linewidth=1, alpha=0.75, zorder=4)
depth_scale = ax.scatter(eq_origins[:,0], eq_origins[:,1], 50.0*(eq_origins[:,3]-4.5), c=eq_origins[:,2], marker='o',
cmap=cm.Blues_r, vmin=35000, vmax=100000, alpha = 0.85, linewidth=0.5, zorder=20)
# plt.colorbar(mappable=depth_scale)
## Labels
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,
linewidth=0.5, color='#222222', alpha=1.0, linestyle=':' )
gl.xlabels_top = False
gl.ylabels_left = False
# gl.xlines = False
# gl.xlines = False
gl.xlocator = mticker.FixedLocator(np.arange(65,110,5))
gl.ylocator = mticker.FixedLocator(np.arange(5,45,5))
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
gl.xlabel_style = {'size': 18, 'color': 'black'}
gl.ylabel_style = {'size': 18, 'color': 'black'}
## Legendary stuff
# For the published figure, I used these circles to give me the scale and colour
# but made my own legend in a drawing program
mag4_color = cm.Blues( 1.0 )
mag4_dot35km = ax.scatter(66.0, 6.0, 50.0*(4.6-4.5), marker='o', color=mag4_color,
vmin=30000, vmax=100000, alpha = 0.85, linewidth=0.5, zorder=21)
mag5_color = cm.Blues( 1.0- (50 - 30) / 70 )
mag5_dot50km = ax.scatter(66.0, 7.0, 50.0*(5.0-4.5), marker='o', color=mag5_color,
vmin=30000, vmax=100000, alpha = 0.85, linewidth=0.5, zorder=21)
mag6_color = cm.Blues(1.0- (70 - 30) / 70 )
mag6_dot70km = ax.scatter(66.0, 8.0, 50.0*(6.0-4.5), marker='o', color=mag6_color,
vmin=30000, vmax=100000, alpha = 0.85, linewidth=0.5, zorder=21)
mag7_color = cm.Blues( 0.0 )
mag7_dot100km = ax.scatter(66.0, 9.0, 50.0*(7.0-4.5), marker='o', color=mag7_color,
vmin=30000, vmax=100000, alpha = 0.85, linewidth=0.5, zorder=21)
fig.savefig("HimalayaEQ.png", dpi=300)
## Figure 3a is the regional setting which is used as a base to draw an
## interpretation of some 3D models.
himalaya_region_extent = [ 55 , 135, -20, 45 ]
map_tiles = mapbox_satellite
map_tiles = map_quest_aerial
fig = plt.figure(figsize=(12, 12), facecolor="none")
# ax = plt.axes(projection=ccrs.PlateCarree(), extent=himalaya_extent)
# Create a GeoAxes in the tile's projection.
ax = plt.axes(projection=map_tiles.crs)
# Limit the extent of the map
ax.set_extent(himalaya_region_extent)
ax.add_image(map_tiles, 5, alpha=0.45, zorder=2)
ax.add_feature(coastline, linewidth=1.5, edgecolor="Black", zorder=1)
fig.savefig("HimalayaRegionalMap.png", dpi=300)
## An alternative to Figure 2a !
theCM = cm.get_cmap('Oranges')
theCM._init()
alphas = np.abs(np.linspace(0.0, 1.0, theCM.N))
theCM._lut[:-3,-1] = alphas**0.25
fig = plt.figure(figsize=(12, 12), facecolor="none")
ax = plt.axes(projection=ccrs.PlateCarree(), extent=himalaya_extent)
# plt.imshow(strainrate_img, cmap=theCM, origin="lower", interpolation="spline16", extent=strainrate_extent,
# vmin=-1, vmax=100)
plt.imshow(etopo_img, transform=ccrs.PlateCarree(), extent=[-180,180,-90,90], alpha=0.5)
ax.contourf(strainrate_data[:,:,0], strainrate_data[:,:,1], strainrate_data[:,:,2],
levels=[20,30,40,50], linewidth=0.0, vmin=20.0, vmax=200,
transform=base_projection, cmap="YlOrRd", zorder=2, alpha=0.25, extent=strainrate_extent)
ax.contourf(strainrate_data[:,:,0], strainrate_data[:,:,1], strainrate_data[:,:,2],
levels=[60,70,80,90,100, 200], linewidth=0.0, vmin=20.0, vmax=200,
transform=base_projection, cmap="YlOrRd", zorder=2, alpha=0.5, extent=strainrate_extent)
ax.add_feature(coastline, linewidth=1.5, edgecolor="Black", zorder=1)
ax.add_feature(rivers, linewidth=1, edgecolor="Blue", zorder=2)
ax.add_feature(lakes, linewidth=1, edgecolor="Blue", zorder=3, alpha=0.25)
###Output
_____no_output_____ |
tutorial_visium_hne.ipynb | ###Markdown
Analyze Visium H&E data=======================This tutorial shows how to apply Squidpy for the analysis of Visiumspatial transcriptomics data.The dataset used here consists of a Visium slide of a coronal section ofthe mouse brain. The original dataset is publicly available at the 10xgenomics [datasetportal](https://support.10xgenomics.com/spatial-gene-expression/datasets). Here, we provide a pre-processed dataset, with pre-annotated clusters,in AnnData format and the tissue image in `squidpy.im.ImageContainer`format.A couple of notes on pre-processing:- The pre-processing pipeline is the same as the one shown in the original [Scanpy tutorial](https://scanpy-tutorials.readthedocs.io/en/latest/spatial/basic-analysis.html) .- The cluster annotation was performed using several resources, such as the [Allen Brain Atlas](http://mouse.brain-map.org/experiment/thumbnails/100048576?image_type=atlas) , the [Mouse Brain gene expression atlas](http://mousebrain.org/genesearch.html) from the Linnarson lab and this recent [pre-print](https://www.biorxiv.org/content/10.1101/2020.07.24.219758v1) .::: {.seealso}See `sphx_glr_auto_tutorials_tutorial_visium_fluo.py` for a detailedanalysis example of image features.:::Import packages & data----------------------To run the notebook locally, create a conda environment as *conda envcreate -f environment.yml* using this[environment.yml](https://github.com/theislab/squidpy_notebooks/blob/master/environment.yml)
###Code
import scanpy as sc
import anndata as ad
import squidpy as sq
import numpy as np
import pandas as pd
sc.logging.print_header()
print(f"squidpy=={sq.__version__}")
# load the pre-processed dataset
img = sq.datasets.visium_hne_image()
adata = sq.datasets.visium_hne_adata()
###Output
_____no_output_____
###Markdown
First, let's visualize cluster annotation in spatial context with `scanpy.pl.spatial`.
###Code
sc.pl.spatial(adata, color="cluster")
###Output
_____no_output_____
###Markdown
Image features==============Visium datasets contain high-resolution images of the tissue that wasused for the gene extraction. Using the function`squidpy.im.calculate_image_features` you can calculate image featuresfor each Visium spot and create a `obs x features` matrix in `adata`that can then be analyzed together with the `obs x gene` gene expressionmatrix.By extracting image features we are aiming to get both similar andcomplementary information to the gene expression values. Similarinformation is for example present in the case of a tissue with twodifferent cell types whose morphology is different. Such cell typeinformation is then contained in both the gene expression values and thetissue image features.Squidpy contains several feature extractors and a flexible pipeline ofcalculating features of different scales and sizes. There are severaldetailed examples of how to use `squidpy.im.calculate_image_features`.`sphx_glr_auto_examples_image_compute_features.py` provides a goodstarting point for learning more.Here, we will extract [summary]{.title-ref} features at different cropsizes and scales to allow the calculation of multi-scale features and[segmentation]{.title-ref} features. For more information on the summaryfeatures, also refer to`sphx_glr_auto_examples_image_compute_summary_features.py`.
###Code
# calculate features for different scales (higher value means more context)
for scale in [1.0, 2.0]:
feature_name = f"features_summary_scale{scale}"
sq.im.calculate_image_features(
adata,
img,
features="summary",
key_added=feature_name,
n_jobs=1,
scale=scale,
)
# combine features in one dataframe
adata.obsm["features"] = pd.concat(
[adata.obsm[f] for f in adata.obsm.keys() if "features_summary" in f], axis="columns"
)
# make sure that we have no duplicated feature names in the combined table
adata.obsm["features"].columns = ad.utils.make_index_unique(adata.obsm["features"].columns)
###Output
_____no_output_____
###Markdown
We can use the extracted image features to compute a new cluster annotation. This could be useful to gain insights into similarities across spots based on image morphology.
###Code
# helper function returning a clustering
def cluster_features(features: pd.DataFrame, like=None):
"""Calculate leiden clustering of features.
Specify filter of features using `like`.
"""
# filter features
if like is not None:
features = features.filter(like=like)
# create temporary adata to calculate the clustering
adata = ad.AnnData(features)
# important - feature values are not scaled, so need to scale them before PCA
sc.pp.scale(adata)
# calculate leiden clustering
sc.pp.pca(adata, n_comps=min(10, features.shape[1] - 1))
sc.pp.neighbors(adata)
sc.tl.leiden(adata)
return adata.obs["leiden"]
# calculate feature clusters
adata.obs["features_cluster"] = cluster_features(adata.obsm["features"], like="summary")
# compare feature and gene clusters
sc.set_figure_params(facecolor="white", figsize=(8, 8))
sc.pl.spatial(adata, color=["features_cluster", "cluster"])
###Output
_____no_output_____
###Markdown
Comparing gene and feature clusters, we notice that in some regions,they look very similar, like the cluster *Fiber\_tract*, or clustersaround the Hippocampus seems to be roughly recapitulated by the clustersin image feature space. In others, the feature clusters look different,like in the cortex, where the gene clusters show the layered structureof the cortex, and the features clusters rather seem to show differentregions of the cortex.This is only a simple, comparative analysis of the image features, notethat you could also use the image features to e.g. compute a commonimage and gene clustering by computing a shared neighbors graph (forinstance on concatenated PCAs on both feature spaces). Spatial statistics and graph analysis=====================================Similar to other spatial data, we can investigate spatial organizationby leveraging spatial and graph statistics in Visium data.Neighborhood enrichment-----------------------Computing a neighborhood enrichment can help us identify spots clustersthat share a common neighborhood structure across the tissue. We cancompute such score with the following function:`squidpy.gr.nhood_enrichment`. In short, it\'s an enrichment score onspatial proximity of clusters: if spots belonging to two differentclusters are often close to each other, then they will have a high scoreand can be defined as being *enriched*. On the other hand, if they arefar apart, and therefore are seldom a neighborhood, the score will below and they can be defined as *depleted*. This score is based on apermutation-based test, and you can set the number of permutations withthe `n_perms` argument (default is 1000).Since the function works on a connectivity matrix, we need to computethat as well. This can be done with `squidpy.gr.spatial_neighbors`.Please see `sphx_glr_auto_examples_graph_compute_spatial_neighbors.py`for more details of how this function works.Finally, we\'ll directly visualize the results with`squidpy.pl.nhood_enrichment`.
###Code
sq.gr.spatial_neighbors(adata)
sq.gr.nhood_enrichment(adata, cluster_key="cluster")
sq.pl.nhood_enrichment(adata, cluster_key="cluster")
###Output
_____no_output_____
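###Markdown
To make the permutation logic behind this score more concrete, here is a minimal, self-contained NumPy sketch (toy labels on a chain graph, not the Squidpy implementation): count neighboring pairs that share a label, then compare the observed count against counts obtained after shuffling the labels.
###Code
# Toy permutation test: observed same-label neighbor pairs vs. label-shuffled counts.
rng = np.random.RandomState(0)
toy_labels = rng.randint(0, 2, size=50)                        # two toy clusters
toy_edges = np.column_stack([np.arange(49), np.arange(1, 50)])  # a simple chain graph
def same_label_pairs(lab):
    return np.sum(lab[toy_edges[:, 0]] == lab[toy_edges[:, 1]])
observed = same_label_pairs(toy_labels)
perms = np.array([same_label_pairs(rng.permutation(toy_labels)) for _ in range(1000)])
print(observed, (observed - perms.mean()) / perms.std())  # z-score, analogous to the enrichment score
###Output
_____no_output_____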
###Markdown
Given the spatial organization of the mouse brain coronal section, notsurprisingly we find high neighborhood enrichment the Hippocampusregion: *Pyramidal\_layer\_dentate\_gyrus* and *Pyramidal\_layer*clusters seems to be often neighbors with the larger *Hippocampus*cluster. Co-occurrence across spatial dimensions=======================================In addition to the neighbor enrichment score, we can visualize clusterco-occurrence in spatial dimensions. This is a similar analysis of theone presented above, yet it does not operate on the connectivity matrix,but on the original spatial coordinates. The co-occurrence score isdefined as:$$\frac{p(exp|cond)}{p(exp)}$$where $p(exp|cond)$ is the conditional probability of observing acluster $exp$ conditioned on the presence of a cluster $cond$, whereas$p(exp)$ is the probability of observing $exp$ in the radius size ofinterest. The score is computed across increasing radii size around eachobservation (i.e. spots here) in the tissue.We are gonna compute such score with `squidpy.gr.co_occurrence` and setthe cluster annotation for the conditional probability with the argument`clusters`. Then, we visualize the results with`squidpy.pl.co_occurrence`.
###Code
sq.gr.co_occurrence(adata, cluster_key="cluster")
sq.pl.co_occurrence(
adata,
cluster_key="cluster",
clusters="Hippocampus",
figsize=(8, 4),
)
###Output
_____no_output_____
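###Markdown
As a toy illustration of the ratio $\frac{p(exp|cond)}{p(exp)}$ defined above (plain NumPy, not the Squidpy implementation), we can compare the frequency of one label among points that lie within a fixed radius of another label against its overall frequency; for random coordinates the ratio should hover around 1.
###Code
# Toy co-occurrence ratio on random 2D points; all names here are illustrative only.
rng = np.random.RandomState(1)
coords = rng.uniform(0, 100, size=(200, 2))
toy_clusters = rng.choice(["exp", "cond", "other"], size=200, p=[0.3, 0.3, 0.4])
radius = 15.0
cond_idx = np.where(toy_clusters == "cond")[0]
# pairwise distances from every "cond" point to every point (excluding the point itself)
d = np.linalg.norm(coords[cond_idx][:, None, :] - coords[None, :, :], axis=-1)
neighbor_idx = np.where((d < radius) & (d > 0))[1]
p_exp_given_cond = np.mean(toy_clusters[neighbor_idx] == "exp")
p_exp = np.mean(toy_clusters == "exp")
print(p_exp_given_cond / p_exp)  # ~1 for random data, >1 when "exp" co-occurs with "cond"
###Output
_____no_output_____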
###Markdown
The result largely recapitulates the previous analysis: the*Pyramidal\_layer* cluster seem to co-occur at short distances with thelarger *Hippocampus* cluster. It should be noted that the distance unitsare given in pixels of the Visium `source_image`, and corresponds to thesame unit of the spatial coordinates saved in `adata.obsm["spatial"]`. Ligand-receptor interaction analysis====================================We are continuing the analysis showing couple of feature-level methodsthat are very relevant for the analysis of spatial molecular data. Forinstance, after quantification of cluster co-occurrence, we might beinterested in finding molecular instances that could potentially drivecellular communication. This naturally translates in a ligand-receptorinteraction analysis. In Squidpy, we provide a fast re-implementationthe popular method CellPhoneDB `cellphonedb`([code](https://github.com/Teichlab/cellphonedb) ) and extended itsdatabase of annotated ligand-receptor interaction pairs with the populardatabase *Omnipath* `omnipath`. You can run the analysis for allclusters pairs, and all genes (in seconds, without leaving thisnotebook), with `squidpy.gr.ligrec`. Furthermore, we\'ll directlyvisualize the results, filtering out lowly-expressed genes (with the`means_range` argument) and increasing the threshold for the adjustedp-value (with the `alpha` argument). We\'ll also subset thevisualization for only one source group, the *Hippocampus* cluster, andtwo target groups, *Pyramidal\_layer\_dentate\_gyrus* and*Pyramidal\_layer* cluster.
###Code
sq.gr.ligrec(
adata,
n_perms=100,
cluster_key="cluster",
)
sq.pl.ligrec(
adata,
cluster_key="cluster",
source_groups="Hippocampus",
target_groups=["Pyramidal_layer", "Pyramidal_layer_dentate_gyrus"],
means_range=(3, np.inf),
alpha=1e-4,
swap_axes=True,
)
###Output
_____no_output_____
###Markdown
The dotplot visualization provides an interesting set of candidateligand-receptor annotation that could be involved in cellularinteractions in the Hippocampus. A more refined analysis would be forinstance to integrate these results with the results of a deconvolutionmethod, to understand what\'s the proportion of single-cell cell typespresent in this region of the tissue. Spatially variable genes with Moran\'s I========================================Finally, we might be interested in finding genes that show spatialpatterns. There are several methods that aimed at address thisexplicitly, based on point processes or Gaussian process regressionframework:- *SPARK* - [paper](https://www.nature.com/articles/s41592-019-0701-7Abs1), [code](https://github.com/xzhoulab/SPARK).- *Spatial DE* - [paper](https://www.nature.com/articles/nmeth.4636), [code](https://github.com/Teichlab/SpatialDE).- *trendsceek* - [paper](https://www.nature.com/articles/nmeth.4634), [code](https://github.com/edsgard/trendsceek).- *HMRF* - [paper](https://www.nature.com/articles/nbt.4260), [code](https://bitbucket.org/qzhudfci/smfishhmrf-py).Here, we provide a simple approach based on the well-known [Moran\'s Istatistics](https://en.wikipedia.org/wiki/Moran%27s_I) which is in factused also as a baseline method in the spatially variable gene paperslisted above. The function in Squidpy is called `squidpy.gr.moran`, andreturns both test statistics and adjusted p-values in`anndata.AnnData.var` slot. For time reasons, we will evaluate a subsetof the highly variable genes only.
###Code
genes = adata[:, adata.var.highly_variable].var_names.values
sq.gr.moran(
adata,
genes=genes,
)
###Output
_____no_output_____
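###Markdown
For reference, the statistic itself is simple to write down; below is a minimal NumPy sketch of Moran's I on toy one-dimensional data with a nearest-neighbor weight matrix (not the Squidpy implementation): $I = \frac{N}{W} \frac{\sum_{ij} w_{ij} (x_i - \bar{x})(x_j - \bar{x})}{\sum_i (x_i - \bar{x})^2}$.
###Code
# Moran's I on a toy gradient with a chain-graph weight matrix.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # a smooth "spatial" gradient
n = x.size
w = np.zeros((n, n))
for i in range(n - 1):                    # neighbors are adjacent points
    w[i, i + 1] = w[i + 1, i] = 1.0
z = x - x.mean()
morans_i = (n / w.sum()) * (z @ w @ z) / (z @ z)
print(morans_i)                           # 0.5 here: positive spatial autocorrelation
###Output
_____no_output_____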
###Markdown
The results are saved in the `adata.uns['moranI']` slot. Genes have already been sorted by Moran's I statistic.
###Code
adata.uns["moranI"].head(10)
###Output
_____no_output_____
###Markdown
We can select few genes and visualize their expression levels in thetissue with `scanpy.pl.spatial`
###Code
sc.pl.spatial(adata, color=["Olfm1", "Plp1", "Itpka", "cluster"])
###Output
_____no_output_____ |
notebooks/SummarizePRIME.ipynb | ###Markdown
Visuals
###Code
#atchley_symbols=['L1: Polarity index', 'L2: Secondary structure factor', 'L3: Volume', 'L4: Refractivity/Heat Capacity', 'L5: Charge/Iso-electric point']
# Overall
source = overall_pvalue_df
overall_pvalue_df["Physiochemical property"] = "Overall"
bar = alt.Chart(source).mark_bar(opacity=0.8).encode(
x='Site',
y='p-value',
color="Physiochemical property"
).properties(
width=800,
height=600,
title= "BDNF PRIME results")
# Polarity
source = p1_pvalue_df
p1_pvalue_df["Physiochemical property"] = "Polarity"
bar1 = alt.Chart(source).mark_bar(color="red").encode(
x='Site:Q',
y='p1',
color="Physiochemical property"
).properties()
# "Secondary structure"
source = p2_pvalue_df
p2_pvalue_df["Physiochemical property"] = "Secondary structure"
bar2 = alt.Chart(source).mark_bar().encode(
x='Site:Q',
y='p2',
color="Physiochemical property"
).properties()
# Volume
source = p3_pvalue_df
p3_pvalue_df["Physiochemical property"] = "Volume"
bar3= alt.Chart(source).mark_bar(color="green").encode(
x='Site:Q',
y='p3',
color="Physiochemical property"
).properties()
# Refractivity/Heat Capacity
source = p4_pvalue_df
p4_pvalue_df["Physiochemical property"] = "Refractivity/Heat Capacity"
bar4 = alt.Chart(source).mark_bar(color="red").encode(
x='Site:Q',
y='p4',
color="Physiochemical property"
).properties()
# Charge/Iso-electric point
source = p5_pvalue_df
p5_pvalue_df["Physiochemical property"] = "Charge/Iso-electric point"
bar5 = alt.Chart(source).mark_bar().encode(
x='Site:Q',
y='p5',
color="Physiochemical property"
).properties(title="a")
bar + bar1 + bar2 + bar3 + bar4 + bar5
#bar + bar1 + bar2 + bar3 + bar4 + bar5
#bar5
p1_pvalue_df
source = overall_pvalue_df
bar = alt.Chart(source).mark_bar(opacity=0.8).encode(
x='Site',
y='p-value'
).properties(
width=800,
height=600,
title= "BDNF PRIME results")
bar
#atchley_symbols=['L1: Polarity index', 'L2: Secondary structure factor', 'L3: Volume', 'L4: Refractivity/Heat Capacity', 'L5: Charge/Iso-electric point']
# Overall
source = overall_pvalue_df
overall_pvalue_df["Physiochemical property"] = "Overall"
bar = alt.Chart(source).mark_circle(size=240, opacity=0.8).encode(
x='Site',
y='p-value',
color="Physiochemical property"
).properties(
width=800,
height=600,
title= "BDNF PRIME results")
# Polarity
source = p1_pvalue_df
p1_pvalue_df["Physiochemical property"] = "Polarity"
bar1 = alt.Chart(source).mark_circle(size=240, opacity=0.8).encode(
x='Site:Q',
y='p1',
color="Physiochemical property"
).properties()
# "Secondary structure"
source = p2_pvalue_df
p2_pvalue_df["Physiochemical property"] = "Secondary structure"
bar2 = alt.Chart(source).mark_circle(size=240, opacity=0.8).encode(
x='Site:Q',
y='p2',
color="Physiochemical property"
).properties()
# Volume
source = p3_pvalue_df
p3_pvalue_df["Physiochemical property"] = "Volume"
bar3= alt.Chart(source).mark_circle(size=240, opacity=0.8).encode(
x='Site:Q',
y='p3',
color="Physiochemical property"
).properties()
# Refractivity/Heat Capacity
source = p4_pvalue_df
p4_pvalue_df["Physiochemical property"] = "Refractivity/Heat Capacity"
bar4 = alt.Chart(source).mark_circle(size=240, opacity=0.8).encode(
x='Site:Q',
y='p4',
color="Physiochemical property"
).properties()
# Charge/Iso-electric point
source = p5_pvalue_df
p5_pvalue_df["Physiochemical property"] = "Charge/Iso-electric point"
bar5 = alt.Chart(source).mark_circle(size=240, opacity=0.8).encode(
x='Site:Q',
y='p5',
color="Physiochemical property"
).properties(title="a")
bar + bar1 + bar2 + bar3 + bar4 + bar5
#bar + bar1 + bar2 + bar3 + bar4 + bar5
#bar5
source = overall_pvalue_df
# repeat a small scatter grid over the columns actually present in the p-value dataframe
alt.Chart(source).mark_circle().encode(
    alt.X(alt.repeat("column"), type='quantitative'),
    alt.Y(alt.repeat("row"), type='quantitative'),
    color="Physiochemical property"
).properties(
    width=150,
    height=150
).repeat(
    row=['Site', 'p-value'],
    column=['p-value', 'Site']
).interactive()
#atchley_symbols=['L1: Polarity index', 'L2: Secondary structure factor', 'L3: Volume', 'L4: Refractivity/Heat Capacity', 'L5: Charge/Iso-electric point']
# Overall
source = overall_pvalue_df
overall_pvalue_df["Physiochemical property"] = "Overall"
bar = alt.Chart(source).mark_bar(opacity=0.8).encode(
x='Site',
y='p-value',
color=alt.Color("Physiochemical property", scale=alt.Scale(scheme="category10"))
).properties(
width=1200,
height=600,
title= "BDNF PRIME results")
# Polarity
source = p1_pvalue_df
p1_pvalue_df["Physiochemical property"] = "Polarity"
bar1 = alt.Chart(source).mark_bar().encode(
x='Site:Q',
y='p1',
color=alt.Color("Physiochemical property")
).properties(
width=1200,
height=600,
title= "BDNF PRIME results")
# "Secondary structure"
source = p2_pvalue_df
p2_pvalue_df["Physiochemical property"] = "Secondary structure"
bar2 = alt.Chart(source).mark_bar().encode(
x='Site:Q',
y='p2',
color=alt.Color("Physiochemical property")
).properties()
# Volume
source = p3_pvalue_df
p3_pvalue_df["Physiochemical property"] = "Volume"
bar3= alt.Chart(source).mark_bar().encode(
x='Site:Q',
y='p3',
color=alt.Color("Physiochemical property")
).properties()
# Refractivity/Heat Capacity
source = p4_pvalue_df
p4_pvalue_df["Physiochemical property"] = "Refractivity/Heat Capacity"
bar4 = alt.Chart(source).mark_bar().encode(
x='Site:Q',
y='p4',
color=alt.Color("Physiochemical property")
).properties()
# Charge/Iso-electric point
source = p5_pvalue_df
p5_pvalue_df["Physiochemical property"] = "Charge/Iso-electric point"
bar5 = alt.Chart(source).mark_bar().encode(
x='Site:Q',
y='p5',
color=alt.Color("Physiochemical property")
).properties()
#bar + bar1 + bar2 + bar3 + bar4 + bar5
#bar + bar1 + bar2 + bar3 + bar4 + bar5
#bar5
bar1 + bar2 + bar3 + bar4 + bar5
###Output
/Users/user/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:4: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
after removing the cwd from sys.path.
/Users/user/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:17: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
/Users/user/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:31: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
/Users/user/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:41: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
/Users/user/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:50: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
/Users/user/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:59: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
|
qiskit/terra/reduced_backends.ipynb | ###Markdown
Trusted Notebook" width="500 px" align="left"> Working with reduced backends and noise models.In this tutorial we will see how to target a sub-graph of a quantum device by reducing the coupling map from a device backend and performing noise simulations on this sub-graph by reducing the noise model for the backend.
###Code
from qiskit import *
from qiskit.compiler import transpile
from qiskit.transpiler.coupling import CouplingMap
from qiskit.tools.monitor import job_monitor
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
###Markdown
Load IBMQ accounts and grab target device backend
###Code
IBMQ.load_accounts()
backend = IBMQ.get_backend('ibmq_16_melbourne')
print('Device has:', backend.configuration().n_qubits, 'qubits')
###Output
Device has: 14 qubits
###Markdown
Problem setupSuppose we have a quantum circuit that we want to run on a given quantum device that is smaller than the full size of the device, and which we want to map to a selected subset of qubits in the device layout. As an example, lets consider the standard Bell state circuit:
###Code
q = QuantumRegister(2)
c = ClassicalRegister(2)
qc = QuantumCircuit(q, c)
qc.h(q[0])
qc.cx(q[0], q[1])
qc.measure(q, c);
###Output
_____no_output_____
###Markdown
that we wish to run on qubits 11 and 12 of the target device. We will call these qubits the `target_qubits`:
###Code
target_qubits = [11, 12]
###Output
_____no_output_____
###Markdown
Standard transpilationLets transpile the circuit for the given backend, making sure to map the circuit to the `target_qubits`:
###Code
new_qc = transpile(qc, backend, initial_layout=target_qubits)
new_qc.draw()
###Output
_____no_output_____
###Markdown
We can see that the circuit gets transformed so that the number of qubits in the circuit is equal to the number of qubits in the device, and the gates did indeed get mapped to the `target_qubits`. The above example also highlights one of the downsides of transpiling circuits that are much smaller than the full size of the device; The transpiled circuit is padded with qubits that play no role in computation. As such, it may be of interest to extract the sub-graph of the device defined by the `target_qubits` and use that in the transpilation process. Reducing the device coupling mapA reduced coupling map is a map that takes the sub-graph of a device defined by `target_qubits` and returns the same sub-graph coupling map, but with qubits labeled from 0->(N-1) where N is the number of qubits in `target_qubits`. First we need to get the device coupling map and create a `CouplingMap` object from it.
###Code
cmap = backend.configuration().coupling_map
CMAP = CouplingMap(cmap)
print(CMAP)
###Output
[[1, 0], [1, 2], [2, 3], [4, 3], [4, 10], [5, 4], [5, 6], [5, 9], [6, 8], [9, 8], [9, 10], [7, 8], [11, 3], [11, 10], [11, 12], [12, 2], [13, 1], [13, 12]]
###Markdown
To reduce the coupling map defined by `target_qubits` we make use of the `CouplingMap.reduce()` method, and pass `target_qubits` to it.
###Code
red_cmap = CMAP.reduce(target_qubits)
print(red_cmap)
###Output
[[0, 1]]
###Markdown
We are returned a `CouplingMap` instance that shows the controlled-x gate connectivity between the `target_qubits` with the indices of the qubits reduced. This can now be passed to the transpile function to generate a new circuit that has the same number of qubits as the original circuit.
###Code
red_qc = transpile(qc, None, coupling_map=red_cmap)
red_qc.draw()
###Output
_____no_output_____
###Markdown
Executing a circuit transpiled with a reduced coupling map on a deviceTo `execute` a circuit compiled with a reduced layout on the actual device, one simply passes the `target_qubits` list as the `initial_layout`
###Code
job_device = execute(red_qc, backend, initial_layout=target_qubits)
job_monitor(job_device)
plot_histogram(job_device.result().get_counts())
###Output
_____no_output_____
###Markdown
Noise modelling using reduced noise modelsWe have just seen how to take the `coupling_map` of a device, reduce a given sub-graph of that device, and use it in the transpiling process and then execution on the device. We will now show how to do a noisy simulation using the reduced circuit and a noise model that is also reduced in size. The combination of reduced coupling and noise maps allows us to easily simulate sub-graphs of large, e.g. >50 qubit, devices in the presence of noise.First we need to import the noise library from the Aer provider
###Code
from qiskit.providers.aer import noise
###Output
_____no_output_____
###Markdown
Next we build a simplified noise model of the full device using data from the backend parameters:
###Code
properties = backend.properties()
noise_model = noise.device.basic_device_noise_model(properties)
basis_gates = noise_model.basis_gates
noise_model
###Output
_____no_output_____
###Markdown
Now we reduce the full noise model down using `remapping=target_qubits`, and `discard_qubits=True`. This latter keyword argument trims the noise model down to just those noise terms that correspond to the reduced target qubits.
###Code
red_noise_model = noise.utils.remap_noise_model(noise_model,
remapping=target_qubits,
discard_qubits=True)
red_noise_model
###Output
_____no_output_____
###Markdown
Verify reduced circuit and noise model give same results as full exampleTo show that the reduced and full noise models give the same answer we fix the random number generator in the simulator using the `seed_simulator` keyword argument.
###Code
sim_backend = Aer.get_backend('qasm_simulator')
###Output
_____no_output_____
###Markdown
Reduced model
###Code
job = execute(red_qc, sim_backend,
basis_gates = basis_gates,
noise_model=red_noise_model,
seed_simulator=123456)
res = job.result()
plot_histogram(res.get_counts())
###Output
_____no_output_____
###Markdown
Full model
###Code
job_full = execute(new_qc, sim_backend,
basis_gates = basis_gates,
noise_model=noise_model,
seed_simulator=123456)
res_full = job_full.result()
plot_histogram(res_full.get_counts())
###Output
_____no_output_____
###Markdown
Trusted Notebook" width="500 px" align="left"> Working with reduced backends and noise models.In this tutorial we will see how to target a sub-graph of a quantum device by reducing the coupling map from a device backend and performing noise simulations on this sub-graph by reducing the noise model for the backend.
###Code
from qiskit import *
from qiskit.compiler import transpile
from qiskit.transpiler.coupling import CouplingMap
from qiskit.tools.monitor import job_monitor
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
###Markdown
Load IBMQ accounts and grab target device backend
###Code
IBMQ.load_accounts()
backend = IBMQ.get_backend('ibmq_16_melbourne')
print('Device has:', backend.configuration().n_qubits, 'qubits')
###Output
Device has: 14 qubits
###Markdown
Problem setupSuppose we have a quantum circuit that we want to run on a given quantum device that is smaller than the full size of the device, and which we want to map to a selected subset of qubits in the device layout. As an example, lets consider the standard Bell state circuit:
###Code
q = QuantumRegister(2)
c = ClassicalRegister(2)
qc = QuantumCircuit(q, c)
qc.h(q[0])
qc.cx(q[0], q[1])
qc.measure(q, c);
###Output
_____no_output_____
###Markdown
that we wish to run on qubits 11 and 12 of the target device. We will call these qubits the `target_qubits`:
###Code
target_qubits = [11, 12]
###Output
_____no_output_____
###Markdown
Standard transpilationLets transpile the circuit for the given backend, making sure to map the circuit to the `target_qubits`:
###Code
new_qc = transpile(qc, backend, initial_layout=target_qubits)
new_qc.draw()
###Output
_____no_output_____
###Markdown
We can see that the circuit gets transformed so that the number of qubits in the circuit is equal to the number of qubits in the device, and the gates did indeed get mapped to the `target_qubits`. The above example also highlights one of the downsides of transpiling circuits that are much smalelr than the full size of the device; The transpolated circuit is padded with qubits that play no role in computation. As such, it may be of interest to extract the sub-graph of the device defined by the `target_qubits` and use that in the transpilation process. Reducing the device coupling mapA reduced coupling map is a map that takes the sub-graph of a device defined by `target_qubits` and returns the same sub-graph coupling map, but with qubits labelled from 0->(N-1) where N is the number of qubits in `target_qubits`. First we need to get the device coupling map and create a `CouplingMap` object from it.
###Code
cmap = backend.configuration().coupling_map
CMAP = CouplingMap(cmap)
print(CMAP)
###Output
[[1, 0], [1, 2], [2, 3], [4, 3], [4, 10], [5, 4], [5, 6], [5, 9], [6, 8], [9, 8], [9, 10], [7, 8], [11, 3], [11, 10], [11, 12], [12, 2], [13, 1], [13, 12]]
###Markdown
To reduce the coupling map defined by `target_qubits` we make use of the `CouplingMap.reduce()` method, and pass `target_qubits` to it.
###Code
red_cmap = CMAP.reduce(target_qubits)
print(red_cmap)
###Output
[[0, 1]]
###Markdown
We are returned a `CouplingMap` instance that shows the controlled-x gate connectivity between the `target_qubits` with the indicies of the qubits reduced. This can now be passed to the transpile function to generate a new circuit that has the same number of qubits as the original circuit.
###Code
red_qc = transpile(qc, None, coupling_map=red_cmap)
red_qc.draw()
###Output
_____no_output_____
###Markdown
Executing a circuit transpiled with a reduced coupling map on a deviceTo `execute` a circuit compiled with a reduced layout on the actual device, one simply passes the `target_qubits` list as the `initial_layout`
###Code
job_device = execute(red_qc, backend, initial_layout=target_qubits)
job_monitor(job_device)
plot_histogram(job_device.result().get_counts())
###Output
_____no_output_____
###Markdown
Noise modelling using reduced noise modelsWe have just seen how to take the `coupling_map` of a device, reduce a given given sub-graph of that device, and use it in the transpiling process and then execution on the device. We will now show how to do a noisy simulation using the reduced circuit and a noise model that is also reduced in size. The combination of reduced coupling and noise maps allows us to easily simulate sub-graphs of large, e.g. >50 qubit, devices in the presence of noise.First we need to import the noise library from the Aer provider
###Code
from qiskit.providers.aer import noise
###Output
_____no_output_____
###Markdown
Next we build a simplified noise model of the full device using data from the backend parameters:
###Code
properties = backend.properties()
noise_model = noise.device.basic_device_noise_model(properties)
basis_gates = noise_model.basis_gates
noise_model
###Output
_____no_output_____
###Markdown
Now we reduce the full noise model down using `remapping=target_qubits`, and `discard_qubits=True`. This latter keyword argument trims the noise model down to just those noise terms that correspond to the reduced target qubits.
###Code
red_noise_model = noise.utils.remap_noise_model(noise_model,
remapping=target_qubits,
discard_qubits=True)
red_noise_model
###Output
_____no_output_____
###Markdown
Verify reduced circuit and noise model give same results as full exampleTo show that the reduced and full noise models give the same answer we fix the random number generator in the simulator using the `seed_simulator` keyword argument.
###Code
sim_backend = Aer.get_backend('qasm_simulator')
###Output
_____no_output_____
###Markdown
Reduced model
###Code
job = execute(red_qc, sim_backend,
basis_gates = basis_gates,
noise_model=red_noise_model,
seed_simulator=123456)
res = job.result()
plot_histogram(res.get_counts())
###Output
_____no_output_____
###Markdown
Full model
###Code
job_full = execute(new_qc, sim_backend,
basis_gates = basis_gates,
noise_model=noise_model,
seed_simulator=123456)
res_full = job_full.result()
plot_histogram(res_full.get_counts())
###Output
_____no_output_____
###Markdown
Working with reduced backends and noise models.In this tutorial we will see how to target a sub-graph of a quantum device by reducing the coupling map from a device backend and performing noise simulations on this sub-graph by reducing the noise model for the backend.
###Code
from qiskit import *
from qiskit.compiler import transpile
from qiskit.transpiler.coupling import CouplingMap
from qiskit.tools.monitor import job_monitor
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
###Markdown
Load IBMQ accounts and grab target device backend
###Code
provider = IBMQ.load_account()
backend = provider.get_backend('ibmq_16_melbourne')
print('Device has:', backend.configuration().n_qubits, 'qubits')
###Output
Device has: 14 qubits
###Markdown
Problem setupSuppose we have a quantum circuit that we want to run on a given quantum device that is smaller than the full size of the device, and which we want to map to a selected subset of qubits in the device layout. As an example, let's consider the standard Bell state circuit:
###Code
q = QuantumRegister(2)
c = ClassicalRegister(2)
qc = QuantumCircuit(q, c)
qc.h(q[0])
qc.cx(q[0], q[1])
qc.measure(q, c);
###Output
_____no_output_____
###Markdown
that we wish to run on qubits 11 and 12 of the target device. We will call these qubits the `target_qubits`:
###Code
target_qubits = [11, 12]
###Output
_____no_output_____
###Markdown
Standard transpilationLet's transpile the circuit for the given backend, making sure to map the circuit to the `target_qubits`:
###Code
new_qc = transpile(qc, backend, initial_layout=target_qubits)
new_qc.draw()
###Output
_____no_output_____
###Markdown
We can see that the circuit gets transformed so that the number of qubits in the circuit is equal to the number of qubits in the device, and the gates did indeed get mapped to the `target_qubits`. The above example also highlights one of the downsides of transpiling circuits that are much smaller than the full size of the device; The transpiled circuit is padded with qubits that play no role in computation. As such, it may be of interest to extract the sub-graph of the device defined by the `target_qubits` and use that in the transpilation process. Reducing the device coupling mapA reduced coupling map is a map that takes the sub-graph of a device defined by `target_qubits` and returns the same sub-graph coupling map, but with qubits labeled from 0->(N-1) where N is the number of qubits in `target_qubits`. First we need to get the device coupling map and create a `CouplingMap` object from it.
###Code
cmap = backend.configuration().coupling_map
CMAP = CouplingMap(cmap)
print(CMAP)
###Output
[[1, 0], [1, 2], [2, 3], [4, 3], [4, 10], [5, 4], [5, 6], [5, 9], [6, 8], [9, 8], [9, 10], [7, 8], [11, 3], [11, 10], [11, 12], [12, 2], [13, 1], [13, 12]]
###Markdown
To reduce the coupling map defined by `target_qubits` we make use of the `CouplingMap.reduce()` method, and pass `target_qubits` to it.
###Code
red_cmap = CMAP.reduce(target_qubits)
print(red_cmap)
###Output
[[0, 1]]
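###Markdown
The same relabeling applies to larger sub-graphs. As an illustrative sketch (the qubit choice below is arbitrary and not part of the original tutorial), reducing the connected set `[5, 9, 8]` should relabel those qubits to 0, 1, 2 in list order, so the device edges `[5, 9]` and `[9, 8]` come back as `[0, 1]` and `[1, 2]`.
###Code
print(CMAP.reduce([5, 9, 8]))
###Output
_____no_output_____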
###Markdown
We are returned a `CouplingMap` instance that shows the controlled-x gate connectivity between the `target_qubits` with the indices of the qubits reduced. This can now be passed to the transpile function to generate a new circuit that has the same number of qubits as the original circuit.
###Code
red_qc = transpile(qc, None, coupling_map=red_cmap)
red_qc.draw()
###Output
_____no_output_____
###Markdown
Executing a circuit transpiled with a reduced coupling map on a deviceTo `execute` a circuit compiled with a reduced layout on the actual device, one simply passes the `target_qubits` list as the `initial_layout`
###Code
job_device = execute(red_qc, backend, initial_layout=target_qubits)
job_monitor(job_device)
plot_histogram(job_device.result().get_counts())
###Output
_____no_output_____
###Markdown
Noise modelling using reduced noise modelsWe have just seen how to take the `coupling_map` of a device, reduce a given sub-graph of that device, and use it in the transpiling process and then execution on the device. We will now show how to do a noisy simulation using the reduced circuit and a noise model that is also reduced in size. The combination of reduced coupling and noise maps allows us to easily simulate sub-graphs of large, e.g. >50 qubit, devices in the presence of noise.First we need to import the noise library from the Aer provider
###Code
from qiskit.providers.aer import noise
###Output
_____no_output_____
###Markdown
Next we build a simplified noise model of the full device using data from the backend parameters:
###Code
properties = backend.properties()
noise_model = noise.device.basic_device_noise_model(properties)
basis_gates = noise_model.basis_gates
noise_model
###Output
_____no_output_____
###Markdown
Now we reduce the full noise model down using `remapping=target_qubits`, and `discard_qubits=True`. This latter keyword argument trims the noise model down to just those noise terms that correspond to the reduced target qubits.
###Code
red_noise_model = noise.utils.remap_noise_model(noise_model,
remapping=target_qubits,
discard_qubits=True)
red_noise_model
###Output
_____no_output_____
###Markdown
Verify reduced circuit and noise model give same results as full exampleTo show that the reduced and full noise models give the same answer we fix the random number generator in the simulator using the `seed_simulator` keyword argument.
###Code
sim_backend = Aer.get_backend('qasm_simulator')
###Output
_____no_output_____
###Markdown
Reduced model
###Code
job = execute(red_qc, sim_backend,
basis_gates = basis_gates,
noise_model=red_noise_model,
seed_simulator=123456)
res = job.result()
plot_histogram(res.get_counts())
###Output
_____no_output_____
###Markdown
Full model
###Code
job_full = execute(new_qc, sim_backend,
basis_gates = basis_gates,
noise_model=noise_model,
seed_simulator=123456)
res_full = job_full.result()
plot_histogram(res_full.get_counts())
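# Quick programmatic check (a sketch, assuming both jobs above completed): both circuits
# measure into a 2-bit classical register, so with the shared seed the two count
# dictionaries are expected to match exactly.
print(res.get_counts() == res_full.get_counts())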
###Output
_____no_output_____ |
03_classification v1.ipynb | ###Markdown
**Chapter 3 – Classification**_This notebook contains all the sample code and solutions to the exercises in chapter 3._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
###Code
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "D:\\handson-ml\\"
CHAPTER_ID = "classification\\"
def save_fig(fig_id, tight_layout=True):
# path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
path1 = PROJECT_ROOT_DIR + "images\\" + CHAPTER_ID + fig_id + ".png"
print("Saving figure: ", fig_id)
if tight_layout:
plt.tight_layout()
print (path1)
plt.savefig(path1, format='png', dpi=300)
###Output
_____no_output_____
###Markdown
MNIST **Warning**: `fetch_mldata()` is deprecated since Scikit-Learn 0.20. You should use `fetch_openml()` instead. However, it returns the unsorted MNIST dataset, whereas `fetch_mldata()` returned the dataset sorted by target (the training set and the test set were sorted separately). In general, this is fine, but if you want to get the exact same results as before, you need to sort the dataset using the following function:
###Code
def sort_by_target(mnist):
reorder_train = np.array(sorted([(target, i) for i, target in enumerate(mnist.target[:60000])]))[:, 1]
reorder_test = np.array(sorted([(target, i) for i, target in enumerate(mnist.target[60000:])]))[:, 1]
mnist.data[:60000] = mnist.data[reorder_train]
mnist.target[:60000] = mnist.target[reorder_train]
mnist.data[60000:] = mnist.data[reorder_test + 60000]
mnist.target[60000:] = mnist.target[reorder_test + 60000]
try:
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1, cache=True)
mnist.target = mnist.target.astype(np.int8) # fetch_openml() returns targets as strings
sort_by_target(mnist) # fetch_openml() returns an unsorted dataset
except ImportError:
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original')
mnist["data"], mnist["target"]
mnist.data.shape
X, y = mnist["data"], mnist["target"]
X.shape
y.shape
28*28
some_digit = X[36000]
some_digit_image = some_digit.reshape(28, 28)
plt.imshow(some_digit_image, cmap = mpl.cm.binary,
interpolation="nearest")
plt.axis("off")
save_fig("some_digit_plot")
plt.show()
def plot_digit(data):
image = data.reshape(28, 28)
plt.imshow(image, cmap = mpl.cm.binary,
interpolation="nearest")
plt.axis("off")
# EXTRA
def plot_digits(instances, images_per_row=10, **options):
size = 28
images_per_row = min(len(instances), images_per_row)
images = [instance.reshape(size,size) for instance in instances]
n_rows = (len(instances) - 1) // images_per_row + 1
row_images = []
n_empty = n_rows * images_per_row - len(instances)
images.append(np.zeros((size, size * n_empty)))
for row in range(n_rows):
rimages = images[row * images_per_row : (row + 1) * images_per_row]
row_images.append(np.concatenate(rimages, axis=1))
image = np.concatenate(row_images, axis=0)
plt.imshow(image, cmap = mpl.cm.binary, **options)
plt.axis("off")
plt.figure(figsize=(9,9))
example_images = np.r_[X[:12000:600], X[13000:30600:600], X[30600:60000:590]]
plot_digits(example_images, images_per_row=10)
save_fig("more_digits_plot")
plt.show()
y[36000]
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]
import numpy as np
shuffle_index = np.random.permutation(60000)
X_train, y_train = X_train[shuffle_index], y_train[shuffle_index]
###Output
_____no_output_____
###Markdown
Binary classifier
###Code
y_train_5 = (y_train == 5)
y_test_5 = (y_test == 5)
###Output
_____no_output_____
###Markdown
**Note**: a few hyperparameters will have a different default value in future versions of Scikit-Learn, so a warning is issued if you do not set them explicitly. This is why we set `max_iter=5` and `tol=-np.infty`, to get the same results as in the book, while avoiding the warnings.
###Code
from sklearn.linear_model import SGDClassifier
sgd_clf = SGDClassifier(max_iter=5, tol=-np.infty, random_state=42)
sgd_clf.fit(X_train, y_train_5)
sgd_clf.predict([some_digit])
from sklearn.model_selection import cross_val_score
cross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring="accuracy")
from sklearn.model_selection import StratifiedKFold
from sklearn.base import clone
skfolds = StratifiedKFold(n_splits=3, random_state=42)
for train_index, test_index in skfolds.split(X_train, y_train_5):
clone_clf = clone(sgd_clf)
X_train_folds = X_train[train_index]
y_train_folds = (y_train_5[train_index])
X_test_fold = X_train[test_index]
y_test_fold = (y_train_5[test_index])
clone_clf.fit(X_train_folds, y_train_folds)
y_pred = clone_clf.predict(X_test_fold)
n_correct = sum(y_pred == y_test_fold)
print(n_correct / len(y_pred))
from sklearn.base import BaseEstimator
class Never5Classifier(BaseEstimator):
def fit(self, X, y=None):
pass
def predict(self, X):
return np.zeros((len(X), 1), dtype=bool)
never_5_clf = Never5Classifier()
cross_val_score(never_5_clf, X_train, y_train_5, cv=3, scoring="accuracy")
from sklearn.model_selection import cross_val_predict
y_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3)
from sklearn.metrics import confusion_matrix
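# in the matrix below, rows are the actual classes (non-5, then 5) and columns are the predicted classes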
confusion_matrix(y_train_5, y_train_pred)
y_train_perfect_predictions = y_train_5
confusion_matrix(y_train_5, y_train_perfect_predictions)
from sklearn.metrics import precision_score, recall_score
precision_score(y_train_5, y_train_pred)
4344 / (4344 + 1307)
recall_score(y_train_5, y_train_pred)
4344 / (4344 + 1077)
from sklearn.metrics import f1_score
f1_score(y_train_5, y_train_pred)
4344 / (4344 + (1077 + 1307)/2)
y_scores = sgd_clf.decision_function([some_digit])
y_scores
threshold = 0
y_some_digit_pred = (y_scores > threshold)
y_some_digit_pred
threshold = 200000
y_some_digit_pred = (y_scores > threshold)
y_some_digit_pred
y_scores = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3,
method="decision_function")
###Output
_____no_output_____
###Markdown
Note: there was an [issue](https://github.com/scikit-learn/scikit-learn/issues/9589) in Scikit-Learn 0.19.0 (fixed in 0.19.1) where the result of `cross_val_predict()` was incorrect in the binary classification case when using `method="decision_function"`, as in the code above. The resulting array had an extra first dimension full of 0s. Just in case you are using 0.19.0, we need to add this small hack to work around this issue:
###Code
y_scores.shape
# hack to work around issue #9589 in Scikit-Learn 0.19.0
if y_scores.ndim == 2:
y_scores = y_scores[:, 1]
from sklearn.metrics import precision_recall_curve
precisions, recalls, thresholds = precision_recall_curve(y_train_5, y_scores)
def plot_precision_recall_vs_threshold(precisions, recalls, thresholds):
plt.plot(thresholds, precisions[:-1], "b--", label="Precision", linewidth=2)
plt.plot(thresholds, recalls[:-1], "g-", label="Recall", linewidth=2)
plt.xlabel("Threshold", fontsize=16)
plt.legend(loc="upper left", fontsize=16)
plt.ylim([0, 1])
plt.figure(figsize=(8, 4))
plot_precision_recall_vs_threshold(precisions, recalls, thresholds)
plt.xlim([-700000, 700000])
save_fig("precision_recall_vs_threshold_plot")
plt.show()
(y_train_pred == (y_scores > 0)).all()
y_train_pred_90 = (y_scores > 70000)
precision_score(y_train_5, y_train_pred_90)
recall_score(y_train_5, y_train_pred_90)
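# Instead of hand-picking the 70000 threshold above, a common alternative (a sketch,
# reusing the `precisions` and `thresholds` arrays computed earlier) is to search for
# the lowest threshold that reaches a target precision, e.g. 90%:
threshold_90_precision = thresholds[np.argmax(precisions[:-1] >= 0.90)]
y_train_pred_90_auto = (y_scores >= threshold_90_precision)
precision_score(y_train_5, y_train_pred_90_auto), recall_score(y_train_5, y_train_pred_90_auto)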
def plot_precision_vs_recall(precisions, recalls):
plt.plot(recalls, precisions, "b-", linewidth=2)
plt.xlabel("Recall", fontsize=16)
plt.ylabel("Precision", fontsize=16)
plt.axis([0, 1, 0, 1])
plt.figure(figsize=(8, 6))
plot_precision_vs_recall(precisions, recalls)
save_fig("precision_vs_recall_plot")
plt.show()
###Output
Saving figure: precision_vs_recall_plot
D:\handson-ml\images\classification\precision_vs_recall_plot.png
###Markdown
ROC curves
###Code
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_train_5, y_scores)
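# tpr is the recall (sensitivity); fpr = FP / (FP + TN), i.e. 1 - specificity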
def plot_roc_curve(fpr, tpr, label=None):
plt.plot(fpr, tpr, linewidth=2, label=label)
plt.plot([0, 1], [0, 1], 'k--')
plt.axis([0, 1, 0, 1])
plt.xlabel('False Positive Rate', fontsize=16)
plt.ylabel('True Positive Rate', fontsize=16)
plt.figure(figsize=(8, 6))
plot_roc_curve(fpr, tpr)
save_fig("roc_curve_plot")
plt.show()
from sklearn.metrics import roc_auc_score
roc_auc_score(y_train_5, y_scores)
###Output
_____no_output_____
###Markdown
**Note**: we set `n_estimators=10` to avoid a warning about the fact that its default value will be set to 100 in Scikit-Learn 0.22.
###Code
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(n_estimators=10, random_state=42)
y_probas_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3,
method="predict_proba")
y_scores_forest = y_probas_forest[:, 1] # score = proba of positive class
fpr_forest, tpr_forest, thresholds_forest = roc_curve(y_train_5,y_scores_forest)
plt.figure(figsize=(8, 6))
plt.plot(fpr, tpr, "b:", linewidth=2, label="SGD")
plot_roc_curve(fpr_forest, tpr_forest, "Random Forest")
plt.legend(loc="lower right", fontsize=16)
save_fig("roc_curve_comparison_plot")
plt.show()
roc_auc_score(y_train_5, y_scores_forest)
y_train_pred_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3)
precision_score(y_train_5, y_train_pred_forest)
recall_score(y_train_5, y_train_pred_forest)
###Output
_____no_output_____
###Markdown
Multiclass classification
###Code
sgd_clf.fit(X_train, y_train)
sgd_clf.predict([some_digit])
some_digit_scores = sgd_clf.decision_function([some_digit])
some_digit_scores
np.argmax(some_digit_scores)
sgd_clf.classes_
sgd_clf.classes_[5]
from sklearn.multiclass import OneVsOneClassifier
ovo_clf = OneVsOneClassifier(SGDClassifier(max_iter=5, tol=-np.infty, random_state=42))
ovo_clf.fit(X_train, y_train)
ovo_clf.predict([some_digit])
len(ovo_clf.estimators_)
forest_clf.fit(X_train, y_train)
forest_clf.predict([some_digit])
forest_clf.predict_proba([some_digit])
cross_val_score(sgd_clf, X_train, y_train, cv=3, scoring="accuracy")
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.astype(np.float64))
cross_val_score(sgd_clf, X_train_scaled, y_train, cv=3, scoring="accuracy")
y_train_pred = cross_val_predict(sgd_clf, X_train_scaled, y_train, cv=3)
conf_mx = confusion_matrix(y_train, y_train_pred)
conf_mx
def plot_confusion_matrix(matrix):
"""If you prefer color and a colorbar"""
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111)
cax = ax.matshow(matrix)
fig.colorbar(cax)
plt.matshow(conf_mx, cmap=plt.cm.gray)
save_fig("confusion_matrix_plot", tight_layout=False)
plt.show()
row_sums = conf_mx.sum(axis=1, keepdims=True)
norm_conf_mx = conf_mx / row_sums
np.fill_diagonal(norm_conf_mx, 0)
plt.matshow(norm_conf_mx, cmap=plt.cm.gray)
save_fig("confusion_matrix_errors_plot", tight_layout=False)
plt.show()
cl_a, cl_b = 3, 5
X_aa = X_train[(y_train == cl_a) & (y_train_pred == cl_a)]
X_ab = X_train[(y_train == cl_a) & (y_train_pred == cl_b)]
X_ba = X_train[(y_train == cl_b) & (y_train_pred == cl_a)]
X_bb = X_train[(y_train == cl_b) & (y_train_pred == cl_b)]
plt.figure(figsize=(8,8))
plt.subplot(221); plot_digits(X_aa[:25], images_per_row=5)
plt.subplot(222); plot_digits(X_ab[:25], images_per_row=5)
plt.subplot(223); plot_digits(X_ba[:25], images_per_row=5)
plt.subplot(224); plot_digits(X_bb[:25], images_per_row=5)
save_fig("error_analysis_digits_plot")
plt.show()
###Output
Saving figure: error_analysis_digits_plot
D:\handson-ml\images\classification\error_analysis_digits_plot.png
###Markdown
Multilabel classification
###Code
from sklearn.neighbors import KNeighborsClassifier
y_train_large = (y_train >= 7)
y_train_odd = (y_train % 2 == 1)
y_multilabel = np.c_[y_train_large, y_train_odd]
knn_clf = KNeighborsClassifier()
knn_clf.fit(X_train, y_multilabel)
knn_clf.predict([some_digit])
###Output
_____no_output_____
###Markdown
**Warning**: the following cell may take a very long time (possibly hours depending on your hardware).
###Code
y_train_knn_pred = cross_val_predict(knn_clf, X_train, y_multilabel, cv=3, n_jobs=-1)
f1_score(y_multilabel, y_train_knn_pred, average="macro")
###Output
_____no_output_____
###Markdown
Multioutput classification
###Code
noise = np.random.randint(0, 100, (len(X_train), 784))
X_train_mod = X_train + noise
noise = np.random.randint(0, 100, (len(X_test), 784))
X_test_mod = X_test + noise
y_train_mod = X_train
y_test_mod = X_test
some_index = 5500
plt.subplot(121); plot_digit(X_test_mod[some_index])
plt.subplot(122); plot_digit(y_test_mod[some_index])
save_fig("noisy_digit_example_plot")
plt.show()
knn_clf.fit(X_train_mod, y_train_mod)
clean_digit = knn_clf.predict([X_test_mod[some_index]])
plot_digit(clean_digit)
save_fig("cleaned_digit_example_plot")
###Output
Saving figure: cleaned_digit_example_plot
D:\handson-ml\images\classification\cleaned_digit_example_plot.png
###Markdown
Extra material Dummy (ie. random) classifier
###Code
from sklearn.dummy import DummyClassifier
dmy_clf = DummyClassifier()
y_probas_dmy = cross_val_predict(dmy_clf, X_train, y_train_5, cv=3, method="predict_proba")
y_scores_dmy = y_probas_dmy[:, 1]
fprr, tprr, thresholdsr = roc_curve(y_train_5, y_scores_dmy)
plot_roc_curve(fprr, tprr)
###Output
_____no_output_____
###Markdown
KNN classifier
###Code
from sklearn.neighbors import KNeighborsClassifier
knn_clf = KNeighborsClassifier(n_jobs=-1, weights='distance', n_neighbors=4)
knn_clf.fit(X_train, y_train)
y_knn_pred = knn_clf.predict(X_test)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_knn_pred)
from scipy.ndimage.interpolation import shift
def shift_digit(digit_array, dx, dy, new=0):
return shift(digit_array.reshape(28, 28), [dy, dx], cval=new).reshape(784)
plot_digit(shift_digit(some_digit, 5, 1, new=100))
X_train_expanded = [X_train]
y_train_expanded = [y_train]
for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
shifted_images = np.apply_along_axis(shift_digit, axis=1, arr=X_train, dx=dx, dy=dy)
X_train_expanded.append(shifted_images)
y_train_expanded.append(y_train)
X_train_expanded = np.concatenate(X_train_expanded)
y_train_expanded = np.concatenate(y_train_expanded)
X_train_expanded.shape, y_train_expanded.shape
knn_clf.fit(X_train_expanded, y_train_expanded)
y_knn_expanded_pred = knn_clf.predict(X_test)
accuracy_score(y_test, y_knn_expanded_pred)
ambiguous_digit = X_test[2589]
knn_clf.predict_proba([ambiguous_digit])
plot_digit(ambiguous_digit)
###Output
_____no_output_____
###Markdown
Exercise solutions 1. An MNIST Classifier With Over 97% Accuracy **Warning**: the next cell may take hours to run, depending on your hardware.
###Code
from sklearn.model_selection import GridSearchCV
param_grid = [{'weights': ["uniform", "distance"], 'n_neighbors': [3, 4, 5]}]
knn_clf = KNeighborsClassifier()
grid_search = GridSearchCV(knn_clf, param_grid, cv=5, verbose=3, n_jobs=-1)
grid_search.fit(X_train, y_train)
grid_search.best_params_
grid_search.best_score_
from sklearn.metrics import accuracy_score
y_pred = grid_search.predict(X_test)
accuracy_score(y_test, y_pred)
###Output
_____no_output_____
###Markdown
2. Data Augmentation
###Code
from scipy.ndimage.interpolation import shift
def shift_image(image, dx, dy):
image = image.reshape((28, 28))
shifted_image = shift(image, [dy, dx], cval=0, mode="constant")
return shifted_image.reshape([-1])
image = X_train[1000]
shifted_image_down = shift_image(image, 0, 5)
shifted_image_left = shift_image(image, -5, 0)
plt.figure(figsize=(12,3))
plt.subplot(131)
plt.title("Original", fontsize=14)
plt.imshow(image.reshape(28, 28), interpolation="nearest", cmap="Greys")
plt.subplot(132)
plt.title("Shifted down", fontsize=14)
plt.imshow(shifted_image_down.reshape(28, 28), interpolation="nearest", cmap="Greys")
plt.subplot(133)
plt.title("Shifted left", fontsize=14)
plt.imshow(shifted_image_left.reshape(28, 28), interpolation="nearest", cmap="Greys")
plt.show()
X_train_augmented = [image for image in X_train]
y_train_augmented = [label for label in y_train]
for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
for image, label in zip(X_train, y_train):
X_train_augmented.append(shift_image(image, dx, dy))
y_train_augmented.append(label)
X_train_augmented = np.array(X_train_augmented)
y_train_augmented = np.array(y_train_augmented)
shuffle_idx = np.random.permutation(len(X_train_augmented))
X_train_augmented = X_train_augmented[shuffle_idx]
y_train_augmented = y_train_augmented[shuffle_idx]
knn_clf = KNeighborsClassifier(**grid_search.best_params_)
knn_clf.fit(X_train_augmented, y_train_augmented)
y_pred = knn_clf.predict(X_test)
accuracy_score(y_test, y_pred)
###Output
_____no_output_____
###Markdown
By simply augmenting the data, we got a 0.5% accuracy boost. :) 3. Tackle the Titanic dataset The goal is to predict whether or not a passenger survived based on attributes such as their age, sex, passenger class, where they embarked and so on. First, login to [Kaggle](https://www.kaggle.com/) and go to the [Titanic challenge](https://www.kaggle.com/c/titanic) to download `train.csv` and `test.csv`. Save them to the `datasets/titanic` directory. Next, let's load the data:
###Code
import os
TITANIC_PATH = os.path.join("datasets", "titanic")
import pandas as pd
def load_titanic_data(filename, titanic_path=TITANIC_PATH):
csv_path = os.path.join(titanic_path, filename)
return pd.read_csv(csv_path)
train_data = load_titanic_data("train.csv")
test_data = load_titanic_data("test.csv")
###Output
_____no_output_____
###Markdown
The data is already split into a training set and a test set. However, the test data does *not* contain the labels: your goal is to train the best model you can using the training data, then make your predictions on the test data and upload them to Kaggle to see your final score. Let's take a peek at the top few rows of the training set:
###Code
train_data.head()
###Output
_____no_output_____
###Markdown
The attributes have the following meaning:* **Survived**: that's the target, 0 means the passenger did not survive, while 1 means he/she survived.* **Pclass**: passenger class.* **Name**, **Sex**, **Age**: self-explanatory* **SibSp**: how many siblings & spouses of the passenger aboard the Titanic.* **Parch**: how many children & parents of the passenger aboard the Titanic.* **Ticket**: ticket id* **Fare**: price paid (in pounds)* **Cabin**: passenger's cabin number* **Embarked**: where the passenger embarked the Titanic Let's get more info to see how much data is missing:
###Code
train_data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
PassengerId 891 non-null int64
Survived 891 non-null int64
Pclass 891 non-null int64
Name 891 non-null object
Sex 891 non-null object
Age 714 non-null float64
SibSp 891 non-null int64
Parch 891 non-null int64
Ticket 891 non-null object
Fare 891 non-null float64
Cabin 204 non-null object
Embarked 889 non-null object
dtypes: float64(2), int64(5), object(5)
memory usage: 83.6+ KB
###Markdown
Okay, the **Age**, **Cabin** and **Embarked** attributes are sometimes null (less than 891 non-null), especially the **Cabin** (77% are null). We will ignore the **Cabin** for now and focus on the rest. The **Age** attribute has about 19% null values, so we will need to decide what to do with them. Replacing null values with the median age seems reasonable. The **Name** and **Ticket** attributes may have some value, but they will be a bit tricky to convert into useful numbers that a model can consume. So for now, we will ignore them. Let's take a look at the numerical attributes:
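As a quick standalone check of the median idea (separate from the pipeline built below), plain pandas imputation would look roughly like this:
```python
# Illustrative only: fill missing ages with the training-set median
median_age = train_data["Age"].median()
train_data["Age"].fillna(median_age).isnull().sum()   # 0 missing values left
```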
###Code
train_data.describe()
###Output
_____no_output_____
###Markdown
* Yikes, only 38% **Survived**. :( That's close enough to 40%, so accuracy will be a reasonable metric to evaluate our model.* The mean **Fare** was £32.20, which does not seem so expensive (but it was probably a lot of money back then).* The mean **Age** was less than 30 years old. Let's check that the target is indeed 0 or 1:
###Code
train_data["Survived"].value_counts()
###Output
_____no_output_____
###Markdown
Now let's take a quick look at all the categorical attributes:
###Code
train_data["Pclass"].value_counts()
train_data["Sex"].value_counts()
train_data["Embarked"].value_counts()
###Output
_____no_output_____
###Markdown
The Embarked attribute tells us where the passenger embarked: C=Cherbourg, Q=Queenstown, S=Southampton. Now let's build our preprocessing pipelines. We will reuse the `DataframeSelector` we built in the previous chapter to select specific attributes from the `DataFrame`:
###Code
from sklearn.base import BaseEstimator, TransformerMixin
# A class to select numerical or categorical columns
# since Scikit-Learn doesn't handle DataFrames yet
class DataFrameSelector(BaseEstimator, TransformerMixin):
def __init__(self, attribute_names):
self.attribute_names = attribute_names
def fit(self, X, y=None):
return self
def transform(self, X):
return X[self.attribute_names]
###Output
_____no_output_____
###Markdown
Let's build the pipeline for the numerical attributes:**Warning**: Since Scikit-Learn 0.20, the `sklearn.preprocessing.Imputer` class was replaced by the `sklearn.impute.SimpleImputer` class.
###Code
from sklearn.pipeline import Pipeline
try:
from sklearn.impute import SimpleImputer # Scikit-Learn 0.20+
except ImportError:
from sklearn.preprocessing import Imputer as SimpleImputer
num_pipeline = Pipeline([
("select_numeric", DataFrameSelector(["Age", "SibSp", "Parch", "Fare"])),
("imputer", SimpleImputer(strategy="median")),
])
num_pipeline.fit_transform(train_data)
###Output
_____no_output_____
###Markdown
We will also need an imputer for the string categorical columns (the regular `SimpleImputer` does not work on those):
###Code
# Inspired from stackoverflow.com/questions/25239958
class MostFrequentImputer(BaseEstimator, TransformerMixin):
def fit(self, X, y=None):
self.most_frequent_ = pd.Series([X[c].value_counts().index[0] for c in X],
index=X.columns)
return self
def transform(self, X, y=None):
return X.fillna(self.most_frequent_)
###Output
_____no_output_____
###Markdown
**Warning**: earlier versions of the book used the `LabelBinarizer` or `CategoricalEncoder` classes to convert each categorical value to a one-hot vector. It is now preferable to use the `OneHotEncoder` class. Since Scikit-Learn 0.20 it can handle string categorical inputs (see [PR 10521](https://github.com/scikit-learn/scikit-learn/issues/10521)), not just integer categorical inputs. If you are using an older version of Scikit-Learn, you can import the new version from `future_encoders.py`:
###Code
try:
from sklearn.preprocessing import OrdinalEncoder # just to raise an ImportError if Scikit-Learn < 0.20
from sklearn.preprocessing import OneHotEncoder
except ImportError:
from future_encoders import OneHotEncoder # Scikit-Learn < 0.20
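# Quick illustration (not part of the pipeline below): with Scikit-Learn >= 0.20
# the encoder can be applied directly to a string column such as "Sex"
OneHotEncoder(sparse=False).fit_transform(train_data[["Sex"]])[:5]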
###Output
_____no_output_____
###Markdown
Now we can build the pipeline for the categorical attributes:
###Code
cat_pipeline = Pipeline([
("select_cat", DataFrameSelector(["Pclass", "Sex", "Embarked"])),
("imputer", MostFrequentImputer()),
("cat_encoder", OneHotEncoder(sparse=False)),
])
cat_pipeline.fit_transform(train_data)
###Output
_____no_output_____
###Markdown
Finally, let's join the numerical and categorical pipelines:
###Code
from sklearn.pipeline import FeatureUnion
preprocess_pipeline = FeatureUnion(transformer_list=[
("num_pipeline", num_pipeline),
("cat_pipeline", cat_pipeline),
])
###Output
_____no_output_____
###Markdown
Cool! Now we have a nice preprocessing pipeline that takes the raw data and outputs numerical input features that we can feed to any Machine Learning model we want.
###Code
X_train = preprocess_pipeline.fit_transform(train_data)
X_train
###Output
_____no_output_____
###Markdown
Let's not forget to get the labels:
###Code
y_train = train_data["Survived"]
###Output
_____no_output_____
###Markdown
We are now ready to train a classifier. Let's start with an `SVC`:
###Code
from sklearn.svm import SVC
svm_clf = SVC(gamma="auto")
svm_clf.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Great, our model is trained, let's use it to make predictions on the test set:
###Code
X_test = preprocess_pipeline.transform(test_data)
y_pred = svm_clf.predict(X_test)
###Output
_____no_output_____
###Markdown
And now we could just build a CSV file with these predictions (respecting the format expected by Kaggle), then upload it and hope for the best. But wait! We can do better than hope. Why don't we use cross-validation to have an idea of how good our model is?
###Code
from sklearn.model_selection import cross_val_score
svm_scores = cross_val_score(svm_clf, X_train, y_train, cv=10)
svm_scores.mean()
###Output
_____no_output_____
###Markdown
Okay, over 73% accuracy, clearly better than random chance, but it's not a great score. Looking at the [leaderboard](https://www.kaggle.com/c/titanic/leaderboard) for the Titanic competition on Kaggle, you can see that you need to reach above 80% accuracy to be within the top 10% Kagglers. Some reached 100%, but since you can easily find the [list of victims](https://www.encyclopedia-titanica.org/titanic-victims/) of the Titanic, it seems likely that there was little Machine Learning involved in their performance! ;-) So let's try to build a model that reaches 80% accuracy. Let's try a `RandomForestClassifier`:
###Code
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(n_estimators=100, random_state=42)
forest_scores = cross_val_score(forest_clf, X_train, y_train, cv=10)
forest_scores.mean()
###Output
_____no_output_____
###Markdown
That's much better! Instead of just looking at the mean accuracy across the 10 cross-validation folds, let's plot all 10 scores for each model, along with a box plot highlighting the lower and upper quartiles, and "whiskers" showing the extent of the scores (thanks to Nevin Yilmaz for suggesting this visualization). Note that the `boxplot()` function detects outliers (called "fliers") and does not include them within the whiskers. Specifically, if the lower quartile is $Q_1$ and the upper quartile is $Q_3$, then the interquartile range $IQR = Q_3 - Q_1$ (this is the box's height), and any score lower than $Q_1 - 1.5 \times IQR$ is a flier, and so is any score greater than $Q_3 + 1.5 \times IQR$.
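To make the flier rule concrete, here is the cutoff computation for the SVM scores (a small illustration using NumPy, already imported as `np`):
```python
# whisker/flier cutoffs for the SVM cross-validation scores
q1, q3 = np.percentile(svm_scores, [25, 75])
iqr = q3 - q1
print("fliers lie outside", (q1 - 1.5 * iqr, q3 + 1.5 * iqr))
```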
###Code
plt.figure(figsize=(8, 4))
plt.plot([1]*10, svm_scores, ".")
plt.plot([2]*10, forest_scores, ".")
plt.boxplot([svm_scores, forest_scores], labels=("SVM","Random Forest"))
plt.ylabel("Accuracy", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
To improve this result further, you could:* Compare many more models and tune hyperparameters using cross validation and grid search,* Do more feature engineering, for example: * replace **SibSp** and **Parch** with their sum, * try to identify parts of names that correlate well with the **Survived** attribute (e.g. if the name contains "Countess", then survival seems more likely),* try to convert numerical attributes to categorical attributes: for example, different age groups had very different survival rates (see below), so it may help to create an age bucket category and use it instead of the age. Similarly, it may be useful to have a special category for people traveling alone since only 30% of them survived (see below).
###Code
train_data["AgeBucket"] = train_data["Age"] // 15 * 15
train_data[["AgeBucket", "Survived"]].groupby(['AgeBucket']).mean()
train_data["RelativesOnboard"] = train_data["SibSp"] + train_data["Parch"]
train_data[["RelativesOnboard", "Survived"]].groupby(['RelativesOnboard']).mean()
###Output
_____no_output_____
###Markdown
4. Spam classifier First, let's fetch the data:
###Code
import os
import tarfile
from six.moves import urllib
DOWNLOAD_ROOT = "http://spamassassin.apache.org/old/publiccorpus/"
HAM_URL = DOWNLOAD_ROOT + "20030228_easy_ham.tar.bz2"
SPAM_URL = DOWNLOAD_ROOT + "20030228_spam.tar.bz2"
SPAM_PATH = os.path.join("datasets", "spam")
def fetch_spam_data(spam_url=SPAM_URL, spam_path=SPAM_PATH):
if not os.path.isdir(spam_path):
os.makedirs(spam_path)
for filename, url in (("ham.tar.bz2", HAM_URL), ("spam.tar.bz2", SPAM_URL)):
path = os.path.join(spam_path, filename)
if not os.path.isfile(path):
urllib.request.urlretrieve(url, path)
tar_bz2_file = tarfile.open(path)
tar_bz2_file.extractall(path=SPAM_PATH)
tar_bz2_file.close()
fetch_spam_data()
###Output
_____no_output_____
###Markdown
Next, let's load all the emails:
###Code
HAM_DIR = os.path.join(SPAM_PATH, "easy_ham")
SPAM_DIR = os.path.join(SPAM_PATH, "spam")
ham_filenames = [name for name in sorted(os.listdir(HAM_DIR)) if len(name) > 20]
spam_filenames = [name for name in sorted(os.listdir(SPAM_DIR)) if len(name) > 20]
len(ham_filenames)
len(spam_filenames)
###Output
_____no_output_____
###Markdown
We can use Python's `email` module to parse these emails (this handles headers, encoding, and so on):
###Code
import email
import email.policy
def load_email(is_spam, filename, spam_path=SPAM_PATH):
directory = "spam" if is_spam else "easy_ham"
with open(os.path.join(spam_path, directory, filename), "rb") as f:
return email.parser.BytesParser(policy=email.policy.default).parse(f)
ham_emails = [load_email(is_spam=False, filename=name) for name in ham_filenames]
spam_emails = [load_email(is_spam=True, filename=name) for name in spam_filenames]
###Output
_____no_output_____
###Markdown
Let's look at one example of ham and one example of spam, to get a feel of what the data looks like:
###Code
print(ham_emails[1].get_content().strip())
print(spam_emails[6].get_content().strip())
###Output
Help wanted. We are a 14 year old fortune 500 company, that is
growing at a tremendous rate. We are looking for individuals who
want to work from home.
This is an opportunity to make an excellent income. No experience
is required. We will train you.
So if you are looking to be employed from home with a career that has
vast opportunities, then go:
http://www.basetel.com/wealthnow
We are looking for energetic and self motivated people. If that is you
than click on the link and fill out the form, and one of our
employement specialist will contact you.
To be removed from our link simple go to:
http://www.basetel.com/remove.html
4139vOLW7-758DoDY1425FRhM1-764SMFc8513fCsLl40
###Markdown
Some emails are actually multipart, with images and attachments (which can have their own attachments). Let's look at the various types of structures we have:
###Code
def get_email_structure(email):
if isinstance(email, str):
return email
payload = email.get_payload()
if isinstance(payload, list):
return "multipart({})".format(", ".join([
get_email_structure(sub_email)
for sub_email in payload
]))
else:
return email.get_content_type()
from collections import Counter
def structures_counter(emails):
structures = Counter()
for email in emails:
structure = get_email_structure(email)
structures[structure] += 1
return structures
structures_counter(ham_emails).most_common()
structures_counter(spam_emails).most_common()
###Output
_____no_output_____
###Markdown
It seems that the ham emails are more often plain text, while spam has quite a lot of HTML. Moreover, quite a few ham emails are signed using PGP, while no spam is. In short, it seems that the email structure is useful information to have. Now let's take a look at the email headers:
###Code
for header, value in spam_emails[0].items():
print(header,":",value)
###Output
Return-Path : <[email protected]>
Delivered-To : [email protected]
Received : from localhost (localhost [127.0.0.1]) by phobos.labs.spamassassin.taint.org (Postfix) with ESMTP id 136B943C32 for <zzzz@localhost>; Thu, 22 Aug 2002 08:17:21 -0400 (EDT)
Received : from mail.webnote.net [193.120.211.219] by localhost with POP3 (fetchmail-5.9.0) for zzzz@localhost (single-drop); Thu, 22 Aug 2002 13:17:21 +0100 (IST)
Received : from dd_it7 ([210.97.77.167]) by webnote.net (8.9.3/8.9.3) with ESMTP id NAA04623 for <[email protected]>; Thu, 22 Aug 2002 13:09:41 +0100
From : [email protected]
Received : from r-smtp.korea.com - 203.122.2.197 by dd_it7 with Microsoft SMTPSVC(5.5.1775.675.6); Sat, 24 Aug 2002 09:42:10 +0900
To : [email protected]
Subject : Life Insurance - Why Pay More?
Date : Wed, 21 Aug 2002 20:31:57 -1600
MIME-Version : 1.0
Message-ID : <0103c1042001882DD_IT7@dd_it7>
Content-Type : text/html; charset="iso-8859-1"
Content-Transfer-Encoding : quoted-printable
###Markdown
There's probably a lot of useful information in there, such as the sender's email address ([email protected] looks fishy), but we will just focus on the `Subject` header:
###Code
spam_emails[0]["Subject"]
###Output
_____no_output_____
###Markdown
Okay, before we learn too much about the data, let's not forget to split it into a training set and a test set:
###Code
import numpy as np
from sklearn.model_selection import train_test_split
X = np.array(ham_emails + spam_emails)
y = np.array([0] * len(ham_emails) + [1] * len(spam_emails))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
###Output
_____no_output_____
###Markdown
Okay, let's start writing the preprocessing functions. First, we will need a function to convert HTML to plain text. Arguably the best way to do this would be to use the great [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/) library, but I would like to avoid adding another dependency to this project, so let's hack a quick & dirty solution using regular expressions (at the risk of [un̨ho͞ly radiańcé destro҉ying all enli̍̈́̂̈́ghtenment](https://stackoverflow.com/a/1732454/38626)). The following function first drops the `<head>` section, then converts all `<a>` tags to the word HYPERLINK, then it gets rid of all HTML tags, leaving only the plain text. For readability, it also replaces multiple newlines with single newlines, and finally it unescapes html entities (such as `&gt;` or `&nbsp;`):
###Code
import re
from html import unescape
def html_to_plain_text(html):
text = re.sub('<head.*?>.*?</head>', '', html, flags=re.M | re.S | re.I)
text = re.sub('<a\s.*?>', ' HYPERLINK ', text, flags=re.M | re.S | re.I)
text = re.sub('<.*?>', '', text, flags=re.M | re.S)
text = re.sub(r'(\s*\n)+', '\n', text, flags=re.M | re.S)
return unescape(text)
###Output
_____no_output_____
###Markdown
Let's see if it works. This is HTML spam:
###Code
html_spam_emails = [email for email in X_train[y_train==1]
if get_email_structure(email) == "text/html"]
sample_html_spam = html_spam_emails[7]
print(sample_html_spam.get_content().strip()[:1000], "...")
###Output
<HTML><HEAD><TITLE></TITLE><META http-equiv="Content-Type" content="text/html; charset=windows-1252"><STYLE>A:link {TEX-DECORATION: none}A:active {TEXT-DECORATION: none}A:visited {TEXT-DECORATION: none}A:hover {COLOR: #0033ff; TEXT-DECORATION: underline}</STYLE><META content="MSHTML 6.00.2713.1100" name="GENERATOR"></HEAD>
<BODY text="#000000" vLink="#0033ff" link="#0033ff" bgColor="#CCCC99"><TABLE borderColor="#660000" cellSpacing="0" cellPadding="0" border="0" width="100%"><TR><TD bgColor="#CCCC99" valign="top" colspan="2" height="27">
<font size="6" face="Arial, Helvetica, sans-serif" color="#660000">
<b>OTC</b></font></TD></TR><TR><TD height="2" bgcolor="#6a694f">
<font size="5" face="Times New Roman, Times, serif" color="#FFFFFF">
<b> Newsletter</b></font></TD><TD height="2" bgcolor="#6a694f"><div align="right"><font color="#FFFFFF">
<b>Discover Tomorrow's Winners </b></font></div></TD></TR><TR><TD height="25" colspan="2" bgcolor="#CCCC99"><table width="100%" border="0" ...
###Markdown
And this is the resulting plain text:
###Code
print(html_to_plain_text(sample_html_spam.get_content())[:1000], "...")
###Output
OTC
Newsletter
Discover Tomorrow's Winners
For Immediate Release
Cal-Bay (Stock Symbol: CBYI)
Watch for analyst "Strong Buy Recommendations" and several advisory newsletters picking CBYI. CBYI has filed to be traded on the OTCBB, share prices historically INCREASE when companies get listed on this larger trading exchange. CBYI is trading around 25 cents and should skyrocket to $2.66 - $3.25 a share in the near future.
Put CBYI on your watch list, acquire a position TODAY.
REASONS TO INVEST IN CBYI
A profitable company and is on track to beat ALL earnings estimates!
One of the FASTEST growing distributors in environmental & safety equipment instruments.
Excellent management team, several EXCLUSIVE contracts. IMPRESSIVE client list including the U.S. Air Force, Anheuser-Busch, Chevron Refining and Mitsubishi Heavy Industries, GE-Energy & Environmental Research.
RAPIDLY GROWING INDUSTRY
Industry revenues exceed $900 million, estimates indicate that there could be as much as $25 billi ...
###Markdown
Great! Now let's write a function that takes an email as input and returns its content as plain text, whatever its format is:
###Code
def email_to_text(email):
html = None
for part in email.walk():
ctype = part.get_content_type()
if not ctype in ("text/plain", "text/html"):
continue
try:
content = part.get_content()
except: # in case of encoding issues
content = str(part.get_payload())
if ctype == "text/plain":
return content
else:
html = content
if html:
return html_to_plain_text(html)
print(email_to_text(sample_html_spam)[:100], "...")
###Output
OTC
Newsletter
Discover Tomorrow's Winners
For Immediate Release
Cal-Bay (Stock Symbol: CBYI)
Wat ...
###Markdown
Let's throw in some stemming! For this to work, you need to install the Natural Language Toolkit ([NLTK](http://www.nltk.org/)). It's as simple as running the following command (don't forget to activate your virtualenv first; if you don't have one, you will likely need administrator rights, or use the `--user` option):`$ pip3 install nltk`
###Code
try:
import nltk
stemmer = nltk.PorterStemmer()
for word in ("Computations", "Computation", "Computing", "Computed", "Compute", "Compulsive"):
print(word, "=>", stemmer.stem(word))
except ImportError:
print("Error: stemming requires the NLTK module.")
stemmer = None
###Output
Computations => comput
Computation => comput
Computing => comput
Computed => comput
Compute => comput
Compulsive => compuls
###Markdown
We will also need a way to replace URLs with the word "URL". For this, we could use hard core [regular expressions](https://mathiasbynens.be/demo/url-regex) but we will just use the [urlextract](https://github.com/lipoja/URLExtract) library. You can install it with the following command (don't forget to activate your virtualenv first; if you don't have one, you will likely need administrator rights, or use the `--user` option):`$ pip3 install urlextract`
###Code
try:
import urlextract # may require an Internet connection to download root domain names
url_extractor = urlextract.URLExtract()
print(url_extractor.find_urls("Will it detect github.com and https://youtu.be/7Pq-S557XQU?t=3m32s"))
except ImportError:
print("Error: replacing URLs requires the urlextract module.")
url_extractor = None
###Output
['github.com', 'https://youtu.be/7Pq-S557XQU?t=3m32s']
###Markdown
We are ready to put all this together into a transformer that we will use to convert emails to word counters. Note that we split sentences into words using Python's `split()` method, which uses whitespaces for word boundaries. This works for many written languages, but not all. For example, Chinese and Japanese scripts generally don't use spaces between words, and Vietnamese often uses spaces even between syllables. It's okay in this exercise, because the dataset is (mostly) in English.
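As a tiny illustration of the counting step on its own (the transformer below does the same thing, plus all the cleanup):
```python
from collections import Counter
Counter("the spam the ham the eggs".split())
# Counter({'the': 3, 'spam': 1, 'ham': 1, 'eggs': 1})
```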
###Code
from sklearn.base import BaseEstimator, TransformerMixin
class EmailToWordCounterTransformer(BaseEstimator, TransformerMixin):
def __init__(self, strip_headers=True, lower_case=True, remove_punctuation=True,
replace_urls=True, replace_numbers=True, stemming=True):
self.strip_headers = strip_headers
self.lower_case = lower_case
self.remove_punctuation = remove_punctuation
self.replace_urls = replace_urls
self.replace_numbers = replace_numbers
self.stemming = stemming
def fit(self, X, y=None):
return self
def transform(self, X, y=None):
X_transformed = []
for email in X:
text = email_to_text(email) or ""
if self.lower_case:
text = text.lower()
if self.replace_urls and url_extractor is not None:
urls = list(set(url_extractor.find_urls(text)))
urls.sort(key=lambda url: len(url), reverse=True)
for url in urls:
text = text.replace(url, " URL ")
if self.replace_numbers:
text = re.sub(r'\d+(?:\.\d*(?:[eE]\d+))?', 'NUMBER', text)
if self.remove_punctuation:
text = re.sub(r'\W+', ' ', text, flags=re.M)
word_counts = Counter(text.split())
if self.stemming and stemmer is not None:
stemmed_word_counts = Counter()
for word, count in word_counts.items():
stemmed_word = stemmer.stem(word)
stemmed_word_counts[stemmed_word] += count
word_counts = stemmed_word_counts
X_transformed.append(word_counts)
return np.array(X_transformed)
###Output
_____no_output_____
###Markdown
Let's try this transformer on a few emails:
###Code
X_few = X_train[:3]
X_few_wordcounts = EmailToWordCounterTransformer().fit_transform(X_few)
X_few_wordcounts
###Output
_____no_output_____
###Markdown
This looks about right! Now we have the word counts, and we need to convert them to vectors. For this, we will build another transformer whose `fit()` method will build the vocabulary (an ordered list of the most common words) and whose `transform()` method will use the vocabulary to convert word counts to vectors. The output is a sparse matrix.
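If the `csr_matrix` constructor used below is unfamiliar, here is a toy two-entry example (purely illustrative):
```python
from scipy.sparse import csr_matrix
# value 5 at (row 0, col 2) and value 3 at (row 1, col 0)
csr_matrix(([5, 3], ([0, 1], [2, 0])), shape=(2, 4)).toarray()
# array([[0, 0, 5, 0],
#        [3, 0, 0, 0]])
```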
###Code
from scipy.sparse import csr_matrix
class WordCounterToVectorTransformer(BaseEstimator, TransformerMixin):
def __init__(self, vocabulary_size=1000):
self.vocabulary_size = vocabulary_size
def fit(self, X, y=None):
total_count = Counter()
for word_count in X:
for word, count in word_count.items():
total_count[word] += min(count, 10)
most_common = total_count.most_common()[:self.vocabulary_size]
self.most_common_ = most_common
self.vocabulary_ = {word: index + 1 for index, (word, count) in enumerate(most_common)}
return self
def transform(self, X, y=None):
rows = []
cols = []
data = []
for row, word_count in enumerate(X):
for word, count in word_count.items():
rows.append(row)
cols.append(self.vocabulary_.get(word, 0))
data.append(count)
return csr_matrix((data, (rows, cols)), shape=(len(X), self.vocabulary_size + 1))
vocab_transformer = WordCounterToVectorTransformer(vocabulary_size=10)
X_few_vectors = vocab_transformer.fit_transform(X_few_wordcounts)
X_few_vectors
X_few_vectors.toarray()
###Output
_____no_output_____
###Markdown
What does this matrix mean? Well, the 64 in the third row, first column, means that the third email contains 64 words that are not part of the vocabulary. The 1 next to it means that the first word in the vocabulary is present once in this email. The 2 next to it means that the second word is present twice, and so on. You can look at the vocabulary to know which words we are talking about. The first word is "of", the second word is "and", etc.
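To make this concrete, the third email's row can be decoded back into words like this (an illustrative inversion; index 0 is the out-of-vocabulary bucket):
```python
# invert the vocabulary (word -> index) into (index -> word)
index_to_word = {index: word for word, index in vocab_transformer.vocabulary_.items()}
row = X_few_vectors.toarray()[2]
{index_to_word.get(i, "<out-of-vocab>"): int(count) for i, count in enumerate(row) if count > 0}
```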
###Code
vocab_transformer.vocabulary_
###Output
_____no_output_____
###Markdown
We are now ready to train our first spam classifier! Let's transform the whole dataset:
###Code
from sklearn.pipeline import Pipeline
preprocess_pipeline = Pipeline([
("email_to_wordcount", EmailToWordCounterTransformer()),
("wordcount_to_vector", WordCounterToVectorTransformer()),
])
X_train_transformed = preprocess_pipeline.fit_transform(X_train)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
log_clf = LogisticRegression(solver="liblinear", random_state=42)
score = cross_val_score(log_clf, X_train_transformed, y_train, cv=3, verbose=3)
score.mean()
###Output
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 0.1s remaining: 0.0s
###Markdown
Over 98.7%, not bad for a first try! :) However, remember that we are using the "easy" dataset. You can try with the harder datasets, the results won't be so amazing. You would have to try multiple models, select the best ones and fine-tune them using cross-validation, and so on.But you get the picture, so let's stop now, and just print out the precision/recall we get on the test set:
###Code
from sklearn.metrics import precision_score, recall_score
X_test_transformed = preprocess_pipeline.transform(X_test)
log_clf = LogisticRegression(solver="liblinear", random_state=42)
log_clf.fit(X_train_transformed, y_train)
y_pred = log_clf.predict(X_test_transformed)
print("Precision: {:.2f}%".format(100 * precision_score(y_test, y_pred)))
print("Recall: {:.2f}%".format(100 * recall_score(y_test, y_pred)))
###Output
Precision: 94.90%
Recall: 97.89%
|
Time Series LSTM VIX.ipynb | ###Markdown
The adjusted close values are quite erratic.
###Code
import matplotlib.pyplot as plt
import numpy as np
import time
from keras.layers import Dense, Activation, Dropout, LSTM
from keras.models import Sequential
def convertSeriesToMatrix(vectorSeries, sequence_length):
    """Slide a window of length `sequence_length` over the series and
    return the list of overlapping subsequences (one per window position)."""
    matrix = []
    for i in range(len(vectorSeries) - sequence_length + 1):
        matrix.append(vectorSeries[i:i + sequence_length])
    return matrix
np.random.seed(2019)
path_to_dataset = 'vix_2011_2019.csv'
sequence_length = 20
# vector to store the time series
vector_vix = []
with open(path_to_dataset) as f:
next(f) # skip the header row
for line in f:
fields = line.split(',')
vector_vix.append(float(fields[6]))
# convert the vector to a 2D matrix
matrix_vix = convertSeriesToMatrix(vector_vix, sequence_length)
# shift all data by mean
matrix_vix = np.array(matrix_vix)
shifted_value = matrix_vix.mean()
matrix_vix -= shifted_value
print("Data shape: ", matrix_vix.shape)
# split dataset: 90% for training and 10% for testing
train_row = int(round(0.9 * matrix_vix.shape[0]))
train_set = matrix_vix[:train_row, :]
# shuffle the training set (but do not shuffle the test set)
np.random.shuffle(train_set)
# the training set
X_train = train_set[:, :-1]
# the last column is the true value to compute the mean-squared-error loss
y_train = train_set[:, -1]
# the test set
X_test = matrix_vix[train_row:, :-1]
y_test = matrix_vix[train_row:, -1]
# the input to the LSTM layer needs the shape (number of samples, sequence length, number of features)
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
print(X_train.shape)
print(X_test.shape)
model = Sequential()
# layer 1: LSTM
model.add(LSTM(units=50, input_shape=(X_train.shape[1], 1), return_sequences=True))
model.add(Dropout(0.2))
# layer 2: LSTM
model.add(LSTM(units=100, return_sequences=False))
model.add(Dropout(0.2))
# layer 3: dense
# linear activation: a(x) = x
model.add(Dense(units=1, activation='linear'))
# compile the model
model.compile(loss="mse", optimizer="rmsprop")
model.fit(X_train, y_train, batch_size=512, epochs=50, validation_split=0.05, verbose=1)
# evaluate the result
test_mse = model.evaluate(X_test, y_test, verbose=1)
print('The mean squared error (MSE) on the test data set is %.3f over %d test samples.' % (test_mse, len(y_test)))
# get the predicted values
predicted_values = model.predict(X_test)
num_test_samples = len(predicted_values)
predicted_values = np.reshape(predicted_values, (num_test_samples,1))
# plot the results
fig = plt.figure(figsize=(10,6))
plt.plot(y_test + shifted_value)
plt.plot(predicted_values + shifted_value)
plt.xlabel('Date')
plt.ylabel('VIX')
plt.show();
###Output
_____no_output_____ |
asgjove/asg1/Drive_DFA.ipynb | ###Markdown
**Here is how we will represent a DFA in Python (taking Figure 3.4's example from the book). You can clearly see how the traits of the DFA are encoded. We prefer a Python dictionary, as it supports a number of convenient operations, and also one can add additional fields easily. **
###Code
DFA_fig34 = { 'Q': {'A', 'IF', 'B'},
'Sigma': {'0', '1'},
'Delta': { ('IF', '0'): 'A',
('IF', '1'): 'IF',
('A', '0'): 'B',
('A', '1'): 'A',
('B', '0'): 'IF',
('B', '1'): 'B' },
'q0': 'IF',
'F': {'IF'}
}
###Output
_____no_output_____
###Markdown
**We can now write routines to print DFA using dot. The main routines are listed below.** * dot_dfa_w_bh : lists all states of a DFA including black-hole states * dot_dfa : lists all isNotBH states (see below for a defn), i.e. suppress black-holes - Usually there are too many transitions to them and that clutters the view
###Code
# Some tests pertaining to totalize_dfa, is_consistent_dfa, etc
DFA_fig34 = { 'Q': {'A', 'IF', 'B'},
'Sigma': {'0', '1'},
'Delta': { ('IF', '0'): 'A',
('IF', '1'): 'IF',
('A', '0'): 'B',
('A', '1'): 'A',
('B', '0'): 'IF',
('B', '1'): 'B' },
'q0': 'IF',
'F': {'IF'}
}
def tests_dfa_consist():
"""Some tests wrt DFA routines.
"""
DFA_fig34_Q = DFA_fig34["Q"]
DFA_fig34_Sigma = DFA_fig34["Sigma"]
randQ = random.choice(list(DFA_fig34_Q))
randSym = random.choice(list(DFA_fig34_Sigma))
DFA_fig34_deepcopy = copy.deepcopy(DFA_fig34)
print('is_consistent_dfa(DFA_fig34) =',
is_consistent_dfa(DFA_fig34) )
print('Removing mapping for ' +
"(" + randQ + "," + randSym + ")" +
"from DFA_fig34_deepcopy")
DFA_fig34_deepcopy["Delta"].pop((randQ,randSym))
print('is_consistent_dfa(DFA_fig34_deepcopy) =',
is_consistent_dfa(DFA_fig34_deepcopy) )
totalized = totalize_dfa(DFA_fig34_deepcopy)
print ( 'is_consistent_dfa(totalized) =',
is_consistent_dfa(totalized) )
assert(totalized == totalize_dfa(totalized)) # Must pass
dfaBESame = md2mc('''
DFA !! Begins and ends with same; epsilon allowed
IF : 0 -> F0
IF : 1 -> F1
!!
F0 : 0 -> F0
F0 : 1 -> S01
S01 : 1 -> S01
S01 : 0 -> F0
!!
F1 : 1 -> F1
F1 : 0 -> S10
S10 : 0 -> S10
S10 : 1 -> F1
''')
DOdfaBESame = dotObj_dfa(dfaBESame)
DOdfaBESame
DOdfaBESame.source
###Output
_____no_output_____
###Markdown
Let us now administer some tests to print dot-strings generated. We will demonstrate two ways to print automata: 1. First generate a dot string via dot_dfa or dot_dfa_w_bh (calling the result "dot_string") 1. Then use the srcObj = Source(dot_string) call 2. Thereafter we can display the srcObj object directly into the browser 3. Or, one can also later convert the dot_string to svg or PDF 2. OR, one can directly generate a dot object via the dotObj_dfa or dotObj_dfa_w_bh call (calling the result "dot_object") 1. Then directly display the dot_object 2. There are conversions available for dot_object to other formats too
###Code
DFA_fig34 = { 'Q': {'A', 'IF', 'B'},
'Sigma': {'0', '1'},
'Delta': { ('IF', '0'): 'A',
('IF', '1'): 'IF',
('A', '0'): 'B',
('A', '1'): 'A',
('B', '0'): 'IF',
('B', '1'): 'B' },
'q0': 'IF',
'F': {'IF'}
}
def dfa_dot_tests():
"""Some dot-routine related tests.
"""
dot_string = dot_dfa(DFA_fig34)
dot_object1 = Source(dot_string)
return dot_object1.source
###Output
_____no_output_____
###Markdown
Let us test functions step_dfa, run_dfa, and accepts_dfa
###Code
# Some tests of step, run, etc.
DFA_fig34 = { 'Q': {'A', 'IF', 'B'},
'Sigma': {'0', '1'},
'Delta': { ('IF', '0'): 'A',
('IF', '1'): 'IF',
('A', '0'): 'B',
('A', '1'): 'A',
('B', '0'): 'IF',
('B', '1'): 'B' },
'q0': 'IF',
'F': {'IF'}
}
def step_run_accepts_tests():
print("step_dfa(DFA_fig34, 'IF', '1') = ",
step_dfa(DFA_fig34, 'IF', '1'))
print("step_dfa(DFA_fig34, 'A', '0') = ",
step_dfa(DFA_fig34, 'A', '0'))
print("run_dfa(DFA_fig34, '101001') = ",
run_dfa(DFA_fig34, '101001'))
print("run_dfa(DFA_fig34, '101000') = ",
run_dfa(DFA_fig34, '101000'))
print("accepts_dfa(DFA_fig34, '101001') = ",
accepts_dfa(DFA_fig34, '101001'))
print("accepts_dfa(DFA_fig34, '101000') = ",
accepts_dfa(DFA_fig34, '101000'))
dotObj_dfa(DFA_fig34, "DFA_fig34")
# Run a complementation test
DFA_fig34_comp = comp_dfa(DFA_fig34)
dotObj_dfa(DFA_fig34_comp, "DFA_fig34_comp")
dotObj_dfa(DFA_fig34)
dotObj_dfa(DFA_fig34_comp, "DFA_fig34_comp")
dotObj_dfa(DFA_fig34_comp)
# One more test
du = union_dfa(DFA_fig34, DFA_fig34_comp)
dotObj_dfa(du, "orig union")
pdu = pruneUnreach(du)
pdu
pduObj = dotObj_dfa(pdu, "union of 34 and comp")
pduObj
D34 = { 'Q': {'A', 'IF', 'B'},
'Sigma': {'0', '1'},
'Delta': { ('IF', '0'): 'A',
('IF', '1'): 'IF',
('A', '0'): 'B',
('A', '1'): 'A',
('B', '0'): 'IF',
('B', '1'): 'B' },
'q0': 'IF',
'F': {'IF'}
}
D34bl = { 'Q': {'A', 'IF', 'B', 'A1', 'B1'},
'Sigma': {'0', '1'},
'Delta': { ('IF', '0'): 'A',
('IF', '1'): 'IF',
('A', '0'): 'B1',
('A', '1'): 'A1',
('A1', '0'): 'B',
('A1', '1'): 'A',
('B1', '0'): 'IF',
('B1', '1'): 'B',
('B','0') : 'IF',
('B', '1'): 'B1' },
'q0': 'IF',
'F': {'IF'}
}
d34 = dotObj_dfa(D34, "D34")
d34 # Display it!
langeq_dfa(D34,D34bl,False)
iso_dfa(D34,D34bl)
DFA_fig34
d34 = DFA_fig34
d34
d34c = DFA_fig34_comp
d34c
iso_dfa(d34,d34)
iso_dfa(d34,d34c)
d34v1 = {'Delta': {('A', '0'): 'B',
('A', '1'): 'B',
('B', '0'): 'IF',
('B', '1'): 'B',
('IF', '0'): 'A',
('IF', '1'): 'IF'},
'F': {'IF'},
'Q': {'A', 'B', 'IF'},
'Sigma': {'0', '1'},
'q0': 'IF'}
dotObj_dfa(d34v1)
d34v2 = {'Delta': {('A', '0'): 'B',
('A', '1'): 'B',
('B', '0'): 'IF',
('B', '1'): 'B',
('IF', '0'): 'A',
('IF', '1'): 'IF'},
'F': {'IF', 'B'},
'Q': {'A', 'B', 'IF'},
'Sigma': {'0', '1'},
'q0': 'IF'}
iso_dfa(d34,d34v1)
iso_dfa(d34,d34v2)
iso_dfa(d34v1,d34v2)
div1 = pruneUnreach(intersect_dfa(d34v1,d34v2))
dotObj_dfa(div1)
div2 = pruneUnreach(union_dfa(d34v1,d34v2))
dotObj_dfa(div2)
iso_dfa(div1,div2)
langeq_dfa(div1,div2,True)
d34bl = dotObj_dfa(D34bl, "D34bl")
d34bl # Display it!
d34bl = dotObj_dfa(D34bl, FuseEdges=True, dfaName="D34bl")
d34bl
iso_dfa(D34,D34bl)
langeq_dfa(D34,D34bl)
###Output
_____no_output_____
###Markdown
###Code
du
dotObj_dfa(pruneUnreach(D34bl), "D34bl")
### DFA minimization (another example)
Bloat1 = {'Q': {'S1', 'S3', 'S2', 'S5', 'S4', 'S6' },
'Sigma': {'b', 'a'},
'Delta': { ('S1','b') : 'S3',
('S1','a') : 'S2',
('S3','a') : 'S5',
('S2','a') : 'S4',
('S3','b') : 'S4',
('S2','b') : 'S5',
('S5','b') : 'S6',
('S5','a') : 'S6',
('S4','b') : 'S6',
('S4','a') : 'S6',
('S6','b') : 'S6',
('S6','a') : 'S6' },
'q0': 'S1',
'F': {'S2','S3','S6'}
}
Bloat1O = dotObj_dfa(Bloat1, dfaName="Bloat1")
Bloat1O # Display it!
dotObj_dfa(Bloat1, FuseEdges=True, dfaName="Bloat1")
bloated_dfa = md2mc('''
DFA
IS1 : a -> FS2
IS1 : b -> FS3
FS2 : a -> S4
FS2 : b -> S5
FS3 : a -> S5
FS3 : b -> S4
S4 : a | b -> FS6
S5 : a | b -> FS6
FS6 : a | b -> FS6
''')
dotObj_dfa(bloated_dfa)
dotObj_dfa(bloated_dfa).source
###Output
_____no_output_____
###Markdown
Now, here is how the computation proceeds for this example:-------------------------------------------------------- ``` Frame-0 Frame-1 Frame-2 S2 -1 S2 0 S2 0 S3 -1 -1 S3 0 -1 S3 0 -1 S4 -1 -1 -1 S4 -1 0 0 S4 2 0 0 S5 -1 -1 -1 -1 S5 -1 0 0 -1 S5 2 0 0 -1 S6 -1 -1 -1 -1 -1 S6 0 -1 -1 0 0 S6 0 1 1 0 0 S1 S2 S3 S4 S5 S1 S2 S3 S4 S5 S1 S2 S3 S4 S5 Initial 0-distinguishable 1-distinguishable Frame-3 Frame-4 = Frame-3S2 0S3 0 -1S4 2 0 0S5 2 0 0 -1S6 0 1 1 0 0 S1 S2 S3 S4 S5 2-distinguishable ``` Here is the algorithm, going frame by frame.- Initial Frame: The initial frame is drawn to clash all _combinations_ of states taken two at a time. Since we have 6 states, we have $6\choose 2$ = $15$ entries. We put a -1 against each such pair to denote that they have not been found distinguishable yet.- Frame *0-distinguishable*: We now put a 0 where a pair of states is 0-distinguishable. This means the states are distinguisable after consuming $\varepsilon$. This of course means that the states are themselves distinguishable. This is only possible if one is a final state and the other is not (in that case, one state, after consuming $\varepsilon$ accepts_dfa, and another state after consuming $\varepsilon$ does not accept. - So for instance, notice that (S3,S1) and (S4,S2) are 0-distinguishable, meaning that one is a final and the other is a non-final state.- Frame *1-distinguishable*: We now put a 1 where a pair of states is 1-distinguishable. This means the states are distinguisable after consuming a string of length $1$ (a single symbol). This is only possible if one state transitions to a final state and the other transitions to a non-final state after consuming a member of $\Sigma$. State pairs (S6,S2) and (S6,S3) are of this kind. While both S6 and S2 are final states (hence _0-indistinguishable_), after consuming an 'a' (or a 'b') they respectively go to a final/non-final state. This means that - after processing **the same symbol** one state -- let's say pre_p -- finds itself landing in a state p and another state -- let's say pre_q -- finds itself landing in a state q such that (p,q) is 0-distinguishable. - When this happens, states pre-p and pre-q are **1-distinguishable**.- Frame *2-distinguishable*: We now put a 2 where a pair of states is 2-distinguishable. This means the states are distinguisable after consuming a string of length $2$ (a string of length $2$). This is only possible if one state transitions to a state (say p) and the other transitions to state (say q) after consuming a member of $\Sigma$ such that (p,q) is **1-distinguishable**. State pairs (S5,S1) and (S4,S1) are 2-distinguishable because - after processing **the same symbol** one state -- let's say pre_p -- finds itself landing in a state p and another state -- let's say pre_q -- finds itself landing in a state q such that (p,q) is 0-distinguishable. - When this happens, states pre-p and pre-q are **1-distinguishable**. - One example is this: - S5 and S1 are 2-distinguishable. - This is because after seeing an 'aa', S1 lands in a non-final state while S5 lands in a final state - Observe that "aa" = "a" + "a" . Thus, after eating the first "a", S1 lands in S2 while S5 lands in S6, and (S2,S6) have already been deemed 1-distinguishable. - Thus, when we mark (S5,S1) as 2-distinguishable, we are sending the matrix entry at (S5,S2) from -1 to 2 - Now, in search of 3-distinguishability, we catch hold of all pairs in the matrix and see if we can send another -1 entry to "3". This appears not to happen. 
- Thus, if (S2,S3) is pushed via any sequence of symbols (any string) of any length, both states always land in the same type of state (final or non-final). For example, after seeing 'ababba', S2 is in S6, while S3 is also in S6. - Thus, given no changes in the matrix, we stop.
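The marking procedure sketched above can be written down compactly. Below is an independent illustration of that idea (it is not Jove's `min_dfa` implementation), run directly on the `Bloat1` dictionary defined earlier:
```python
from itertools import combinations

def distinguishable_pairs(D):
    """Return the set of distinguishable state pairs of a totalized DFA,
    filling the table frame by frame as described above."""
    # 0-distinguishable: exactly one of the two states is final
    marked = {frozenset(pair) for pair in combinations(D["Q"], 2)
              if (pair[0] in D["F"]) != (pair[1] in D["F"])}
    changed = True
    while changed:                      # k-distinguishable for k = 1, 2, ...
        changed = False
        for p, q in combinations(D["Q"], 2):
            if frozenset((p, q)) in marked:
                continue
            if any(frozenset((D["Delta"][(p, c)], D["Delta"][(q, c)])) in marked
                   for c in D["Sigma"]):
                marked.add(frozenset((p, q)))
                changed = True
    return marked

# S2 and S3 never get marked, which is why min_dfa (next cell) merges them
frozenset({'S2', 'S3'}) in distinguishable_pairs(Bloat1)
```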
###Code
dotObj_dfa(min_dfa(Bloat1), FuseEdges=True, dfaName="shrunkBloat1")
min_bloat = min_dfa(Bloat1)
dotObj_dfa(min_bloat).source
prd34b1 = pruneUnreach(D34bl)
dotObj_dfa(prd34b1, "prd34b1")
dotObj_dfa(min_dfa(prd34b1), "prd34b1min")
third1dfa=md2mc(src="File", fname="machines/dfafiles/thirdlastis1.dfa")
third1dfa
dotObj_dfa(third1dfa)
ends0101 =\
"\
DFA\
\
I : 0 -> S0 \
I : 1 -> I \
S0 : 0 -> S0 \
S0 : 1 -> S01 \
S01 : 0 -> S010 \
S01 : 1 -> I \
S010 : 0 -> S0 \
S010 : 1 -> F0101 \
F0101 : 0 -> S010 \
F0101 : 1 -> I \
"
ends0101
dfaends0101=md2mc(ends0101)
dfaends0101
dped1 = md2mc(src="File", fname="machines/dfafiles/pedagogical1.dfa")
#machines/dfafiles/pedagogical1.dfa
dped1
dotObj_dfa(dped1)
dotObj_dfa(dped1, FuseEdges=True)
dotObj_dfa(md2mc(ends0101))
thirdlastis1=md2mc(src="File", fname="machines/dfafiles/thirdlastis1.dfa")
#machines/dfafiles/thirdlastis1.dfa
thirdlastis1
dotObj_dfa(thirdlastis1)
dped1=md2mc(src="File", fname="machines/dfafiles/pedagogical2.dfa")
#machines/dfafiles/pedagogical2.dfa
dotObj_dfa(dped1)
secondLastIs1 = md2mc('''
!!------------------------------------------------------------
!! This DFA looks for patterns of the form ....1.
!! i.e., the second-last (counting from the end-point) is a 1
!!
!! DFAs find such patterns "very stressful to handle",
!! as they are kept guessing of the form 'are we there yet?'
!! 'are we seeing the second-last' ?
!! They must keep all the failure options at hand. Even after
!! a 'fleeting glimpse' of the second-last, more inputs can
!! come barreling-in to make that "lucky 1" a non-second-last.
!!
!! We take 7 states in the DFA solution.
!!------------------------------------------------------------
DFA
!!------------------------------------------------------------
!! State : in -> tostate !! comment
!!------------------------------------------------------------
I : 0 -> S0 !! Enter at init state I
I : 1 -> S1 !! Record bit seen in state letter
!! i.e., S0 means "state after seeing a 0"
S0 : 0 -> S00 !! continue recording input seen
S0 : 1 -> S01 !! in state-letter. This is a problem-specific
!! way of compressing the input seen so far.
S1 : 0 -> F10 !! We now have a "second last" available!
S1 : 1 -> F11 !! Both F10 and F10 are "F" (final)
S00 : 0 -> S00 !! History of things seen is still 00
S00 : 1 -> S01 !! Remember 01 in the state
S01 : 0 -> F10 !! We again have a second-last of 1
S01 : 1 -> F11 !! We are in F11 because of 11 being last seen
F10 : 0 -> S00 !! The second-last 1 gets pushed-out
F10 : 1 -> S01 !! The second-last 1 gets pushed-out here too
F11 : 0 -> F10 !! Still we have a second-last 1
F11 : 1 -> F11 !! Stay in F11, as last two seen are 11
!!------------------------------------------------------------
''')
from math import floor, log, pow
def nthnumeric(N, Sigma={'a','b'}):
"""Assume Sigma is a 2-sized list/set of chars (default {'a','b'}).
Produce the Nth string in numeric order, where N >= 0.
Idea : Given N, get b = floor(log_2(N+1)) - need that
many places; what to fill in the places is the binary
code for N - (2^b - 1) with 0 as Sigma[0] and 1 as Sigma[1].
"""
if (type(Sigma)==set):
S = list(Sigma)
else:
assert(type(Sigma)==list
), "Expected to be given set/list for arg2 of nthnumeric."
S = Sigma
assert(len(Sigma)==2
),"Expected to be given a Sigma of length 2."
if(N==0):
return ''
else:
width = floor(log(N+1, 2))
tofill = int(N - pow(2, width) + 1)
relevant_binstr = bin(tofill)[2::] # strip the 0b
# in the leading string
len_to_makeup = width - len(relevant_binstr)
return (S[0]*len_to_makeup +
shomo(relevant_binstr,
lambda x: S[1] if x=='1' else S[0]))
nthnumeric(20,['0','1'])
run_dfa(secondLastIs1, '0101')
accepts_dfa(secondLastIs1, '0101')
tests = [ nthnumeric(i, ['0','1']) for i in range(12) ]
for t in tests:
if accepts_dfa(secondLastIs1, t):
print("This DFA accepts ", t)
else:
print("This DFA rejects ", t)
help(run_dfa)
###Output
Help on function run_dfa in module jove.Def_DFA:
run_dfa(D, s)
In : D (consistent DFA)
s (string over D's sigma, including "")
Out: next state of D["q0"] via string s
###Markdown
This is an extensive illustration of union, intersection and complementation, DFA minimization, isomorphism test, language equivalence test, and an application of DeMorgan's law
###Code
dfaOdd1s = md2mc('''
DFA
I : 0 -> I
I : 1 -> F
F : 0 -> F
F : 1 -> I
''')
dotObj_dfa(dfaOdd1s)
dotObj_dfa(dfaOdd1s).source
ends0101 = md2mc('''
DFA
I : 0 -> S0
I : 1 -> I
S0 : 0 -> S0
S0 : 1 -> S01
S01 : 0 -> S010
S01 : 1 -> I
S010 : 0 -> S0
S010 : 1 -> F0101
F0101 : 0 -> S010
F0101 : 1 -> I
''')
dotObj_dfa(ends0101)
dotObj_dfa(ends0101).source
odd1sORends0101 = union_dfa(dfaOdd1s,ends0101)
dotObj_dfa(odd1sORends0101)
dotObj_dfa(odd1sORends0101)
dotObj_dfa(odd1sORends0101).source
Minodd1sORends0101 = min_dfa(odd1sORends0101)
dotObj_dfa(Minodd1sORends0101)
dotObj_dfa(Minodd1sORends0101).source
iso_dfa(odd1sORends0101, Minodd1sORends0101)
langeq_dfa(odd1sORends0101, Minodd1sORends0101)
odd1sANDends0101 = intersect_dfa(dfaOdd1s,ends0101)
dotObj_dfa(odd1sANDends0101)
Minodd1sANDends0101 = min_dfa(odd1sANDends0101)
dotObj_dfa(Minodd1sANDends0101)
dotObj_dfa(Minodd1sANDends0101).source
CdfaOdd1s = comp_dfa(dfaOdd1s)
Cends0101 = comp_dfa(ends0101)
C_CdfaOdd1sORCends0101 = comp_dfa(union_dfa(CdfaOdd1s, Cends0101))
dotObj_dfa(C_CdfaOdd1sORCends0101)
MinC_CdfaOdd1sORCends0101 = min_dfa(C_CdfaOdd1sORCends0101)
dotObj_dfa(MinC_CdfaOdd1sORCends0101)
iso_dfa(MinC_CdfaOdd1sORCends0101, Minodd1sANDends0101)
blimp = md2mc('''
DFA
I1 : a -> F2
I1 : b -> F3
F2 : a -> S8
F2 : b -> S5
F3 : a -> S7
F3 : b -> S4
S4 : a | b -> F6
S5 : a | b -> F6
F6 : a | b -> F6
S7 : a | b -> F6
S8 : a -> F6
S8 : b -> F9
F9 : a -> F9
F9 : b -> F6
''')
dblimp = dotObj_dfa(blimp)
dblimp
dblimp = dotObj_dfa(blimp, FuseEdges=True)
dblimp
dblimp.source
mblimp = min_dfa(blimp)
dmblimp = dotObj_dfa(mblimp)
dmblimp
###Output
_____no_output_____
###Markdown
This shows how DeMorgan's Law applies to DFAs. It also shows how, using the tools provided to us, we can continually check our work.
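The identity exercised above can also be stated as a single check that reuses the machines built earlier (a sketch that merely recombines calls already made above):
```python
# A intersect B is language-equivalent to complement(complement(A) union complement(B));
# minimal DFAs of the same language are isomorphic, so iso_dfa should agree
iso_dfa(min_dfa(intersect_dfa(dfaOdd1s, ends0101)),
        min_dfa(comp_dfa(union_dfa(comp_dfa(dfaOdd1s), comp_dfa(ends0101)))))
```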
###Code
testdfa = md2mc('''DFA
I : 0 -> I
I : 1 -> F
F : 0 -> I
''')
testdfa
tot_testdfa = totalize_dfa(testdfa)
dotObj_dfa(tot_testdfa)
dotObj_dfa_w_bh
dotObj_dfa_w_bh(tot_testdfa)
dotObj_dfa_w_bh(tot_testdfa, FuseEdges = True)
###Output
_____no_output_____
###Markdown
**Here is how we will represent a DFA in Python (taking Figure 3.4's example from the book). You can clearly see how the traits of the DFA are encoded. We prefer a Python dictionary, as it supports a number of convenient operations, and also one can add additional fields easily. **
###Code
DFA_fig34 = { 'Q': {'A', 'IF', 'B'},
'Sigma': {'0', '1'},
'Delta': { ('IF', '0'): 'A',
('IF', '1'): 'IF',
('A', '0'): 'B',
('A', '1'): 'A',
('B', '0'): 'IF',
('B', '1'): 'B' },
'q0': 'IF',
'F': {'IF'}
}
###Output
_____no_output_____
###Markdown
**We can now write routines to print DFA using dot. The main routines are listed below.** * dot_dfa_w_bh : lists all states of a DFA including black-hole states* dot_dfa : lists all isNotBH states (see below for a defn), i.e. suppress black-holes - Usually there are too many transitions to them and that clutters the view ======
###Code
# Some tests pertaining to totalize_dfa, is_consistent_dfa, etc
DFA_fig34 = { 'Q': {'A', 'IF', 'B'},
'Sigma': {'0', '1'},
'Delta': { ('IF', '0'): 'A',
('IF', '1'): 'IF',
('A', '0'): 'B',
('A', '1'): 'A',
('B', '0'): 'IF',
('B', '1'): 'B' },
'q0': 'IF',
'F': {'IF'}
}
def tests_dfa_consist():
"""Some tests wrt DFA routines.
"""
DFA_fig34_Q = DFA_fig34["Q"]
DFA_fig34_Sigma = DFA_fig34["Sigma"]
randQ = random.choice(list(DFA_fig34_Q))
randSym = random.choice(list(DFA_fig34_Sigma))
DFA_fig34_deepcopy = copy.deepcopy(DFA_fig34)
print('is_consistent_dfa(DFA_fig34) =',
is_consistent_dfa(DFA_fig34) )
print('Removing mapping for ' +
"(" + randQ + "," + randSym + ")" +
"from DFA_fig34_deepcopy")
DFA_fig34_deepcopy["Delta"].pop((randQ,randSym))
print('is_consistent_dfa(DFA_fig34_deepcopy) =',
is_consistent_dfa(DFA_fig34_deepcopy) )
totalized = totalize_dfa(DFA_fig34_deepcopy)
print ( 'is_consistent_dfa(totalized) =',
is_consistent_dfa(totalized) )
assert(totalized == totalize_dfa(totalized)) # Must pass
dfaBESame = md2mc('''
DFA !! Begins and ends with same; epsilon allowed
IF : 0 -> F0
IF : 1 -> F1
!!
F0 : 0 -> F0
F0 : 1 -> S01
S01 : 1 -> S01
S01 : 0 -> F0
!!
F1 : 1 -> F1
F1 : 0 -> S10
S10 : 0 -> S10
S10 : 1 -> F1
''')
DOdfaBESame = dotObj_dfa(dfaBESame)
DOdfaBESame
DOdfaBESame.source
###Output
_____no_output_____
###Markdown
Let us now administer some tests to print dot-strings generated.We will demonstrate two ways to print automata: 1. First generate a dot string via dot_dfa or dot_dfa_w_bh (calling the result "dot_string") 1. Then use the srcObj = Source(dot_string) call 2. Thereafter we can display the srcObj object directly into the browser 3. Or, one can also later convert the dot_string to svg or PDF2. OR, one can directly generate a dot object via the dotObj_dfa or dotObj_dfa_w_bh call (calling the result "dot_object") 1. Then directly display the dot_object 2. There are conversions available for dot_object to other formats too
###Code
DFA_fig34 = { 'Q': {'A', 'IF', 'B'},
'Sigma': {'0', '1'},
'Delta': { ('IF', '0'): 'A',
('IF', '1'): 'IF',
('A', '0'): 'B',
('A', '1'): 'A',
('B', '0'): 'IF',
('B', '1'): 'B' },
'q0': 'IF',
'F': {'IF'}
}
def dfa_dot_tests():
"""Some dot-routine related tests.
"""
dot_string = dot_dfa(DFA_fig34)
dot_object1 = Source(dot_string)
return dot_object1.source
###Output
_____no_output_____
###Markdown
Let us test functions step_dfa, run_dfa, and accepts_dfa
###Code
# Some tests of step, run, etc.
DFA_fig34 = { 'Q': {'A', 'IF', 'B'},
'Sigma': {'0', '1'},
'Delta': { ('IF', '0'): 'A',
('IF', '1'): 'IF',
('A', '0'): 'B',
('A', '1'): 'A',
('B', '0'): 'IF',
('B', '1'): 'B' },
'q0': 'IF',
'F': {'IF'}
}
def step_run_accepts_tests():
print("step_dfa(DFA_fig34, 'IF', '1') = ",
step_dfa(DFA_fig34, 'IF', '1'))
print("step_dfa(DFA_fig34, 'A', '0') = ",
step_dfa(DFA_fig34, 'A', '0'))
print("run_dfa(DFA_fig34, '101001') = ",
run_dfa(DFA_fig34, '101001'))
print("run_dfa(DFA_fig34, '101000') = ",
run_dfa(DFA_fig34, '101000'))
print("accepts_dfa(DFA_fig34, '101001') = ",
accepts_dfa(DFA_fig34, '101001'))
print("accepts_dfa(DFA_fig34, '101000') = ",
accepts_dfa(DFA_fig34, '101000'))
dotObj_dfa(DFA_fig34, "DFA_fig34")
# Run a complementation test
DFA_fig34_comp = comp_dfa(DFA_fig34)
dotObj_dfa(DFA_fig34_comp, "DFA_fig34_comp")
dotObj_dfa(DFA_fig34)
dotObj_dfa(DFA_fig34_comp, "DFA_fig34_comp")
dotObj_dfa(DFA_fig34_comp)
# One more test
du = union_dfa(DFA_fig34, DFA_fig34_comp)
dotObj_dfa(du, "orig union")
pdu = pruneUnreach(du)
pdu
pduObj = dotObj_dfa(pdu, "union of 34 and comp")
pduObj
D34 = { 'Q': {'A', 'IF', 'B'},
'Sigma': {'0', '1'},
'Delta': { ('IF', '0'): 'A',
('IF', '1'): 'IF',
('A', '0'): 'B',
('A', '1'): 'A',
('B', '0'): 'IF',
('B', '1'): 'B' },
'q0': 'IF',
'F': {'IF'}
}
D34bl = { 'Q': {'A', 'IF', 'B', 'A1', 'B1'},
'Sigma': {'0', '1'},
'Delta': { ('IF', '0'): 'A',
('IF', '1'): 'IF',
('A', '0'): 'B1',
('A', '1'): 'A1',
('A1', '0'): 'B',
('A1', '1'): 'A',
('B1', '0'): 'IF',
('B1', '1'): 'B',
('B','0') : 'IF',
('B', '1'): 'B1' },
'q0': 'IF',
'F': {'IF'}
}
d34 = dotObj_dfa(D34, "D34")
d34 # Display it!
langeq_dfa(D34,D34bl,False)
iso_dfa(D34,D34bl)
DFA_fig34
d34 = DFA_fig34
d34
d34c = DFA_fig34_comp
d34c
iso_dfa(d34,d34)
iso_dfa(d34,d34c)
d34v1 = {'Delta': {('A', '0'): 'B',
('A', '1'): 'B',
('B', '0'): 'IF',
('B', '1'): 'B',
('IF', '0'): 'A',
('IF', '1'): 'IF'},
'F': {'IF'},
'Q': {'A', 'B', 'IF'},
'Sigma': {'0', '1'},
'q0': 'IF'}
dotObj_dfa(d34v1)
d34v2 = {'Delta': {('A', '0'): 'B',
('A', '1'): 'B',
('B', '0'): 'IF',
('B', '1'): 'B',
('IF', '0'): 'A',
('IF', '1'): 'IF'},
'F': {'IF', 'B'},
'Q': {'A', 'B', 'IF'},
'Sigma': {'0', '1'},
'q0': 'IF'}
iso_dfa(d34,d34v1)
iso_dfa(d34,d34v2)
iso_dfa(d34v1,d34v2)
div1 = pruneUnreach(intersect_dfa(d34v1,d34v2))
dotObj_dfa(div1)
div2 = pruneUnreach(union_dfa(d34v1,d34v2))
dotObj_dfa(div2)
iso_dfa(div1,div2)
langeq_dfa(div1,div2,True)
d34bl = dotObj_dfa(D34bl, "D34bl")
d34bl # Display it!
d34bl = dotObj_dfa(D34bl, FuseEdges=True, dfaName="D34bl")
d34bl
iso_dfa(D34,D34bl)
langeq_dfa(D34,D34bl)
###Output
_____no_output_____
###Markdown
###Code
du
dotObj_dfa(pruneUnreach(D34bl), "D34bl")
### DFA minimization (another example)
Bloat1 = {'Q': {'S1', 'S3', 'S2', 'S5', 'S4', 'S6' },
'Sigma': {'b', 'a'},
'Delta': { ('S1','b') : 'S3',
('S1','a') : 'S2',
('S3','a') : 'S5',
('S2','a') : 'S4',
('S3','b') : 'S4',
('S2','b') : 'S5',
('S5','b') : 'S6',
('S5','a') : 'S6',
('S4','b') : 'S6',
('S4','a') : 'S6',
('S6','b') : 'S6',
('S6','a') : 'S6' },
'q0': 'S1',
'F': {'S2','S3','S6'}
}
Bloat1O = dotObj_dfa(Bloat1, dfaName="Bloat1")
Bloat1O # Display it!
dotObj_dfa(Bloat1, FuseEdges=True, dfaName="Bloat1")
bloated_dfa = md2mc('''
DFA
IS1 : a -> FS2
IS1 : b -> FS3
FS2 : a -> S4
FS2 : b -> S5
FS3 : a -> S5
FS3 : b -> S4
S4 : a | b -> FS6
S5 : a | b -> FS6
FS6 : a | b -> FS6
''')
dotObj_dfa(bloated_dfa)
dotObj_dfa(bloated_dfa).source
###Output
_____no_output_____
###Markdown
Now, here is how the computation proceeds for this example:

```
     Frame-0                     Frame-1                     Frame-2
S2  -1                      S2   0                      S2   0
S3  -1  -1                  S3   0  -1                  S3   0  -1
S4  -1  -1  -1              S4  -1   0   0              S4   2   0   0
S5  -1  -1  -1  -1          S5  -1   0   0  -1          S5   2   0   0  -1
S6  -1  -1  -1  -1  -1      S6   0  -1  -1   0   0      S6   0   1   1   0   0
    S1  S2  S3  S4  S5          S1  S2  S3  S4  S5          S1  S2  S3  S4  S5
    Initial                     0-distinguishable           1-distinguishable

     Frame-3                     Frame-4 = Frame-3
S2   0
S3   0  -1
S4   2   0   0
S5   2   0   0  -1
S6   0   1   1   0   0
    S1  S2  S3  S4  S5
    2-distinguishable
```

Here is the algorithm, going frame by frame.
- Initial Frame: The initial frame is drawn to clash all _combinations_ of states taken two at a time. Since we have 6 states, we have $6\choose 2$ = $15$ entries. We put a -1 against each such pair to denote that they have not been found distinguishable yet.
- Frame *0-distinguishable*: We now put a 0 where a pair of states is 0-distinguishable. This means the states are distinguishable after consuming $\varepsilon$. This of course means that the states are themselves distinguishable. This is only possible if one is a final state and the other is not (in that case, one state, after consuming $\varepsilon$, accepts, and the other state, after consuming $\varepsilon$, does not accept).
  - So for instance, notice that (S3,S1) and (S4,S2) are 0-distinguishable, meaning that one is a final and the other is a non-final state.
- Frame *1-distinguishable*: We now put a 1 where a pair of states is 1-distinguishable. This means the states are distinguishable after consuming a string of length $1$ (a single symbol). This is only possible if one state transitions to a final state and the other transitions to a non-final state after consuming a member of $\Sigma$. State pairs (S6,S2) and (S6,S3) are of this kind. While both S6 and S2 are final states (hence _0-indistinguishable_), after consuming an 'a' (or a 'b') they respectively go to a final/non-final state. This means that
  - after processing **the same symbol**, one state -- let's say pre_p -- finds itself landing in a state p and another state -- let's say pre_q -- finds itself landing in a state q such that (p,q) is 0-distinguishable.
  - When this happens, states pre_p and pre_q are **1-distinguishable**.
- Frame *2-distinguishable*: We now put a 2 where a pair of states is 2-distinguishable. This means the states are distinguishable after consuming a string of length $2$. This is only possible if one state transitions to a state (say p) and the other transitions to a state (say q) after consuming a member of $\Sigma$ such that (p,q) is **1-distinguishable**. State pairs (S5,S1) and (S4,S1) are 2-distinguishable because
  - after processing **the same symbol**, one state -- let's say pre_p -- finds itself landing in a state p and another state -- let's say pre_q -- finds itself landing in a state q such that (p,q) is 1-distinguishable.
  - When this happens, states pre_p and pre_q are **2-distinguishable**.
  - One example is this:
    - S5 and S1 are 2-distinguishable.
    - This is because after seeing an 'aa', S1 lands in a non-final state while S5 lands in a final state.
    - Observe that "aa" = "a" + "a". Thus, after eating the first "a", S1 lands in S2 while S5 lands in S6, and (S2,S6) have already been deemed 1-distinguishable.
    - Thus, when we mark (S5,S1) as 2-distinguishable, we are sending the matrix entry at (S5,S1) from -1 to 2.
- Now, in search of 3-distinguishability, we catch hold of all pairs in the matrix and see if we can send another -1 entry to "3". This appears not to happen.
  - Thus, if (S2,S3) is pushed via any sequence of symbols (any string) of any length, both always land in the same type of state. For instance, after seeing 'ababba', S2 is in S6, while S3 is also in S6.
  - Thus, given no changes in the matrix, we stop.
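The frame-filling procedure above fits in a few lines. Here is a minimal sketch of the pairwise-marking idea (this is not Jove's min_dfa); it uses the same DFA dictionary format as the rest of this notebook and assumes the DFA is total, i.e. every (state, symbol) pair appears in Delta, which holds for Bloat1:

```python
from itertools import combinations

def distinguishable_pairs(D):
    """Mark state pairs of a total DFA as distinguishable, frame by frame."""
    # 0-distinguishable: exactly one of the two states is final
    marked = {frozenset(p) for p in combinations(D['Q'], 2)
              if (p[0] in D['F']) != (p[1] in D['F'])}
    changed = True
    while changed:                          # each pass adds one more "frame"
        changed = False
        for p, q in combinations(D['Q'], 2):
            if frozenset((p, q)) in marked:
                continue
            for c in D['Sigma']:
                succ = frozenset((D['Delta'][(p, c)], D['Delta'][(q, c)]))
                if len(succ) == 2 and succ in marked:
                    marked.add(frozenset((p, q)))   # distinguishable via symbol c
                    changed = True
                    break
    return marked
```

Pairs that never get marked (for Bloat1, {S2, S3} and {S4, S5}) are exactly the ones a minimizer may merge.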
###Code
dotObj_dfa(min_dfa(Bloat1), FuseEdges=True, dfaName="shrunkBloat1")
min_bloat = min_dfa(Bloat1)
dotObj_dfa(min_bloat).source
prd34b1 = pruneUnreach(D34bl)
dotObj_dfa(prd34b1, "prd34b1")
dotObj_dfa(min_dfa(prd34b1), "prd34b1min")
third1dfa=md2mc(src="File", fname="machines/dfafiles/thirdlastis1.dfa")
third1dfa
dotObj_dfa(third1dfa)
ends0101 =\
"\
DFA\
\
I : 0 -> S0 \
I : 1 -> I \
S0 : 0 -> S0 \
S0 : 1 -> S01 \
S01 : 0 -> S010 \
S01 : 1 -> I \
S010 : 0 -> S0 \
S010 : 1 -> F0101 \
F0101 : 0 -> S010 \
F0101 : 1 -> I \
"
ends0101
dfaends0101=md2mc(ends0101)
dfaends0101
dped1 = md2mc(src="File", fname="machines/dfafiles/pedagogical1.dfa")
#machines/dfafiles/pedagogical1.dfa
dped1
dotObj_dfa(dped1)
dotObj_dfa(dped1, FuseEdges=True)
dotObj_dfa(md2mc(ends0101))
thirdlastis1=md2mc(src="File", fname="machines/dfafiles/thirdlastis1.dfa")
#machines/dfafiles/thirdlastis1.dfa
thirdlastis1
dotObj_dfa(thirdlastis1)
dped1=md2mc(src="File", fname="machines/dfafiles/pedagogical2.dfa")
#machines/dfafiles/pedagogical2.dfa
dotObj_dfa(dped1)
secondLastIs1 = md2mc('''
!!------------------------------------------------------------
!! This DFA looks for patterns of the form ....1.
!! i.e., the second-last (counting from the end-point) is a 1
!!
!! DFAs find such patterns "very stressful to handle",
!! as they are kept guessing of the form 'are we there yet?'
!! 'are we seeing the second-last' ?
!! They must keep all the failure options at hand. Even after
!! a 'fleeting glimpse' of the second-last, more inputs can
!! come barreling-in to make that "lucky 1" a non-second-last.
!!
!! We take 7 states in the DFA solution.
!!------------------------------------------------------------
DFA
!!------------------------------------------------------------
!! State : in -> tostate !! comment
!!------------------------------------------------------------
I : 0 -> S0 !! Enter at init state I
I : 1 -> S1 !! Record bit seen in state letter
!! i.e., S0 means "state after seeing a 0"
S0 : 0 -> S00 !! continue recording input seen
S0 : 1 -> S01 !! in state-letter. This is a problem-specific
!! way of compressing the input seen so far.
S1 : 0 -> F10 !! We now have a "second last" available!
S1  : 1 -> F11 !! Both F10 and F11 are "F" (final)
S00 : 0 -> S00 !! History of things seen is still 00
S00 : 1 -> S01 !! Remember 01 in the state
S01 : 0 -> F10 !! We again have a second-last of 1
S01 : 1 -> F11 !! We are in F11 because of 11 being last seen
F10 : 0 -> S00 !! The second-last 1 gets pushed-out
F10 : 1 -> S01 !! The second-last 1 gets pushed-out here too
F11 : 0 -> F10 !! Still we have a second-last 1
F11 : 1 -> F11 !! Stay in F11, as last two seen are 11
!!------------------------------------------------------------
''')
from math import floor, log, pow
def nthnumeric(N, Sigma={'a','b'}):
"""Assume Sigma is a 2-sized list/set of chars (default {'a','b'}).
Produce the Nth string in numeric order, where N >= 0.
Idea : Given N, get b = floor(log_2(N+1)) - need that
many places; what to fill in the places is the binary
code for N - (2^b - 1) with 0 as Sigma[0] and 1 as Sigma[1].
"""
if (type(Sigma)==set):
S = list(Sigma)
else:
assert(type(Sigma)==list
), "Expected to be given set/list for arg2 of nthnumeric."
S = Sigma
assert(len(Sigma)==2
),"Expected to be given a Sigma of length 2."
if(N==0):
return ''
else:
width = floor(log(N+1, 2))
tofill = int(N - pow(2, width) + 1)
relevant_binstr = bin(tofill)[2::] # strip the 0b
# in the leading string
len_to_makeup = width - len(relevant_binstr)
return (S[0]*len_to_makeup +
shomo(relevant_binstr,
lambda x: S[1] if x=='1' else S[0]))
nthnumeric(20,['0','1'])
run_dfa(secondLastIs1, '0101')
accepts_dfa(secondLastIs1, '0101')
tests = [ nthnumeric(i, ['0','1']) for i in range(12) ]
for t in tests:
if accepts_dfa(secondLastIs1, t):
print("This DFA accepts ", t)
else:
print("This DFA rejects ", t)
help(run_dfa)
###Output
Help on function run_dfa in module jove.Def_DFA:
run_dfa(D, s)
In : D (consistent DFA)
s (string over D's sigma, including "")
Out: next state of D["q0"] via string s
###Markdown
This is an extensive illustration of union, intersection and complementation, DFA minimization, isomorphism test, language equivalence test, and an application of DeMorgan's law
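Before walking through the cells, the DeMorgan's-law check exercised below can be stated compactly with the same helpers (a sketch; A and B stand for any two DFAs over the same alphabet, such as the dfaOdd1s and ends0101 machines built next):

```python
def demorgan_check(A, B):
    """Sketch: L(~( ~A U ~B )) should equal L(A & B); compare via min + iso."""
    lhs = min_dfa(comp_dfa(union_dfa(comp_dfa(A), comp_dfa(B))))
    rhs = min_dfa(intersect_dfa(A, B))
    return iso_dfa(lhs, rhs)    # expected True; langeq_dfa(lhs, rhs) also works
```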
###Code
dfaOdd1s = md2mc('''
DFA
I : 0 -> I
I : 1 -> F
F : 0 -> F
F : 1 -> I
''')
dotObj_dfa(dfaOdd1s)
dotObj_dfa(dfaOdd1s).source
ends0101 = md2mc('''
DFA
I : 0 -> S0
I : 1 -> I
S0 : 0 -> S0
S0 : 1 -> S01
S01 : 0 -> S010
S01 : 1 -> I
S010 : 0 -> S0
S010 : 1 -> F0101
F0101 : 0 -> S010
F0101 : 1 -> I
''')
dotObj_dfa(ends0101)
dotObj_dfa(ends0101).source
odd1sORends0101 = union_dfa(dfaOdd1s,ends0101)
dotObj_dfa(odd1sORends0101)
dotObj_dfa(odd1sORends0101)
dotObj_dfa(odd1sORends0101).source
Minodd1sORends0101 = min_dfa(odd1sORends0101)
dotObj_dfa(Minodd1sORends0101)
dotObj_dfa(Minodd1sORends0101).source
iso_dfa(odd1sORends0101, Minodd1sORends0101)
langeq_dfa(odd1sORends0101, Minodd1sORends0101)
odd1sANDends0101 = intersect_dfa(dfaOdd1s,ends0101)
dotObj_dfa(odd1sANDends0101)
Minodd1sANDends0101 = min_dfa(odd1sANDends0101)
dotObj_dfa(Minodd1sANDends0101)
dotObj_dfa(Minodd1sANDends0101).source
CdfaOdd1s = comp_dfa(dfaOdd1s)
Cends0101 = comp_dfa(ends0101)
C_CdfaOdd1sORCends0101 = comp_dfa(union_dfa(CdfaOdd1s, Cends0101))
dotObj_dfa(C_CdfaOdd1sORCends0101)
MinC_CdfaOdd1sORCends0101 = min_dfa(C_CdfaOdd1sORCends0101)
dotObj_dfa(MinC_CdfaOdd1sORCends0101)
iso_dfa(MinC_CdfaOdd1sORCends0101, Minodd1sANDends0101)
blimp = md2mc('''
DFA
I1 : a -> F2
I1 : b -> F3
F2 : a -> S8
F2 : b -> S5
F3 : a -> S7
F3 : b -> S4
S4 : a | b -> F6
S5 : a | b -> F6
F6 : a | b -> F6
S7 : a | b -> F6
S8 : a -> F6
S8 : b -> F9
F9 : a -> F9
F9 : b -> F6
''')
dblimp = dotObj_dfa(blimp)
dblimp
dblimp = dotObj_dfa(blimp, FuseEdges=True)
dblimp
dblimp.source
mblimp = min_dfa(blimp)
dmblimp = dotObj_dfa(mblimp)
dmblimp
###Output
_____no_output_____
###Markdown
This shows how DeMorgan's Law applies to DFAs. It also shows how, using the tools provided to us, we can continually check our work.
###Code
testdfa = md2mc('''DFA
I : 0 -> I
I : 1 -> F
F : 0 -> I
''')
testdfa
tot_testdfa = totalize_dfa(testdfa)
dotObj_dfa(tot_testdfa)
dotObj_dfa_w_bh
dotObj_dfa_w_bh(tot_testdfa)
dotObj_dfa_w_bh(tot_testdfa, FuseEdges = True)
###Output
_____no_output_____ |
tutorials/timeseries_opf.ipynb | ###Markdown
Time series calculation with OPF This tutorial shows how a simple time series simulation with optimal power flow is performed with the timeseries and control module in pandapower.
###Code
import pandapower as pp
import numpy as np
import os
import pandas as pd
import tempfile
from pandapower.timeseries import DFData
from pandapower.timeseries import OutputWriter
from pandapower.timeseries.run_time_series import run_timeseries
from pandapower.control import ConstControl
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
We created a simple network and then set the constraints for the buses, lines, and external grid. We set the costs for the external grid and the sgen. The cost of the sgen is kept negative to maximize its generation; the sgen can be controlled by the OPF, but the load cannot.
###Code
def simple_test_net():
net = pp.create_empty_network()
b0 = pp.create_bus(net, 110, min_vm_pu=0.98, max_vm_pu=1.05)
b1 = pp.create_bus(net, 20, min_vm_pu=0.98, max_vm_pu=1.05)
b2 = pp.create_bus(net, 20, min_vm_pu=0.98, max_vm_pu=1.05)
b3 = pp.create_bus(net, 20, min_vm_pu=0.9, max_vm_pu=1.05)
e=pp.create_ext_grid(net, b0, min_p_mw=-200, max_p_mw=200)
costeg = pp.create_poly_cost(net, e, 'ext_grid', cp1_eur_per_mw=10)
pp.create_line(net, b0, b1, 10, "149-AL1/24-ST1A 110.0", max_loading_percent=80)
pp.create_line(net, b1, b2, 10, "149-AL1/24-ST1A 110.0", max_loading_percent=80)
pp.create_line(net, b1, b3, 10, "149-AL1/24-ST1A 110.0", max_loading_percent=80)
pp.create_load(net, b3, p_mw=10., q_mvar=-5., name='load1', controllable=False)
g1=pp.create_sgen(net, b2, p_mw=0., q_mvar=-2, min_p_mw=0, max_p_mw=30, min_q_mvar=-3, max_q_mvar=3, name='sgen1', controllable=True)
pp.create_poly_cost(net, g1, 'sgen1', cp1_eur_per_mw=-1)
return net
###Output
_____no_output_____
###Markdown
We created the data source (which contains the time series P values) and limited the range of the load power values so that the line connecting the load will not be overloaded, since the load is not controlled by the OPF.
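Note that DFData simply wraps a DataFrame, so a deterministic profile works exactly like the random one above; a made-up example (the sine shapes are arbitrary):

```python
# deterministic 24-hour profiles instead of random ones (values are illustrative)
hours = np.arange(24)
det_profiles = pd.DataFrame({
    'load1_p': 5. + 4. * np.sin(2 * np.pi * hours / 24),             # MW
    'sgen1_p': np.clip(15. * np.sin(np.pi * hours / 24), 0., None),  # MW
})
det_ds = DFData(det_profiles)   # can be passed to the same ConstControl setup
```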
###Code
def create_data_source(n_timesteps=24):
profiles = pd.DataFrame()
profiles['load1_p'] = np.random.random(n_timesteps) * 10.
profiles['sgen1_p'] = np.random.random(n_timesteps) * 20.
ds = DFData(profiles)
return profiles, ds
###Output
_____no_output_____
###Markdown
We created the controllers to update the P values of the load and the sgen.
###Code
def create_controllers(net, ds):
ConstControl(net, element='load', variable='p_mw', element_index=[0],
data_source=ds, profile_name=["load1_p"])
ConstControl(net, element='sgen', variable='p_mw', element_index=[0],
data_source=ds, profile_name=["sgen1_p"])
###Output
_____no_output_____
###Markdown
Instead of saving the whole net (which takes a lot of time), we extract only predefined outputs. The variables logged in create_output_writer are saved to the hard drive after the time series loop.
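The same log_variable pattern extends to any result table; for example (hypothetical additions, not used in this tutorial, on an existing OutputWriter instance `ow`):

```python
# hypothetical extra outputs
ow.log_variable('res_ext_grid', 'p_mw')   # external grid exchange
ow.log_variable('res_load', 'p_mw')       # load active power
```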
###Code
def create_output_writer(net, time_steps, output_dir):
ow = OutputWriter(net, time_steps, output_path=output_dir, output_file_type=".xlsx", log_variables=list())
ow.log_variable('res_sgen', 'p_mw')
ow.log_variable('res_bus', 'vm_pu')
ow.log_variable('res_line', 'loading_percent')
ow.log_variable('res_line', 'i_ka')
return ow
###Output
_____no_output_____
###Markdown
Let's run the code for the time series simulation with OPF. Note that the parameter 'run' is set to the function that runs the OPF (run=pp.runopp).
###Code
output_dir = os.path.join(tempfile.gettempdir(), "time_series_example")
print("Results can be found in your local temp folder: {}".format(output_dir))
if not os.path.exists(output_dir):
os.mkdir(output_dir)
# create the network
net = simple_test_net()
# create (random) data source
n_timesteps = 24
profiles, ds = create_data_source(n_timesteps)
# create controllers (to control P values of the load and the sgen)
create_controllers(net, ds)
# time steps to be calculated. Could also be a list with non-consecutive time steps
time_steps = range(0, n_timesteps)
# the output writer with the desired results to be stored to files.
ow = create_output_writer(net, time_steps, output_dir=output_dir)
# the main time series function with optimal power flow
run_timeseries(net, time_steps, run=pp.runopp)
###Output
Results can be found in your local temp folder: C:\Users\ssnigdha\AppData\Local\Temp\time_series_example
Progress: |██████████████████████████████████████████████████| 100.0% Complete
###Markdown
We can see that all of the bus voltages stay within the constraint range defined for the optimal power flow.
###Code
# voltage results
vm_pu_file = os.path.join(output_dir, "res_bus", "vm_pu.xlsx")
vm_pu = pd.read_excel(vm_pu_file, index_col=0)
vm_pu.plot(label="vm_pu")
plt.xlabel("time step")
plt.ylabel("voltage mag. [p.u.]")
plt.title("Voltage Magnitude")
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
The loading_percent of the lines also stays below 80%, as defined by the constraints for the optimal power flow.
###Code
# line loading results
ll_file = os.path.join(output_dir, "res_line", "loading_percent.xlsx")
line_loading = pd.read_excel(ll_file, index_col=0)
line_loading.plot()
plt.xlabel("time step")
plt.ylabel("line loading [%]")
plt.title("Line Loading")
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Here we compared the sgen power generation before and after OPF.
###Code
# sgen results
sgen_file = os.path.join(output_dir, "res_sgen", "p_mw.xlsx")
sgen = pd.read_excel(sgen_file, index_col=0)
ax=sgen[0].plot(label="sgen (after OPF)")
ds.df.sgen1_p.plot(ax=ax, label="sgen (original)", linestyle='--')
ax.legend()
plt.xlabel("time step")
plt.ylabel("P [MW]")
plt.grid()
plt.show()
###Output
_____no_output_____ |
notebooks/basic_ml/04_Linear_Regression.ipynb | ###Markdown
Linear RegressionIn this lesson we will learn about linear regression. We will understand the basic math behind it, implement it in just NumPy and then in TensorFlow + Keras. Overview Our goal is to learn a linear model $\hat{y}$ that models $y$ given $X$. $\hat{y} = XW + b$* $\hat{y}$ = predictions | $\in \mathbb{R}^{NX1}$ ($N$ is the number of samples)* $X$ = inputs | $\in \mathbb{R}^{NXD}$ ($D$ is the number of features)* $W$ = weights | $\in \mathbb{R}^{DX1}$ * $b$ = bias | $\in \mathbb{R}^{1}$ * **Objective:** Use inputs $X$ to predict the output $\hat{y}$ using a linear model. The model will be a line of best fit that minimizes the distance between the predicted (model's output) and target (ground truth) values. Training data $(X, y)$ is used to train the model and learn the weights $W$ using gradient descent.* **Advantages:** * Computationally simple. * Highly interpretable. * Can account for continuous and categorical features.* **Disadvantages:** * The model will perform well only when the data is linearly separable (for classification). * Usually not used for classification and only for regression.* **Miscellaneous:** You can also use linear regression for binary classification tasks where if the predicted continuous value is above a threshold, it belongs to a certain class. But we will cover better techniques for classification in future lessons and will focus on linear regression for continuous regression tasks only. Generate data We're going to create some simple dummy data to apply linear regression on. It's going to create roughly linear data (`y = 3.5X + noise`); the random noise is added to create realistic data that doesn't perfectly align in a line. Our goal is to have the model converge to a similar linear equation (there will be slight variance since we added some noise).
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
SEED = 1234
NUM_SAMPLES = 50
# Set seed for reproducibility
np.random.seed(SEED)
# Generate synthetic data
def generate_data(num_samples):
"""Generate dummy data for linear regression."""
X = np.array(range(num_samples))
random_noise = np.random.uniform(-10,20,size=num_samples)
y = 3.5*X + random_noise # add some noise
return X, y
# Generate random (linear) data
X, y = generate_data(num_samples=NUM_SAMPLES)
data = np.vstack([X, y]).T
print (data[:5])
# Load into a Pandas DataFrame
df = pd.DataFrame(data, columns=['X', 'y'])
X = df[['X']].values
y = df[['y']].values
df.head()
# Scatter plot
plt.title("Generated data")
plt.scatter(x=df['X'], y=df['y'])
plt.show()
###Output
_____no_output_____
###Markdown
NumPy Now that we have our data prepared, we'll first implement linear regression using just NumPy. This will let us really understand the underlying operations. Data Split data Since our task is a regression task, we will randomly split our dataset into **three** sets: train, validation and test data splits.* train: used to train our model.* val : used to validate our model's performance during training.* test: used to do an evaluation of our fully trained model.
###Code
TRAIN_SIZE = 0.7
VAL_SIZE = 0.15
TEST_SIZE = 0.15
SHUFFLE = True
# Shuffle data
if SHUFFLE:
indices = list(range(NUM_SAMPLES))
np.random.shuffle(indices)
X = X[indices]
y = y[indices]
###Output
_____no_output_____
###Markdown
**NOTE**: Be careful not to shuffle X and y separately because then the inputs won't correspond to the outputs!
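To see why, here is a throwaway sketch with made-up values (separate from the data above):

```python
# shuffling X and y with two independent permutations breaks the row pairing
rng = np.random.RandomState(0)
X_demo = np.arange(5).reshape(-1, 1)
y_demo = 3.5 * X_demo
bad_X, bad_y = rng.permutation(X_demo), rng.permutation(y_demo)  # two different orders
print(np.all(bad_y == 3.5 * bad_X))  # almost surely False: rows no longer correspond
```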
###Code
# Split indices
train_start = 0
train_end = int(0.7*NUM_SAMPLES)
val_start = train_end
val_end = int((TRAIN_SIZE+VAL_SIZE)*NUM_SAMPLES)
test_start = val_end
# Split data
X_train = X[train_start:train_end]
y_train = y[train_start:train_end]
X_val = X[val_start:val_end]
y_val = y[val_start:val_end]
X_test = X[test_start:]
y_test = y[test_start:]
print (f"X_train: {X_train.shape}, y_train: {y_train.shape}")
print (f"X_val: {X_val.shape}, y_test: {y_val.shape}")
print (f"X_test: {X_test.shape}, y_test: {y_test.shape}")
###Output
X_train: (35, 1), y_train: (35, 1)
X_val: (7, 1), y_test: (7, 1)
X_test: (8, 1), y_test: (8, 1)
###Markdown
Standardize data We need to standardize our data (zero mean and unit variance) so our models can optimize quickly when we are training. $z = \frac{x_i - \mu}{\sigma}$* $z$ = standardized value* $x_i$ = inputs* $\mu$ = mean* $\sigma$ = standard deviation
###Code
def standardize_data(data, mean, std):
return (data - mean)/std
# Determine means and stds
X_mean = np.mean(X_train)
X_std = np.std(X_train)
y_mean = np.mean(y_train)
y_std = np.std(y_train)
###Output
_____no_output_____
###Markdown
We need to treat the validation and test sets as if they were hidden datasets. So we only use the train set to determine the mean and std to avoid biasing our training process.
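For contrast, a quick sketch of what *not* to do (computing the statistics over all splits leaks information from the held-out data; the train-only values are what X_mean / X_std above already are):

```python
# leaky: statistics computed over train + val + test (don't do this)
X_all = np.concatenate([X_train, X_val, X_test])
leaky_mean, leaky_std = np.mean(X_all), np.std(X_all)

# honest: statistics from the training split only
honest_mean, honest_std = np.mean(X_train), np.std(X_train)
```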
###Code
# Standardize
X_train = standardize_data(X_train, X_mean, X_std)
y_train = standardize_data(y_train, y_mean, y_std)
X_val = standardize_data(X_val, X_mean, X_std)
y_val = standardize_data(y_val, y_mean, y_std)
X_test = standardize_data(X_test, X_mean, X_std)
y_test = standardize_data(y_test, y_mean, y_std)
# Check (means should be ~0 and std should be ~1)
print (f"X_train: mean: {np.mean(X_train, axis=0)[0]:.1f}, std: {np.std(X_train, axis=0)[0]:.1f}")
print (f"y_train: mean: {np.mean(y_train, axis=0)[0]:.1f}, std: {np.std(y_train, axis=0)[0]:.1f}")
print (f"X_val: mean: {np.mean(X_val, axis=0)[0]:.1f}, std: {np.std(X_val, axis=0)[0]:.1f}")
print (f"y_val: mean: {np.mean(y_val, axis=0)[0]:.1f}, std: {np.std(y_val, axis=0)[0]:.1f}")
print (f"X_test: mean: {np.mean(X_test, axis=0)[0]:.1f}, std: {np.std(X_test, axis=0)[0]:.1f}")
print (f"y_test: mean: {np.mean(y_test, axis=0)[0]:.1f}, std: {np.std(y_test, axis=0)[0]:.1f}")
###Output
X_train: mean: -0.0, std: 1.0
y_train: mean: 0.0, std: 1.0
X_val: mean: -0.5, std: 0.5
y_val: mean: -0.6, std: 0.5
X_test: mean: -0.6, std: 0.9
y_test: mean: -0.6, std: 0.9
###Markdown
Weights Our goal is to learn a linear model $\hat{y}$ that models $y$ given $X$. $\hat{y} = XW + b$* $\hat{y}$ = predictions | $\in \mathbb{R}^{NX1}$ ($N$ is the number of samples)* $X$ = inputs | $\in \mathbb{R}^{NXD}$ ($D$ is the number of features)* $W$ = weights | $\in \mathbb{R}^{DX1}$ * $b$ = bias | $\in \mathbb{R}^{1}$ 1. Randomly initialize the model's weights $W$.
###Code
INPUT_DIM = X_train.shape[1] # X is 1-dimensional
OUTPUT_DIM = y_train.shape[1] # y is 1-dimensional
# Initialize random weights
W = 0.01 * np.random.randn(INPUT_DIM, OUTPUT_DIM)
b = np.zeros((1, 1))
print (f"W: {W.shape}")
print (f"b: {b.shape}")
###Output
W: (1, 1)
b: (1, 1)
###Markdown
Modeling Model 2. Feed inputs $X$ into the model to receive the predictions $\hat{y}$. * $\hat{y} = XW + b$
###Code
# Forward pass [NX1] · [1X1] = [NX1]
y_pred = np.dot(X_train, W) + b
print (f"y_pred: {y_pred.shape}")
###Output
y_pred: (35, 1)
###Markdown
Loss 3. Compare the predictions $\hat{y}$ with the actual target values $y$ using the objective (cost) function to determine the loss $J$. A common objective function for linear regression is mean squared error (MSE). This function calculates the difference between the predicted and target values and squares it. * $J(\theta) = MSE = \frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2 $ * ${y}$ = ground truth | $\in \mathbb{R}^{NX1}$ * $\hat{y}$ = predictions | $\in \mathbb{R}^{NX1}$
###Code
# Loss
N = len(y_train)
loss = (1/N) * np.sum((y_train - y_pred)**2)
print (f"loss: {loss:.2f}")
###Output
loss: 0.99
###Markdown
Gradients 4. Calculate the gradient of loss $J(\theta)$ w.r.t. the model weights. * $J(\theta) = \frac{1}{N} \sum_i (y_i - \hat{y}_i)^2 = \frac{1}{N}\sum_i (y_i - X_iW)^2 $ * $\frac{\partial{J}}{\partial{W}} = -\frac{2}{N} \sum_i (y_i - X_iW) X_i = -\frac{2}{N} \sum_i (y_i - \hat{y}_i) X_i$ * $\frac{\partial{J}}{\partial{b}} = -\frac{2}{N} \sum_i (y_i - X_iW)1 = -\frac{2}{N} \sum_i (y_i - \hat{y}_i)1$
###Code
# Backpropagation
dW = -(2/N) * np.sum((y_train - y_pred) * X_train)
db = -(2/N) * np.sum((y_train - y_pred) * 1)
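# Optional sanity check (a sketch, not part of the original lesson): compare the
# analytic dW above with a centered finite-difference estimate.
eps = 1e-6
loss_fn = lambda W_, b_: np.mean((y_train - (np.dot(X_train, W_) + b_))**2)
dW_num = (loss_fn(W + eps, b) - loss_fn(W - eps, b)) / (2 * eps)
print (f"analytic dW: {dW:.6f}, numerical dW: {dW_num:.6f}")  # should agree closely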
###Output
_____no_output_____
###Markdown
**NOTE**: The gradient is the derivative, or the rate of change of a function. It's a vector that points in the direction of greatest increase of a function. For example the gradient of our loss function ($J$) with respect to our weights ($W$) will tell us how to change W so we can maximize $J$. However, we want to minimize our loss so we subtract the gradient from $W$. Update weights 5. Update the weights $W$ using a small learning rate $\alpha$. * $W = W - \alpha\frac{\partial{J}}{\partial{W}}$ * $b = b - \alpha\frac{\partial{J}}{\partial{b}}$
###Code
LEARNING_RATE = 1e-1
# Update weights
W += -LEARNING_RATE * dW
b += -LEARNING_RATE * db
###Output
_____no_output_____
###Markdown
**NOTE**: The learning rate $\alpha$ is a way to control how much we update the weights by. If we choose a small learning rate, it may take a long time for our model to train. However, if we choose a large learning rate, we may overshoot and our training will never converge. The specific learning rate depends on our data and the type of models we use but it's typically good to explore in the range of $[1e^{-8}, 1e^{-1}]$. We'll explore learning rate update strategies in later lessons.
###Code
NUM_EPOCHS = 100
# Initialize random weights
W = 0.01 * np.random.randn(INPUT_DIM, OUTPUT_DIM)
b = np.zeros((1, ))
# Training loop
for epoch_num in range(NUM_EPOCHS):
# Forward pass [NX1] · [1X1] = [NX1]
y_pred = np.dot(X_train, W) + b
# Loss
loss = (1/len(y_train)) * np.sum((y_train - y_pred)**2)
# show progress
if epoch_num%10 == 0:
print (f"Epoch: {epoch_num}, loss: {loss:.3f}")
# Backpropagation
dW = -(2/N) * np.sum((y_train - y_pred) * X_train)
db = -(2/N) * np.sum((y_train - y_pred) * 1)
# Update weights
W += -LEARNING_RATE * dW
b += -LEARNING_RATE * db
###Output
Epoch: 0, loss: 0.990
Epoch: 10, loss: 0.039
Epoch: 20, loss: 0.028
Epoch: 30, loss: 0.028
Epoch: 40, loss: 0.028
Epoch: 50, loss: 0.028
Epoch: 60, loss: 0.028
Epoch: 70, loss: 0.028
Epoch: 80, loss: 0.028
Epoch: 90, loss: 0.028
###Markdown
Evaluation
###Code
# Predictions
pred_train = W*X_train + b
pred_test = W*X_test + b
# Train and test MSE
train_mse = np.mean((y_train - pred_train) ** 2)
test_mse = np.mean((y_test - pred_test) ** 2)
print (f"train_MSE: {train_mse:.2f}, test_MSE: {test_mse:.2f}")
# Figure size
plt.figure(figsize=(15,5))
# Plot train data
plt.subplot(1, 2, 1)
plt.title("Train")
plt.scatter(X_train, y_train, label='y_train')
plt.plot(X_train, pred_train, color='red', linewidth=1, linestyle='-', label='model')
plt.legend(loc='lower right')
# Plot test data
plt.subplot(1, 2, 2)
plt.title("Test")
plt.scatter(X_test, y_test, label='y_test')
plt.plot(X_test, pred_test, color='red', linewidth=1, linestyle='-', label='model')
plt.legend(loc='lower right')
# Show plots
plt.show()
###Output
_____no_output_____
###Markdown
Interpretability Since we standardized our inputs and outputs, our weights were fit to those standardized values. So we need to unstandardize our weights so we can compare it to our true weight (3.5).Note that both X and y were standardized.$\hat{y}_{scaled} = b_{scaled} + \sum_{j=1}^{k}W_{{scaled}_j}x_{{scaled}_j}$* $y_{scaled} = \frac{\hat{y} - \bar{y}}{\sigma_y}$* $x_{scaled} = \frac{x_j - \bar{x}_j}{\sigma_j}$$\frac{\hat{y} - \bar{y}}{\sigma_y} = b_{scaled} + \sum_{j=1}^{k}W_{{scaled}_j}\frac{x_j - \bar{x}_j}{\sigma_j}$$ \hat{y}_{scaled} = \frac{\hat{y}_{unscaled} - \bar{y}}{\sigma_y} = {b_{scaled}} + \sum_{j=1}^{k} {W}_{{scaled}_j} (\frac{x_j - \bar{x}_j}{\sigma_j}) $$\hat{y}_{unscaled} = b_{scaled}\sigma_y + \bar{y} - \sum_{j=1}^{k} {W}_{{scaled}_j}(\frac{\sigma_y}{\sigma_j})\bar{x}_j + \sum_{j=1}^{k}{W}_{{scaled}_j}(\frac{\sigma_y}{\sigma_j})x_j $In the expression above, we can see the expression $\hat{y}_{unscaled} = W_{unscaled}x + b_{unscaled} $ where* $W_{unscaled} = \sum_{j=1}^{k}{W}_j(\frac{\sigma_y}{\sigma_j}) $* $b_{unscaled} = b_{scaled}\sigma_y + \bar{y} - \sum_{j=1}^{k} {W}_j(\frac{\sigma_y}{\sigma_j})\bar{x}_j$
###Code
# Unscaled weights
W_unscaled = W * (y_std/X_std)
b_unscaled = b * y_std + y_mean - np.sum(W_unscaled*X_mean)
print ("[actual] y = 3.5X + noise")
print (f"[model] y_hat = {W_unscaled[0][0]:.1f}X + {b_unscaled[0]:.1f}")
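# Sanity check (a sketch): predictions from the unscaled line should match the
# unstandardized predictions of the scaled model, up to floating-point error.
X_raw = X_train * X_std + X_mean                      # back to original units
pred_unscaled = W_unscaled * X_raw + b_unscaled       # y_hat from unscaled weights
pred_rescaled = (W * X_train + b) * y_std + y_mean    # unstandardize scaled y_hat
print (np.allclose(pred_unscaled, pred_rescaled))     # expect True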
###Output
[actual] y = 3.5X + noise
[model] y_hat = 3.4X + 7.8
###Markdown
TensorFlow + Keras Now that we've implemented linear regression with Numpy, let's do the same with TensorFlow + Keras.
###Code
%tensorflow_version 2.x
import tensorflow as tf
# Set seed for reproducibility
tf.random.set_seed(SEED)
###Output
_____no_output_____
###Markdown
Data Split data When we're working with TensorFlow we normally use the scikit learn's [splitting functions](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.model_selection) to split our data.
###Code
from sklearn.model_selection import train_test_split
TRAIN_SIZE = 0.7
VAL_SIZE = 0.15
TEST_SIZE = 0.15
SHUFFLE = True
def train_val_test_split(X, y, val_size, test_size, shuffle):
"""Split data into train/val/test datasets.
"""
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=test_size, shuffle=shuffle)
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=val_size, shuffle=shuffle)
return X_train, X_val, X_test, y_train, y_val, y_test
###Output
_____no_output_____
###Markdown
The `train_val_test_split` function essentially splits our data twice. First, we separate out the test set. And then we separate the remaining other set into train and validation sets.
###Code
# Create data splits
X_train, X_val, X_test, y_train, y_val, y_test = train_val_test_split(
X, y, val_size=VAL_SIZE, test_size=TEST_SIZE, shuffle=SHUFFLE)
print (f"X_train: {X_train.shape}, y_train: {y_train.shape}")
print (f"X_val: {X_val.shape}, y_test: {y_val.shape}")
print (f"X_test: {X_test.shape}, y_test: {y_test.shape}")
###Output
X_train: (35, 1), y_train: (35, 1)
X_val: (7, 1), y_test: (7, 1)
X_test: (8, 1), y_test: (8, 1)
###Markdown
Standardize data We can also use scikit learn to do [preprocessing and normalization](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing).
###Code
from sklearn.preprocessing import StandardScaler
# Standardize the data (mean=0, std=1) using training data
X_scaler = StandardScaler().fit(X_train)
y_scaler = StandardScaler().fit(y_train)
# Apply scaler on training and test data
X_train = X_scaler.transform(X_train)
y_train = y_scaler.transform(y_train).ravel().reshape(-1, 1)
X_val = X_scaler.transform(X_val)
y_val = y_scaler.transform(y_val).ravel().reshape(-1, 1)
X_test = X_scaler.transform(X_test)
y_test = y_scaler.transform(y_test).ravel().reshape(-1, 1)
# Check (means should be ~0 and std should be ~1)
print (f"X_train: mean: {np.mean(X_train, axis=0)[0]:.1f}, std: {np.std(X_train, axis=0)[0]:.1f}")
print (f"y_train: mean: {np.mean(y_train, axis=0)[0]:.1f}, std: {np.std(y_train, axis=0)[0]:.1f}")
print (f"X_val: mean: {np.mean(X_val, axis=0)[0]:.1f}, std: {np.std(X_val, axis=0)[0]:.1f}")
print (f"y_val: mean: {np.mean(y_val, axis=0)[0]:.1f}, std: {np.std(y_val, axis=0)[0]:.1f}")
print (f"X_test: mean: {np.mean(X_test, axis=0)[0]:.1f}, std: {np.std(X_test, axis=0)[0]:.1f}")
print (f"y_test: mean: {np.mean(y_test, axis=0)[0]:.1f}, std: {np.std(y_test, axis=0)[0]:.1f}")
###Output
X_train: mean: 0.0, std: 1.0
y_train: mean: -0.0, std: 1.0
X_val: mean: -0.8, std: 0.7
y_val: mean: -0.7, std: 0.6
X_test: mean: -0.0, std: 1.1
y_test: mean: 0.0, std: 0.9
###Markdown
Weights We will be using [Dense layers](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Dense) in our linear regression implementation. The layer applies an activation function on the dot product of the layer's inputs and its weights. $ z = \text{activation}(XW)$
###Code
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
x = Input(shape=(INPUT_DIM,))
fc = Dense(units=OUTPUT_DIM, activation='linear')
z = fc(x)
W, b = fc.weights
print (f"z {z.shape} = x {x.shape} · W {W.shape} + b {b.shape}")
###Output
z (None, 1) = x (None, 1) · W (1, 1) + b (1,)
###Markdown
Modeling Model Our goal is to learn a linear model $\hat{y}$ that models $y$ given $X$. $\hat{y} = XW + b$* $\hat{y}$ = predictions | $\in \mathbb{R}^{NX1}$ ($N$ is the number of samples)* $X$ = inputs | $\in \mathbb{R}^{NXD}$ ($D$ is the number of features)* $W$ = weights | $\in \mathbb{R}^{DX1}$ * $b$ = bias | $\in \mathbb{R}^{1}$
###Code
from tensorflow.keras.models import Model
from tensorflow.keras.utils import plot_model
class LinearRegression(Model):
def __init__(self, output_dim):
super(LinearRegression, self).__init__(name='linear_regression')
self.fc1 = Dense(units=output_dim, activation='linear', name='W')
def call(self, x_in, training=False):
y_pred = self.fc1(x_in)
return y_pred
def summary(self, input_shape):
x_in = Input(shape=input_shape, name='X')
summary = Model(inputs=x_in, outputs=self.call(x_in), name=self.name)
summary.summary() # parameter summary
print ("\n\nWEIGHTS:") # weights summary
for layer in self.layers:
print ("_"*50)
print (layer.name)
for w in layer.weights:
print (f"\t{w.name} → {w.shape}")
print ("\n\nFORWARD PASS:")
return plot_model(summary, show_shapes=True) # forward pass
# Initialize model
model = LinearRegression(output_dim=OUTPUT_DIM)
# Summary
model.summary(input_shape=(INPUT_DIM,))
###Output
Model: "linear_regression"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
X (InputLayer) [(None, 1)] 0
_________________________________________________________________
W (Dense) (None, 1) 2
=================================================================
Total params: 2
Trainable params: 2
Non-trainable params: 0
_________________________________________________________________
###Markdown
Optimizer When we implemented linear regression with just NumPy, we used batch gradient descent to update our weights. But there are actually many different [gradient descent optimization algorithms](https://ruder.io/optimizing-gradient-descent/) to choose from and it depends on the situation. However, the [ADAM optimizer](https://ruder.io/optimizing-gradient-descent/index.htmlgradientdescentoptimizationalgorithms/adam) has become a standard algorithm for most cases.
###Code
from tensorflow.keras.optimizers import Adam
# Optimizer
optimizer = Adam(lr=0.1)
###Output
_____no_output_____
###Markdown
Loss
###Code
from tensorflow.keras.losses import MeanSquaredError
mse = MeanSquaredError()
loss = mse([0., 0., 1., 1.], [1., 1., 1., 0.])
print('Loss: ', loss.numpy())
###Output
Loss: 0.75
###Markdown
Metrics
###Code
from tensorflow.keras.metrics import MeanAbsolutePercentageError
metric = MeanAbsolutePercentageError()
metric.update_state([0.5, 0.5, 1., 1.], [0.5, 1., 1., 0.])
print('Final result: ', metric.result().numpy())
###Output
Final result: 50.0
###Markdown
Training Here is the full list of options for [optimizer](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/optimizers), [loss](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/losses) and [metrics](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/metrics).
###Code
# Compile
model.compile(optimizer=Adam(lr=LEARNING_RATE),
loss=MeanSquaredError(),
metrics=[MeanAbsolutePercentageError()])
###Output
_____no_output_____
###Markdown
When we implemented linear regression from scratch, we used batch gradient descent to update our weights. This means that we calculated the gradients using the entire training dataset. We also could've updated our weights using stochastic gradient descent (SGD) where we pass in one training example at a time. The current standard is mini-batch gradient descent, which strikes a balance between batch and stochastic GD, where we update the weights using a mini-batch of n (`BATCH_SIZE`) samples.
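As a rough sketch of what one epoch of mini-batching looks like (BATCH_SIZE is set in the next cell; `model.fit` handles this loop for us internally):

```python
# conceptual sketch only -- model.fit() below does the equivalent work
num_updates = 0
for i in range(0, len(X_train), BATCH_SIZE):
    X_batch = X_train[i:i + BATCH_SIZE]   # one mini-batch of inputs
    y_batch = y_train[i:i + BATCH_SIZE]   # and its matching targets
    num_updates += 1                      # one gradient update per mini-batch
print (num_updates)                       # number of updates per epoch
```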
###Code
BATCH_SIZE = 10
# Training
history = model.fit(x=X_train,
y=y_train,
validation_data=(X_val, y_val),
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
shuffle=False,
verbose=1)
# Training metrics
print (f"metrics: {list(history.history.keys())}")
print (f"final val_loss: {history.history['val_loss'][-1]:.2f}")
###Output
metrics: ['loss', 'mean_absolute_percentage_error', 'val_loss', 'val_mean_absolute_percentage_error']
final val_loss: 0.02
###Markdown
Evaluation
###Code
# Predictions
pred_train = model.predict(X_train)
pred_test = model.predict(X_test)
# Train metrics
train_mse = tf.keras.metrics.MeanSquaredError()
train_mse.update_state(y_train, pred_train)
print(f'train_mse: {train_mse.result().numpy(): .2f}')
# Test metrics
test_mse = tf.keras.metrics.MeanSquaredError()
test_mse.update_state(y_test, pred_test)
print(f'test_mse: {test_mse.result().numpy(): .2f}')
###Output
test_mse: 0.01
###Markdown
Since we only have one feature, it's easy to visually inspect the model.
###Code
# Figure size
plt.figure(figsize=(15,5))
# Plot train data
plt.subplot(1, 2, 1)
plt.title("Train")
plt.scatter(X_train, y_train, label='y_train')
plt.plot(X_train, pred_train, color='red', linewidth=1, linestyle='-', label='model')
plt.legend(loc='lower right')
# Plot test data
plt.subplot(1, 2, 2)
plt.title("Test")
plt.scatter(X_test, y_test, label='y_test')
plt.plot(X_test, pred_test, color='red', linewidth=1, linestyle='-', label='model')
plt.legend(loc='lower right')
# Show plots
plt.show()
###Output
_____no_output_____
###Markdown
Inference After training a model, we can use it to predict on new data.
###Code
# Feed in your own inputs
sample_indices = [10, 15, 25]
X_infer = np.array(sample_indices, dtype=np.float32)
standardized_X_infer = X_scaler.transform(X_infer.reshape(-1, 1))
###Output
_____no_output_____
###Markdown
Recall that we need to unstandardize our predictions.$ \hat{y}_{scaled} = \frac{\hat{y} - \mu_{\hat{y}}}{\sigma_{\hat{y}}} $$ \hat{y} = \hat{y}_{scaled} * \sigma_{\hat{y}} + \mu_{\hat{y}} $
###Code
# Unstandardize predictions
pred_infer = model.predict(standardized_X_infer) * np.sqrt(y_scaler.var_) + y_scaler.mean_
for i, index in enumerate(sample_indices):
print (f"{df.iloc[index]['y']:.2f} (actual) → {pred_infer[i][0]:.2f} (predicted)")
###Output
35.73 (actual) → 43.43 (predicted)
59.34 (actual) → 60.42 (predicted)
97.04 (actual) → 94.40 (predicted)
###Markdown
Interpretability Linear regression offers the great advantage of being highly interpretable. Each feature has a coefficient which signifies its importance/impact on the output variable y. We can interpret our coefficient as follows: by increasing X by 1 unit, we increase y by $W$ units (about 3.4 in this run).
###Code
# Unstandardize coefficients
W = model.layers[0].get_weights()[0][0][0]
b = model.layers[0].get_weights()[1][0]
W_unscaled = W * (y_scaler.scale_/X_scaler.scale_)
b_unscaled = b * y_scaler.scale_ + y_scaler.mean_ - np.sum(W_unscaled*X_scaler.mean_)
print ("[actual] y = 3.5X + noise")
print (f"[model] y_hat = {W_unscaled[0]:.1f}X + {b_unscaled[0]:.1f}")
###Output
[actual] y = 3.5X + noise
[model] y_hat = 3.4X + 9.4
###Markdown
Regularization Regularization helps decrease overfitting. Below is L2 regularization (ridge regression). There are many forms of regularization but they all work to reduce overfitting in our models. With L2 regularization, we are penalizing the weights with large magnitudes by decaying them. Having certain weights with high magnitudes will lead to preferential bias with the inputs and we want the model to work with all the inputs and not just a select few. There are also other types of regularization like L1 (lasso regression), which is useful for creating sparse models where some feature coefficients are zeroed out, or elastic net, which combines L1 and L2 penalties. **Note**: Regularization is not just for linear regression. You can use it to regularize any model's weights including the ones we will look at in future lessons. $ J(\theta) = \frac{1}{2}\sum_{i}(X_iW - y_i)^2 + \frac{\lambda}{2}W^TW$$ \frac{\partial{J}}{\partial{W}} = X (\hat{y} - y) + \lambda W $$W = W - \alpha\frac{\partial{J}}{\partial{W}}$* $\lambda$ is the regularization coefficient
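Before the Keras version, here is a minimal NumPy sketch of how the penalty changes the from-scratch update (one common convention; the exact constant in front of $\lambda$ depends on how the penalty term is written):

```python
# L2-regularized batch gradient descent step (sketch; b is typically not penalized)
def l2_gd_step(X, y, W, b, lr=1e-1, l2_lambda=1e-2):
    N = len(y)
    y_pred = np.dot(X, W) + b
    dW = -(2/N) * np.dot(X.T, (y - y_pred)) + 2 * l2_lambda * W  # penalty gradient
    db = -(2/N) * np.sum(y - y_pred)
    return W - lr * dW, b - lr * db
```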
###Code
from tensorflow.keras.regularizers import l2
L2_LAMBDA = 1e-2
class L2LinearRegression(Model):
def __init__(self, l2_lambda, output_dim):
super(L2LinearRegression, self).__init__(name='l2_linear_regression')
self.fc1 = Dense(units=output_dim, activation='linear',
kernel_regularizer=l2(l=l2_lambda), name='W')
def call(self, x_in, training=False):
y_pred = self.fc1(x_in)
return y_pred
def summary(self, input_shape):
x_in = Input(shape=input_shape, name='X')
summary = Model(inputs=x_in, outputs=self.call(x_in), name=self.name)
summary.summary() # parameter summary
print ("\n\nWEIGHTS:") # weights summary
for layer in self.layers:
print ("_"*50)
print (layer.name)
for w in layer.weights:
print (f"\t{w.name} → {w.shape}")
print ("\n\nFORWARD PASS:")
return plot_model(summary, show_shapes=True) # forward pass
# Initialize model
model = L2LinearRegression(l2_lambda=L2_LAMBDA, output_dim=OUTPUT_DIM)
# Summary
model.summary(input_shape=(INPUT_DIM,))
# Compile
model.compile(optimizer=Adam(lr=LEARNING_RATE),
loss=MeanSquaredError(),
metrics=[MeanAbsolutePercentageError()])
# Training
model.fit(x=X_train,
y=y_train,
validation_data=(X_val, y_val),
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
shuffle=SHUFFLE,
verbose=1)
# Predictions
pred_train = model.predict(X_train)
pred_test = model.predict(X_test)
# Train metrics
train_mse = tf.keras.metrics.MeanSquaredError()
train_mse.update_state(y_train, pred_train)
print(f'train_mse: {train_mse.result().numpy(): .2f}')
# Test metrics
test_mse = tf.keras.metrics.MeanSquaredError()
test_mse.update_state(y_test, pred_test)
print(f'test_mse: {test_mse.result().numpy(): .2f}')
###Output
test_mse: 0.01
###Markdown
Linear Regression In this lesson we will learn about linear regression. We will understand the basic math behind it, implement it in Python, and then look at ways of interpreting the linear model. Overview Our goal is to learn a linear model $\hat{y}$ that models $y$ given $X$. $\hat{y} = XW + b$* $\hat{y}$ = predictions | $\in \mathbb{R}^{NX1}$ ($N$ is the number of samples)* $X$ = inputs | $\in \mathbb{R}^{NXD}$ ($D$ is the number of features)* $W$ = weights | $\in \mathbb{R}^{DX1}$ * $b$ = bias | $\in \mathbb{R}^{1}$ * **Objective:** Use inputs $X$ to predict the output $\hat{y}$ using a linear model. The model will be a line of best fit that minimizes the distance between the predicted (model's output) and target (ground truth) values. Training data $(X, y)$ is used to train the model and learn the weights $W$ using gradient descent.* **Advantages:** * Computationally simple. * Highly interpretable. * Can account for continuous and categorical features.* **Disadvantages:** * The model will perform well only when the data is linearly separable (for classification). * Usually not used for classification and only for regression.* **Miscellaneous:** You can also use linear regression for binary classification tasks where if the predicted continuous value is above a threshold, it belongs to a certain class. But we will cover better techniques for classification in future lessons and will focus on linear regression for continuous regression tasks only. Training 1. Randomly initialize the model's weights $W$ (we'll cover more effective initialization strategies in future lessons).2. Feed inputs $X$ into the model to receive the predictions $\hat{y}$. * $\hat{y} = XW + b$3. Compare the predictions $\hat{y}$ with the actual target values $y$ using the objective (cost) function to determine the loss $J$. A common objective function for linear regression is mean squared error (MSE). This function calculates the difference between the predicted and target values and squares it. * $J(\theta) = MSE = \frac{1}{N} \sum_i (y_i - \hat{y}_i)^2 $ * ${y}$ = ground truth | $\in \mathbb{R}^{NX1}$ * $\hat{y}$ = predictions | $\in \mathbb{R}^{NX1}$4. Calculate the gradient of loss $J(\theta)$ w.r.t. the model weights. * $J(\theta) = \frac{1}{N} \sum_i (y_i - \hat{y}_i)^2 = \frac{1}{N}\sum_i (y_i - X_iW)^2 $ * $\frac{\partial{J}}{\partial{W}} = -\frac{2}{N} \sum_i (y_i - X_iW) X_i = -\frac{2}{N} \sum_i (y_i - \hat{y}_i) X_i$ * $\frac{\partial{J}}{\partial{b}} = -\frac{2}{N} \sum_i (y_i - X_iW)1 = -\frac{2}{N} \sum_i (y_i - \hat{y}_i)1$5. Update the weights $W$ using a small learning rate $\alpha$. The simplified intuition is that the gradient tells you the direction for how to increase something so subtracting it will help you go the other way since we want to decrease loss $J(\theta)$: * $W = W - \alpha\frac{\partial{J}}{\partial{W}}$ * $b = b - \alpha\frac{\partial{J}}{\partial{b}}$6. Repeat steps 2 - 5 to minimize the loss and train the model. Set up
###Code
# Use TensorFlow 2.x
%tensorflow_version 2.x
import os
import numpy as np
import tensorflow as tf
# Arguments
SEED = 1234
SHUFFLE = True
NUM_SAMPLES = 50
TRAIN_SIZE = 0.7
VAL_SIZE = 0.15
TEST_SIZE = 0.15
NUM_EPOCHS = 100
BATCH_SIZE = 10
INPUT_DIM = 1
HIDDEN_DIM = 1
LEARNING_RATE = 1e-1
# Set seed for reproducibility
np.random.seed(SEED)
tf.random.set_seed(SEED)
###Output
_____no_output_____
###Markdown
Data We're going to create some simple dummy data to apply linear regression on.
###Code
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Components We're going to create some dummy data to train our linear regression model on. We're going to create roughly linear data (`y = 3.5X + 10`) with some random noise so the points don't all align in a straight line. Our goal is to have the model converge to a similar linear equation (there will be slight variance since we added some noise).
###Code
# Generate synthetic data
def generate_data(num_samples):
"""Generate dummy data for linear regression."""
X = np.array(range(num_samples))
random_noise = np.random.uniform(-10,20,size=num_samples)
y = 3.5*X + random_noise # add some noise
return X, y
###Output
_____no_output_____
###Markdown
Operations Now let's load the data onto a dataframe and visualize it.
###Code
# Generate random (linear) data
X, y = generate_data(num_samples=NUM_SAMPLES)
data = np.vstack([X, y]).T
df = pd.DataFrame(data, columns=['X', 'y'])
X = df[['X']].values
y = df[['y']].values
df.head()
# Scatter plot
plt.title("Generated data")
plt.scatter(x=df['X'], y=df['y'])
plt.show()
###Output
_____no_output_____
###Markdown
Split data
###Code
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
Components Since our task is a regression task, we will randomly split our dataset into **three** sets: train, validation and test data splits.* train: used to train our model.* val : used to validate our model's performance during training.* test: used to do an evaluation of our fully trained model. Splitting the data for classification tasks is a bit different in that we want similar class distributions in each data split. We'll see this in action in the next lesson (a quick preview is sketched below).
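As a quick preview, scikit-learn's train_test_split accepts a stratify argument; a sketch on hypothetical classification labels (derived here only for illustration):

```python
# hypothetical labels, just to show the stratify argument on a classification-style split
y_class = (y > np.median(y)).astype(int).ravel()
X_strat_tr, X_strat_te, y_strat_tr, y_strat_te = train_test_split(
    X, y_class, test_size=0.15, stratify=y_class, shuffle=True)
```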
###Code
def train_val_test_split(X, y, val_size, test_size, shuffle):
"""Split data into train/val/test datasets.
"""
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=test_size, shuffle=shuffle)
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=val_size, shuffle=shuffle)
return X_train, X_val, X_test, y_train, y_val, y_test
###Output
_____no_output_____
###Markdown
Operations
###Code
# Create data splits
X_train, X_val, X_test, y_train, y_val, y_test = train_val_test_split(
X, y, val_size=VAL_SIZE, test_size=TEST_SIZE, shuffle=SHUFFLE)
print (f"X_train: {X_train.shape}, y_train: {y_train.shape}")
print (f"X_val: {X_val.shape}, y_test: {y_val.shape}")
print (f"X_test: {X_test.shape}, y_test: {y_test.shape}")
print (f"X_train[0]: {X_train[0]}")
print (f"y_train[0]: {y_train[0]}")
###Output
X_train: (35, 1), y_train: (35, 1)
X_val: (7, 1), y_test: (7, 1)
X_test: (8, 1), y_test: (8, 1)
X_train[0]: [12.]
y_train[0]: [52.50388806]
###Markdown
Standardize data We need to standardize our data (zero mean and unit variance) in order to optimize quickly. $z = \frac{x_i - \mu}{\sigma}$* $z$ = standardized value* $x_i$ = inputs* $\mu$ = mean* $\sigma$ = standard deviation
###Code
from sklearn.preprocessing import StandardScaler
# Standardize the data (mean=0, std=1) using training data
X_scaler = StandardScaler().fit(X_train)
y_scaler = StandardScaler().fit(y_train)
# Apply scaler on training and test data
standardized_X_train = X_scaler.transform(X_train)
standardized_y_train = y_scaler.transform(y_train).ravel()
standardized_X_val = X_scaler.transform(X_val)
standardized_y_val = y_scaler.transform(y_val).ravel()
standardized_X_test = X_scaler.transform(X_test)
standardized_y_test = y_scaler.transform(y_test).ravel()
# Check (means should be ~0 and std should be ~1)
print (f"standardized_X_train: mean: {np.mean(standardized_X_train, axis=0)[0]}, std: {np.std(standardized_X_train, axis=0)[0]}")
print (f"standardized_y_train: mean: {np.mean(standardized_y_train, axis=0)}, std: {np.std(standardized_y_train, axis=0)}")
print (f"standardized_X_val: mean: {np.mean(standardized_X_val, axis=0)[0]}, std: {np.std(standardized_X_val, axis=0)[0]}")
print (f"standardized_y_val: mean: {np.mean(standardized_y_val, axis=0)}, std: {np.std(standardized_y_val, axis=0)}")
print (f"standardized_X_test: mean: {np.mean(standardized_X_test, axis=0)[0]}, std: {np.std(standardized_X_test, axis=0)[0]}")
print (f"standardized_y_test: mean: {np.mean(standardized_y_test, axis=0)}, std: {np.std(standardized_y_test, axis=0)}")
###Output
standardized_X_train: mean: -1.2688263138573217e-16, std: 1.0
standardized_y_train: mean: 8.961085841617335e-17, std: 1.0
standardized_X_val: mean: 0.455275148918323, std: 0.8941749819083381
standardized_y_val: mean: 0.48468545395300805, std: 0.9785039908565315
standardized_X_test: mean: -0.44282621906508784, std: 1.0652341142978086
standardized_y_test: mean: -0.4940327711822252, std: 1.0650188983159736
###Markdown
From scratch Before we use TensorFlow 2.0 + Keras, we will implement linear regression from scratch using NumPy so we can:
1. Absorb the fundamental concepts by implementing from scratch
2. Appreciate the level of abstraction TensorFlow provides

It's normal to find the math and code in this section slightly complex. You can still read each of the steps to build intuition for when we implement this using TensorFlow + Keras.
###Code
standardized_y_train = standardized_y_train.reshape(-1, 1)
print (f"X: {standardized_X_train.shape}")
print (f"y: {standardized_y_train.shape}")
###Output
X: (35, 1)
y: (35, 1)
###Markdown
Our goal is to learn a linear model $\hat{y}$ that models $y$ given $X$. $\hat{y} = XW + b$* $\hat{y}$ = predictions | $\in \mathbb{R}^{NX1}$ ($N$ is the number of samples)* $X$ = inputs | $\in \mathbb{R}^{NXD}$ ($D$ is the number of features)* $W$ = weights | $\in \mathbb{R}^{DX1}$ * $b$ = bias | $\in \mathbb{R}^{1}$ 1. Randomly initialize the model's weights $W$.
###Code
# Initialize random weights
W = 0.01 * np.random.randn(INPUT_DIM, 1)
b = np.zeros((1, 1))
print (f"W: {W.shape}")
print (f"b: {b.shape}")
###Output
W: (1, 1)
b: (1, 1)
###Markdown
2. Feed inputs $X$ into the model to receive the predictions $\hat{y}$. * $\hat{y} = XW + b$
###Code
# Forward pass [NX1] · [1X1] = [NX1]
y_hat = np.dot(standardized_X_train, W) + b
print (f"y_hat: {y_hat.shape}")
###Output
y_hat: (35, 1)
###Markdown
3. Compare the predictions $\hat{y}$ with the actual target values $y$ using the objective (cost) function to determine the loss $J$. A common objective function for linear regression is mean squared error (MSE). This function calculates the difference between the predicted and target values and squares it. * $J(\theta) = MSE = \frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2 $ * ${y}$ = ground truth | $\in \mathbb{R}^{NX1}$ * $\hat{y}$ = predictions | $\in \mathbb{R}^{NX1}$
###Code
# Loss
N = len(standardized_y_train)
loss = (1/N) * np.sum((standardized_y_train - y_hat)**2)
print (f"loss: {loss:.2f}")
###Output
loss: 1.02
###Markdown
4. Calculate the gradient of loss $J(\theta)$ w.r.t. the model weights. * $J(\theta) = \frac{1}{N} \sum_i (y_i - \hat{y}_i)^2 = \frac{1}{N}\sum_i (y_i - X_iW)^2 $ * $\frac{\partial{J}}{\partial{W}} = -\frac{2}{N} \sum_i (y_i - X_iW) X_i = -\frac{2}{N} \sum_i (y_i - \hat{y}_i) X_i$ * $\frac{\partial{J}}{\partial{b}} = -\frac{2}{N} \sum_i (y_i - X_iW)1 = -\frac{2}{N} \sum_i (y_i - \hat{y}_i)1$
###Code
# Backpropagation
dW = -(2/N) * np.sum((standardized_y_train - y_hat) * standardized_X_train)
db = -(2/N) * np.sum((standardized_y_train - y_hat) * 1)
###Output
_____no_output_____
###Markdown
5. Update the weights $W$ using a small learning rate $\alpha$. The simplified intuition is that the gradient tells you the direction for how to increase something so subtracting it will help you go the other way since we want to decrease loss $J(\theta)$: * $W = W - \alpha\frac{\partial{J}}{\partial{W}}$ * $b = b - \alpha\frac{\partial{J}}{\partial{b}}$
###Code
# Update weights
W += -LEARNING_RATE * dW
b += -LEARNING_RATE * db
###Output
_____no_output_____
###Markdown
6. Repeat steps 2 - 5 to minimize the loss and train the model.
###Code
# Initialize random weights
W = 0.01 * np.random.randn(INPUT_DIM, 1)
b = np.zeros((1, ))
# Training loop
for epoch_num in range(NUM_EPOCHS):
# Forward pass [NX1] · [1X1] = [NX1]
y_hat = np.dot(standardized_X_train, W) + b
# Loss
loss = (1/len(standardized_y_train)) * np.sum((standardized_y_train - y_hat)**2)
# show progress
if epoch_num%10 == 0:
print (f"Epoch: {epoch_num}, loss: {loss:.3f}")
# Backpropagation
dW = -(2/N) * np.sum((standardized_y_train - y_hat) * standardized_X_train)
db = -(2/N) * np.sum((standardized_y_train - y_hat) * 1)
# Update weights
W += -LEARNING_RATE * dW
b += -LEARNING_RATE * db
# Predictions
pred_train = W*standardized_X_train + b
pred_test = W*standardized_X_test + b
# Train and test MSE
train_mse = np.mean((standardized_y_train - pred_train) ** 2)
test_mse = np.mean((standardized_y_test - pred_test) ** 2)
print (f"train_MSE: {train_mse:.2f}, test_MSE: {test_mse:.2f}")
# Figure size
plt.figure(figsize=(15,5))
# Plot train data
plt.subplot(1, 2, 1)
plt.title("Train")
plt.scatter(standardized_X_train, standardized_y_train, label='y_train')
plt.plot(standardized_X_train, pred_train, color='red', linewidth=1, linestyle='-', label='model')
plt.legend(loc='lower right')
# Plot test data
plt.subplot(1, 2, 2)
plt.title("Test")
plt.scatter(standardized_X_test, standardized_y_test, label='y_test')
plt.plot(standardized_X_test, pred_test, color='red', linewidth=1, linestyle='-', label='model')
plt.legend(loc='lower right')
# Show plots
plt.show()
###Output
_____no_output_____
###Markdown
Since we standardized our inputs and outputs, our weights were fit to those standardized values. So we need to unstandardize our weights so we can compare it to our true weight (3.5).Note that both X and y were standardized.$\hat{y}_{scaled} = b_{scaled} + \sum_{j=1}^{k}W_{{scaled}_j}x_{{scaled}_j}$* $y_{scaled} = \frac{\hat{y} - \bar{y}}{\sigma_y}$* $x_{scaled} = \frac{x_j - \bar{x}_j}{\sigma_j}$$\frac{\hat{y} - \bar{y}}{\sigma_y} = b_{scaled} + \sum_{j=1}^{k}W_{{scaled}_j}\frac{x_j - \bar{x}_j}{\sigma_j}$$ \hat{y}_{scaled} = \frac{\hat{y}_{unscaled} - \bar{y}}{\sigma_y} = {b_{scaled}} + \sum_{j=1}^{k} {W}_{{scaled}_j} (\frac{x_j - \bar{x}_j}{\sigma_j}) $$\hat{y}_{unscaled} = b_{scaled}\sigma_y + \bar{y} - \sum_{j=1}^{k} {W}_{{scaled}_j}(\frac{\sigma_y}{\sigma_j})\bar{x}_j + \sum_{j=1}^{k}{W}_{{scaled}_j}(\frac{\sigma_y}{\sigma_j})x_j $In the expression above, we can see the expression $\hat{y}_{unscaled} = W_{unscaled}x + b_{unscaled} $ where* $W_{unscaled} = \sum_{j=1}^{k}{W}_j(\frac{\sigma_y}{\sigma_j}) $* $b_{unscaled} = b_{scaled}\sigma_y + \bar{y} - \sum_{j=1}^{k} {W}_j(\frac{\sigma_y}{\sigma_j})\bar{x}_j$
###Code
# Unscaled weights
W_unscaled = W * (y_scaler.scale_/X_scaler.scale_)
b_unscaled = b * y_scaler.scale_ + y_scaler.mean_ - np.sum(W_unscaled*X_scaler.mean_)
print ("[actual] y = 3.5X + noise")
print (f"[model] y_hat = {W_unscaled[0][0]:.1f}X + {b_unscaled[0]:.1f}")
###Output
[actual] y = 3.5X + noise
[model] y_hat = 3.4X + 9.5
###Markdown
Now let's implement linear regression with TensorFlow + Keras. TensorFlow + Keras We will be using [Dense layers](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Dense) in our MLP implementation. The layer applies an activation function on the dot product of the layer's inputs and its weights.$ z = \text{activation}(XW)$
###Code
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
x = Input(shape=(INPUT_DIM,))
fc = Dense(units=HIDDEN_DIM, activation='linear')
z = fc(x)
W, b = fc.weights
print (f"z {z.shape} = x {x.shape} · W {W.shape} + b {b.shape}")
###Output
z (None, 1) = x (None, 1) · W (1, 1) + b (1,)
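As a quick sanity check of the statement above, the sketch below (an added example, assuming `fc`, `W`, `b` and `INPUT_DIM` from the cell above are in scope) feeds a small random batch through the layer and confirms the output equals the manual dot product with its weights.

```python
# The Dense layer's output should equal X·W + b for a linear activation
x_batch = np.random.randn(4, INPUT_DIM).astype(np.float32)  # hypothetical batch
manual = np.dot(x_batch, W.numpy()) + b.numpy()
layer_out = fc(x_batch).numpy()
print(np.allclose(manual, layer_out, atol=1e-6))  # expected: True
```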
###Markdown
Model
###Code
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
from tensorflow.keras.losses import MeanSquaredError
from tensorflow.keras.metrics import MeanAbsolutePercentageError
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
# Linear regression
class LinearRegression(Model):
def __init__(self, hidden_dim):
super(LinearRegression, self).__init__()
self.fc1 = Dense(units=hidden_dim, activation='linear')
def call(self, x_in, training=False):
"""Forward pass."""
y_pred = self.fc1(x_in)
return y_pred
def sample(self, input_shape):
x_in = Input(shape=input_shape)
return Model(inputs=x_in, outputs=self.call(x_in)).summary()
# Initialize the model
model = LinearRegression(hidden_dim=HIDDEN_DIM)
model.sample(input_shape=(INPUT_DIM,))
###Output
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 1)] 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 2
=================================================================
Total params: 2
Trainable params: 2
Non-trainable params: 0
_________________________________________________________________
###Markdown
When we implemented linear regression from scratch, we used batch gradient descent to update our weights. But there are actually many different [gradient descent optimization algorithms](https://ruder.io/optimizing-gradient-descent/) to choose from and it depends on the situation. However, the [ADAM optimizer](https://ruder.io/optimizing-gradient-descent/index.htmlgradientdescentoptimizationalgorithms/adam) has become a standard algorithm for most cases.
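For intuition, here is a minimal sketch of the Adam update rule itself on a toy 1-D loss (an added illustration, not the notebook's code and not Keras' internal implementation): it keeps running estimates of the gradient's first and second moments and scales each step by them.

```python
# Adam update rule sketch on the toy loss f(w) = (w - 3)^2
import numpy as np

alpha, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8
w, m, v = 0.0, 0.0, 0.0
for t in range(1, 201):
    g = 2 * (w - 3)                        # gradient of the toy loss
    m = beta1 * m + (1 - beta1) * g        # first moment (moving average of g)
    v = beta2 * v + (1 - beta2) * g ** 2   # second moment (moving average of g^2)
    m_hat = m / (1 - beta1 ** t)           # bias corrections
    v_hat = v / (1 - beta2 ** t)
    w -= alpha * m_hat / (np.sqrt(v_hat) + eps)
print(f"w = {w:.2f}")                      # should end up close to the minimum at 3
```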
###Code
# Compile
model.compile(optimizer=Adam(lr=LEARNING_RATE),
loss=MeanSquaredError(),
metrics=[MeanAbsolutePercentageError()])
###Output
_____no_output_____
###Markdown
Here is the full list of options for [optimizer](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/optimizers), [loss](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/losses) and [metrics](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/metrics). Training When we implemented linear regression from scratch, we used batch gradient descent to update our weights. This means that we calculated the gradients using the entire training dataset. We also could've updated our weights using stochastic gradient descent (SGD) where we pass in one training example at a time. The current standard is mini-batch gradient descent, which strikes a balance between batch and stochastic GD, where we update the weights using a mini-batch of n (`BATCH_SIZE`) samples.
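To make the difference concrete, here is a rough NumPy sketch of the mini-batch loop that `model.fit` runs for us (an added illustration using plain gradient descent rather than Adam; it assumes the standardized arrays and constants from the sections above are in scope, and the names `W_mb`, `b_mb` are new).

```python
# Manual mini-batch gradient descent sketch (what fit() automates)
n = len(standardized_X_train)
W_mb = 0.01 * np.random.randn(INPUT_DIM, 1)
b_mb = np.zeros((1,))
for epoch in range(NUM_EPOCHS):
    indices = np.random.permutation(n)               # shuffle each epoch
    for start in range(0, n, BATCH_SIZE):
        batch = indices[start:start + BATCH_SIZE]    # BATCH_SIZE rows at a time
        X_b = standardized_X_train[batch]
        y_b = standardized_y_train[batch].reshape(-1, 1)  # ensure column vector
        y_hat_b = np.dot(X_b, W_mb) + b_mb
        dW_b = -(2 / len(batch)) * np.sum((y_b - y_hat_b) * X_b)
        db_b = -(2 / len(batch)) * np.sum(y_b - y_hat_b)
        W_mb += -LEARNING_RATE * dW_b
        b_mb += -LEARNING_RATE * db_b
print(W_mb.ravel(), b_mb)                            # parameters fit on standardized data
```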
###Code
# Training
model.fit(x=standardized_X_train,
y=standardized_y_train,
validation_data=(standardized_X_val, standardized_y_val),
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
shuffle=False,
verbose=1)
###Output
Train on 35 samples, validate on 7 samples
Epoch 1/100
35/35 [==============================] - 1s 29ms/sample - loss: 2.2303 - mean_absolute_percentage_error: 189.2383 - val_loss: 1.8631 - val_mean_absolute_percentage_error: 110.1029
Epoch 2/100
35/35 [==============================] - 0s 604us/sample - loss: 1.1683 - mean_absolute_percentage_error: 129.5676 - val_loss: 0.8827 - val_mean_absolute_percentage_error: 85.6471
Epoch 3/100
35/35 [==============================] - 0s 650us/sample - loss: 0.5139 - mean_absolute_percentage_error: 72.2302 - val_loss: 0.3088 - val_mean_absolute_percentage_error: 63.4733
Epoch 4/100
35/35 [==============================] - 0s 675us/sample - loss: 0.1775 - mean_absolute_percentage_error: 58.3600 - val_loss: 0.0744 - val_mean_absolute_percentage_error: 43.0542
Epoch 5/100
35/35 [==============================] - 0s 635us/sample - loss: 0.0590 - mean_absolute_percentage_error: 55.3533 - val_loss: 0.0241 - val_mean_absolute_percentage_error: 29.1076
Epoch 6/100
35/35 [==============================] - 0s 724us/sample - loss: 0.0501 - mean_absolute_percentage_error: 52.8501 - val_loss: 0.0298 - val_mean_absolute_percentage_error: 21.7504
Epoch 7/100
35/35 [==============================] - 0s 718us/sample - loss: 0.0740 - mean_absolute_percentage_error: 54.2517 - val_loss: 0.0345 - val_mean_absolute_percentage_error: 17.8960
Epoch 8/100
35/35 [==============================] - 0s 685us/sample - loss: 0.0897 - mean_absolute_percentage_error: 51.1155 - val_loss: 0.0292 - val_mean_absolute_percentage_error: 17.8972
Epoch 9/100
35/35 [==============================] - 0s 693us/sample - loss: 0.0826 - mean_absolute_percentage_error: 46.4416 - val_loss: 0.0204 - val_mean_absolute_percentage_error: 16.6245
Epoch 10/100
35/35 [==============================] - 0s 745us/sample - loss: 0.0588 - mean_absolute_percentage_error: 41.9437 - val_loss: 0.0161 - val_mean_absolute_percentage_error: 16.1506
Epoch 11/100
35/35 [==============================] - 0s 735us/sample - loss: 0.0367 - mean_absolute_percentage_error: 38.9170 - val_loss: 0.0205 - val_mean_absolute_percentage_error: 22.4666
Epoch 12/100
35/35 [==============================] - 0s 719us/sample - loss: 0.0285 - mean_absolute_percentage_error: 38.1202 - val_loss: 0.0309 - val_mean_absolute_percentage_error: 28.5176
Epoch 13/100
35/35 [==============================] - 0s 727us/sample - loss: 0.0314 - mean_absolute_percentage_error: 38.2243 - val_loss: 0.0406 - val_mean_absolute_percentage_error: 31.7614
Epoch 14/100
35/35 [==============================] - 0s 702us/sample - loss: 0.0355 - mean_absolute_percentage_error: 37.9354 - val_loss: 0.0450 - val_mean_absolute_percentage_error: 32.1937
Epoch 15/100
35/35 [==============================] - 0s 699us/sample - loss: 0.0358 - mean_absolute_percentage_error: 36.5390 - val_loss: 0.0433 - val_mean_absolute_percentage_error: 30.6931
Epoch 16/100
35/35 [==============================] - 0s 631us/sample - loss: 0.0333 - mean_absolute_percentage_error: 35.0792 - val_loss: 0.0375 - val_mean_absolute_percentage_error: 28.4353
Epoch 17/100
35/35 [==============================] - 0s 626us/sample - loss: 0.0305 - mean_absolute_percentage_error: 34.9512 - val_loss: 0.0306 - val_mean_absolute_percentage_error: 26.3657
Epoch 18/100
35/35 [==============================] - 0s 634us/sample - loss: 0.0289 - mean_absolute_percentage_error: 35.9414 - val_loss: 0.0253 - val_mean_absolute_percentage_error: 24.9581
Epoch 19/100
35/35 [==============================] - 0s 659us/sample - loss: 0.0284 - mean_absolute_percentage_error: 37.1964 - val_loss: 0.0226 - val_mean_absolute_percentage_error: 24.2765
Epoch 20/100
35/35 [==============================] - 0s 682us/sample - loss: 0.0283 - mean_absolute_percentage_error: 37.8857 - val_loss: 0.0221 - val_mean_absolute_percentage_error: 24.1799
Epoch 21/100
35/35 [==============================] - 0s 725us/sample - loss: 0.0283 - mean_absolute_percentage_error: 37.7560 - val_loss: 0.0233 - val_mean_absolute_percentage_error: 24.4904
Epoch 22/100
35/35 [==============================] - 0s 754us/sample - loss: 0.0282 - mean_absolute_percentage_error: 37.1553 - val_loss: 0.0253 - val_mean_absolute_percentage_error: 25.0431
Epoch 23/100
35/35 [==============================] - 0s 646us/sample - loss: 0.0284 - mean_absolute_percentage_error: 36.6211 - val_loss: 0.0271 - val_mean_absolute_percentage_error: 25.6636
Epoch 24/100
35/35 [==============================] - 0s 604us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.4372 - val_loss: 0.0282 - val_mean_absolute_percentage_error: 26.1652
Epoch 25/100
35/35 [==============================] - 0s 721us/sample - loss: 0.0288 - mean_absolute_percentage_error: 36.5204 - val_loss: 0.0284 - val_mean_absolute_percentage_error: 26.4022
Epoch 26/100
35/35 [==============================] - 0s 824us/sample - loss: 0.0288 - mean_absolute_percentage_error: 36.6643 - val_loss: 0.0280 - val_mean_absolute_percentage_error: 26.3404
Epoch 27/100
35/35 [==============================] - 0s 869us/sample - loss: 0.0287 - mean_absolute_percentage_error: 36.7185 - val_loss: 0.0274 - val_mean_absolute_percentage_error: 26.0752
Epoch 28/100
35/35 [==============================] - 0s 1ms/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6859 - val_loss: 0.0269 - val_mean_absolute_percentage_error: 25.7720
Epoch 29/100
35/35 [==============================] - 0s 715us/sample - loss: 0.0285 - mean_absolute_percentage_error: 36.6579 - val_loss: 0.0265 - val_mean_absolute_percentage_error: 25.5705
Epoch 30/100
35/35 [==============================] - 0s 659us/sample - loss: 0.0285 - mean_absolute_percentage_error: 36.6927 - val_loss: 0.0263 - val_mean_absolute_percentage_error: 25.5202
Epoch 31/100
35/35 [==============================] - 0s 679us/sample - loss: 0.0285 - mean_absolute_percentage_error: 36.7627 - val_loss: 0.0263 - val_mean_absolute_percentage_error: 25.5831
Epoch 32/100
35/35 [==============================] - 0s 677us/sample - loss: 0.0285 - mean_absolute_percentage_error: 36.8006 - val_loss: 0.0265 - val_mean_absolute_percentage_error: 25.6859
Epoch 33/100
35/35 [==============================] - 0s 773us/sample - loss: 0.0285 - mean_absolute_percentage_error: 36.7741 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7713
Epoch 34/100
35/35 [==============================] - 0s 788us/sample - loss: 0.0285 - mean_absolute_percentage_error: 36.7127 - val_loss: 0.0270 - val_mean_absolute_percentage_error: 25.8174
Epoch 35/100
35/35 [==============================] - 0s 727us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6695 - val_loss: 0.0270 - val_mean_absolute_percentage_error: 25.8273
Epoch 36/100
35/35 [==============================] - 0s 678us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6699 - val_loss: 0.0270 - val_mean_absolute_percentage_error: 25.8115
Epoch 37/100
35/35 [==============================] - 0s 655us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6975 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7807
Epoch 38/100
35/35 [==============================] - 0s 688us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7206 - val_loss: 0.0267 - val_mean_absolute_percentage_error: 25.7473
Epoch 39/100
35/35 [==============================] - 0s 669us/sample - loss: 0.0285 - mean_absolute_percentage_error: 36.7241 - val_loss: 0.0267 - val_mean_absolute_percentage_error: 25.7243
Epoch 40/100
35/35 [==============================] - 0s 811us/sample - loss: 0.0285 - mean_absolute_percentage_error: 36.7156 - val_loss: 0.0267 - val_mean_absolute_percentage_error: 25.7198
Epoch 41/100
35/35 [==============================] - 0s 761us/sample - loss: 0.0285 - mean_absolute_percentage_error: 36.7096 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7316
Epoch 42/100
35/35 [==============================] - 0s 639us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7101 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7490
Epoch 43/100
35/35 [==============================] - 0s 689us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7116 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7608
Epoch 44/100
35/35 [==============================] - 0s 710us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7089 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7625
Epoch 45/100
35/35 [==============================] - 0s 648us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7040 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7570
Epoch 46/100
35/35 [==============================] - 0s 667us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7018 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7497
Epoch 47/100
35/35 [==============================] - 0s 676us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7040 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7444
Epoch 48/100
35/35 [==============================] - 0s 647us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7073 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7419
Epoch 49/100
35/35 [==============================] - 0s 648us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7081 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7416
Epoch 50/100
35/35 [==============================] - 0s 724us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7062 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7428
Epoch 51/100
35/35 [==============================] - 0s 748us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7037 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7443
Epoch 52/100
35/35 [==============================] - 0s 832us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7026 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7454
Epoch 53/100
35/35 [==============================] - 0s 761us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7026 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7450
Epoch 54/100
35/35 [==============================] - 0s 683us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7025 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7435
Epoch 55/100
35/35 [==============================] - 0s 678us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7019 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7415
Epoch 56/100
35/35 [==============================] - 0s 708us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7013 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7400
Epoch 57/100
35/35 [==============================] - 0s 679us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7010 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7394
Epoch 58/100
35/35 [==============================] - 0s 641us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7008 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7392
Epoch 59/100
35/35 [==============================] - 0s 701us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7003 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7390
Epoch 60/100
35/35 [==============================] - 0s 713us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6995 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7387
Epoch 61/100
35/35 [==============================] - 0s 747us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6988 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7381
Epoch 62/100
35/35 [==============================] - 0s 685us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6983 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7374
Epoch 63/100
35/35 [==============================] - 0s 639us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6980 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7366
Epoch 64/100
35/35 [==============================] - 0s 701us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6976 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7358
Epoch 65/100
35/35 [==============================] - 0s 722us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6971 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7351
Epoch 66/100
35/35 [==============================] - 0s 689us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6965 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7346
Epoch 67/100
35/35 [==============================] - 0s 643us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6961 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7341
Epoch 68/100
35/35 [==============================] - 0s 673us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6956 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7336
Epoch 69/100
35/35 [==============================] - 0s 644us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6951 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7330
Epoch 70/100
35/35 [==============================] - 0s 705us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6946 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7323
Epoch 71/100
35/35 [==============================] - 0s 673us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6941 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7317
Epoch 72/100
35/35 [==============================] - 0s 680us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6937 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7311
Epoch 73/100
35/35 [==============================] - 0s 626us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6932 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7305
Epoch 74/100
35/35 [==============================] - 0s 801us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6927 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7300
Epoch 75/100
35/35 [==============================] - 0s 627us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6922 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7294
Epoch 76/100
35/35 [==============================] - 0s 658us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6918 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7288
Epoch 77/100
35/35 [==============================] - 0s 762us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6913 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7283
Epoch 78/100
35/35 [==============================] - 0s 757us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6908 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7277
Epoch 79/100
35/35 [==============================] - 0s 739us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6904 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7271
Epoch 80/100
35/35 [==============================] - 0s 764us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6899 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7266
Epoch 81/100
35/35 [==============================] - 0s 716us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6894 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7260
Epoch 82/100
35/35 [==============================] - 0s 757us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6890 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7255
Epoch 83/100
35/35 [==============================] - 0s 836us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6885 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7249
Epoch 84/100
35/35 [==============================] - 0s 754us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6881 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7244
Epoch 85/100
35/35 [==============================] - 0s 765us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6876 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7238
Epoch 86/100
35/35 [==============================] - 0s 741us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6872 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7233
Epoch 87/100
35/35 [==============================] - 0s 686us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6867 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7227
Epoch 88/100
35/35 [==============================] - 0s 828us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6862 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7222
Epoch 89/100
35/35 [==============================] - 0s 730us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6858 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7217
Epoch 90/100
35/35 [==============================] - 0s 711us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6853 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7211
Epoch 91/100
35/35 [==============================] - 0s 755us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6849 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7206
Epoch 92/100
35/35 [==============================] - 0s 799us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6844 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7201
Epoch 93/100
35/35 [==============================] - 0s 914us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6840 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7196
Epoch 94/100
35/35 [==============================] - 0s 733us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6836 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7190
Epoch 95/100
35/35 [==============================] - 0s 768us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6831 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7185
Epoch 96/100
35/35 [==============================] - 0s 755us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6827 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7180
Epoch 97/100
35/35 [==============================] - 0s 750us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6822 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7175
Epoch 98/100
35/35 [==============================] - 0s 1ms/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6818 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7170
Epoch 99/100
35/35 [==============================] - 0s 719us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6813 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7165
Epoch 100/100
35/35 [==============================] - 0s 779us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6809 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7160
###Markdown
Evaluation There are several evaluation techniques to see how well our model performed. A common one for linear regression is mean squared error.
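For reference, the short sketch below (an added example, assuming the trained `model` and the standardized arrays above) computes MSE alongside two other common regression metrics, MAE and $R^2$, on the training split.

```python
# MSE, MAE and R^2 on the training split (sketch)
pred = model.predict(standardized_X_train)
errors = standardized_y_train - pred
mse = np.mean(errors ** 2)
mae = np.mean(np.abs(errors))
ss_res = np.sum(errors ** 2)
ss_tot = np.sum((standardized_y_train - np.mean(standardized_y_train)) ** 2)
print(f"MSE: {mse:.2f}, MAE: {mae:.2f}, R^2: {1 - ss_res/ss_tot:.2f}")
```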
###Code
# Predictions
pred_train = model.predict(standardized_X_train)
pred_test = model.predict(standardized_X_test)
# Train and test MSE
train_mse = np.mean((standardized_y_train - pred_train) ** 2)
test_mse = np.mean((standardized_y_test - pred_test) ** 2)
print (f"train_MSE: {train_mse:.2f}, test_MSE: {test_mse:.2f}")
###Output
train_MSE: 0.03, test_MSE: 2.21
###Markdown
Since we only have one feature, it's easy to visually inspect the model.
###Code
# Figure size
plt.figure(figsize=(15,5))
# Plot train data
plt.subplot(1, 2, 1)
plt.title("Train")
plt.scatter(standardized_X_train, standardized_y_train, label='y_train')
plt.plot(standardized_X_train, pred_train, color='red', linewidth=1, linestyle='-', label='model')
plt.legend(loc='lower right')
# Plot test data
plt.subplot(1, 2, 2)
plt.title("Test")
plt.scatter(standardized_X_test, standardized_y_test, label='y_test')
plt.plot(standardized_X_test, pred_test, color='red', linewidth=1, linestyle='-', label='model')
plt.legend(loc='lower right')
# Show plots
plt.show()
###Output
_____no_output_____
###Markdown
Inference After training a model, we can use it to predict on new data.
###Code
# Feed in your own inputs
sample_indices = [10, 15, 25]
X_infer = np.array(sample_indices, dtype=np.float32)
standardized_X_infer = X_scaler.transform(X_infer.reshape(-1, 1))
###Output
_____no_output_____
###Markdown
Recall that we need to unstandardize our predictions.$ \hat{y}_{scaled} = \frac{\hat{y} - \mu_{\hat{y}}}{\sigma_{\hat{y}}} $$ \hat{y} = \hat{y}_{scaled} * \sigma_{\hat{y}} + \mu_{\hat{y}} $
###Code
# Unstandardize predictions
pred_infer = model.predict(standardized_X_infer) * np.sqrt(y_scaler.var_) + y_scaler.mean_
for i, index in enumerate(sample_indices):
print (f"{df.iloc[index]['y']:.2f} (actual) → {pred_infer[i][0]:.2f} (predicted)")
###Output
35.73 (actual) → 43.49 (predicted)
59.34 (actual) → 60.03 (predicted)
97.04 (actual) → 93.09 (predicted)
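As a side note (an added sketch, assuming `model`, `y_scaler`, `standardized_X_infer` and `pred_infer` from the cells above), `StandardScaler.inverse_transform` applies the same multiply-by-$\sigma$, add-$\mu$ step, so it can replace the manual unstandardization.

```python
# Same unstandardization via the scaler's inverse_transform (sketch)
alt_pred = y_scaler.inverse_transform(model.predict(standardized_X_infer))
print(np.allclose(alt_pred, pred_infer))  # expected: True
```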
###Markdown
Interpretability Linear regression offers the great advantage of being highly interpretable. Each feature has a coefficient which signifies its importance/impact on the output variable y. We can interpret our coefficient as follows: by increasing X by 1 unit, we increase y by $W$ (~3.3, per the output below) units. **Note**: Since we standardized our inputs and outputs for gradient descent, we need to apply an operation to our coefficients and intercept to interpret them. See proof in the `From scratch` section above.
###Code
# Unstandardize coefficients (proof is in the `From Scratch` section above)
W = model.layers[0].get_weights()[0][0][0]
b = model.layers[0].get_weights()[1][0]
W_unscaled = W * (y_scaler.scale_/X_scaler.scale_)
b_unscaled = b * y_scaler.scale_ + y_scaler.mean_ - np.sum(W_unscaled*X_scaler.mean_)
print ("[actual] y = 3.5X + noise")
print (f"[model] y_hat = {W_unscaled[0]:.1f}X + {b_unscaled[0]:.1f}")
###Output
[actual] y = 3.5X + noise
[model] y_hat = 3.3X + 10.4
###Markdown
Regularization Regularization helps decrease overfitting. Below is L2 regularization (ridge regression). There are many forms of regularization but they all work to reduce overfitting in our models. With L2 regularization, we are penalizing the weights with large magnitudes by decaying them. Having certain weights with high magnitudes will lead to preferential bias with the inputs and we want the model to work with all the inputs and not just a select few. There are also other types of regularization like L1 (lasso regression), which is useful for creating sparse models where some feature coefficients are zeroed out, or elastic net, which combines the L1 and L2 penalties. **Note**: Regularization is not just for linear regression. You can use it to regularize any model's weights including the ones we will look at in future lessons. $ J(\theta) = \frac{1}{2}\sum_{i}(X_iW - y_i)^2 + \frac{\lambda}{2}W^TW$$ \frac{\partial{J}}{\partial{W}} = X^T (\hat{y} - y) + \lambda W $$W = W - \alpha\frac{\partial{J}}{\partial{W}}$* $\lambda$ is the regularization coefficient
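To connect these equations to code, here is a small NumPy sketch of the L2-regularized update (an added illustration, not the notebook's implementation; it assumes the standardized training arrays and constants above are in scope, uses a smaller step size because this objective is a sum rather than a mean over $N$, and leaves the bias out for brevity since biases are usually not regularized).

```python
# L2-regularized gradient descent sketch: dJ/dW = X^T(y_hat - y) + lambda * W
lam = 1e-2                                     # regularization coefficient (lambda)
lr = 1e-2                                      # smaller step: this J is a sum, not a mean
W_l2 = 0.01 * np.random.randn(INPUT_DIM, 1)
for _ in range(NUM_EPOCHS):
    y_hat_l2 = np.dot(standardized_X_train, W_l2)
    dW_l2 = np.dot(standardized_X_train.T, y_hat_l2 - standardized_y_train) + lam * W_l2
    W_l2 += -lr * dW_l2                        # W = W - alpha * dJ/dW
print(W_l2.ravel())                            # ridge-style weights on the standardized data
```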
###Code
from tensorflow.keras.regularizers import l2
L2_LAMBDA = 1e-2
# Linear model with L2 regularization
class LinearRegressionL2Regularization(Model):
def __init__(self, hidden_dim):
super(LinearRegressionL2Regularization, self).__init__()
self.fc1 = Dense(units=hidden_dim, activation='linear',
kernel_regularizer=l2(l=L2_LAMBDA))
def call(self, x_in, training=False):
"""Forward pass."""
y_pred = self.fc1(x_in)
return y_pred
def sample(self, input_shape):
x_in = Input(shape=input_shape)
return Model(inputs=x_in, outputs=self.call(x_in)).summary()
# Initialize the model
model = LinearRegressionL2Regularization(hidden_dim=HIDDEN_DIM)
model.sample(input_shape=(INPUT_DIM,))
# Compile
model.compile(optimizer=Adam(lr=LEARNING_RATE),
loss=MeanSquaredError(),
metrics=[MeanAbsolutePercentageError()])
# Training
model.fit(x=standardized_X_train,
y=standardized_y_train,
validation_data=(standardized_X_val, standardized_y_val),
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
shuffle=SHUFFLE,
verbose=1)
# Predictions
pred_train = model.predict(standardized_X_train)
pred_test = model.predict(standardized_X_test)
# Train and test MSE
train_mse = np.mean((standardized_y_train - pred_train) ** 2)
test_mse = np.mean((standardized_y_test - pred_test) ** 2)
print (f"train_MSE: {train_mse:.2f}, test_MSE: {test_mse:.2f}")
# Unstandardize coefficients (proof is in the `From Scratch` section above)
W = model.layers[0].get_weights()[0][0][0]
b = model.layers[0].get_weights()[1][0]
W_unscaled = W * (y_scaler.scale_/X_scaler.scale_)
b_unscaled = b * y_scaler.scale_ + y_scaler.mean_ - np.sum(W_unscaled*X_scaler.mean_)
print ("[actual] y = 3.5X + noise")
print (f"[model] y_hat = {W_unscaled[0]:.1f}X + {b_unscaled[0]:.1f}")
###Output
[actual] y = 3.5X + noise
[model] y_hat = 3.4X + 9.5
###Markdown
Regularization didn't help much with this specific example because our data is generated from a perfect linear equation but for large realistic data, regularization can help our model generalize well. Categorical variables In our example, the feature was a continuous variable but what if we also have features that are categorical? One option is to treat the categorical variables as one-hot encoded variables. This is very easy to do with Pandas and once you create the dummy variables, you can use the same steps as above to train your linear model.
###Code
# Create data with categorical features
cat_data = pd.DataFrame(['a', 'b', 'c', 'a'], columns=['favorite_letter'])
cat_data.head()
dummy_cat_data = pd.get_dummies(cat_data)
dummy_cat_data.head()
###Output
_____no_output_____
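Building on the cell above, a tiny added sketch with made-up values shows how dummy columns can be concatenated with continuous features and fed to the same linear model as before.

```python
# Mixing a categorical and a continuous feature (sketch, hypothetical values)
import pandas as pd

mixed = pd.DataFrame({'favorite_letter': ['a', 'b', 'c', 'a'],
                      'hours': [1.0, 2.0, 3.0, 4.0]})
X_mixed = pd.get_dummies(mixed, columns=['favorite_letter']).values
print(X_mixed.shape)  # (4, 4): 'hours' plus one column per letter
```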
###Markdown
Linear RegressionIn this lesson we will learn about linear regression. We will understand the basic math behind it, implement it in Python, and then look at ways of interpreting the linear model. Overview Our goal is to learn a linear model $\hat{y}$ that models $y$ given $X$. $\hat{y} = XW + b$* $\hat{y}$ = predictions | $\in \mathbb{R}^{NX1}$ ($N$ is the number of samples)* $X$ = inputs | $\in \mathbb{R}^{NXD}$ ($D$ is the number of features)* $W$ = weights | $\in \mathbb{R}^{DX1}$ * $b$ = bias | $\in \mathbb{R}^{1}$ * **Objective:** Use inputs $X$ to predict the output $\hat{y}$ using a linear model. The model will be a line of best fit that minimizes the distance between the predicted (model's output) and target (ground truth) values. Training data $(X, y)$ is used to train the model and learn the weights $W$ using gradient descent.* **Advantages:** * Computationally simple. * Highly interpretable. * Can account for continuous and categorical features.* **Disadvantages:** * The model will perform well only when the data is linearly separable (for classification). * Usually not used for classification and only for regression.* **Miscellaneous:** You can also use linear regression for binary classification tasks where if the predicted continuous value is above a threshold, it belongs to a certain class. But we will cover better techniques for classification in future lessons and will focus on linear regression for continuous regression tasks only. Training 1. Randomly initialize the model's weights $W$ (we'll cover more effective initialization strategies in future lessons).2. Feed inputs $X$ into the model to receive the predictions $\hat{y}$. * $\hat{y} = XW + b$3. Compare the predictions $\hat{y}$ with the actual target values $y$ using the objective (cost) function to determine the loss $J$. A common objective function for linear regression is mean squared error (MSE). This function calculates the difference between the predicted and target values and squares it. * $J(\theta) = MSE = \frac{1}{N} \sum_i (y_i - \hat{y}_i)^2 $ * ${y}$ = ground truth | $\in \mathbb{R}^{NX1}$ * $\hat{y}$ = predictions | $\in \mathbb{R}^{NX1}$4. Calculate the gradient of loss $J(\theta)$ w.r.t. the model weights. * $J(\theta) = \frac{1}{N} \sum_i (y_i - \hat{y}_i)^2 = \frac{1}{N}\sum_i (y_i - X_iW)^2 $ * $\frac{\partial{J}}{\partial{W}} = -\frac{2}{N} \sum_i (y_i - X_iW) X_i = -\frac{2}{N} \sum_i (y_i - \hat{y}_i) X_i$ * $\frac{\partial{J}}{\partial{b}} = -\frac{2}{N} \sum_i (y_i - X_iW)1 = -\frac{2}{N} \sum_i (y_i - \hat{y}_i)1$5. Update the weights $W$ using a small learning rate $\alpha$. The simplified intuition is that the gradient tells you the direction for how to increase something so subtracting it will help you go the other way since we want to decrease loss $J(\theta)$: * $W = W - \alpha\frac{\partial{J}}{\partial{W}}$ * $b = b - \alpha\frac{\partial{J}}{\partial{b}}$6. Repeat steps 2 - 5 to minimize the loss and train the model. Set up
###Code
# Use TensorFlow 2.x
%tensorflow_version 2.x
import os
import numpy as np
import tensorflow as tf
# Arguments
SEED = 1234
SHUFFLE = True
NUM_SAMPLES = 50
TRAIN_SIZE = 0.7
VAL_SIZE = 0.15
TEST_SIZE = 0.15
NUM_EPOCHS = 100
BATCH_SIZE = 10
INPUT_DIM = 1
HIDDEN_DIM = 1
LEARNING_RATE = 1e-1
# Set seed for reproducibility
np.random.seed(SEED)
tf.random.set_seed(SEED)
###Output
_____no_output_____
###Markdown
Data We're going to create some simple dummy data to apply linear regression on.
###Code
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Components We're going to create some dummy data to train our linear regression model on. We're going to create roughly linear data (`y = 3.5X + 10`) with some random noise so the points don't all align in a straight line. Our goal is to have the model converge to a similar linear equation (there will be slight variance since we added some noise).
###Code
# Generate synthetic data
def generate_data(num_samples):
"""Generate dummy data for linear regression."""
X = np.array(range(num_samples))
random_noise = np.random.uniform(-10,20,size=num_samples)
y = 3.5*X + random_noise # add some noise
return X, y
###Output
_____no_output_____
###Markdown
Operations Now let's load the data onto a dataframe and visualize it.
###Code
# Generate random (linear) data
X, y = generate_data(num_samples=NUM_SAMPLES)
data = np.vstack([X, y]).T
df = pd.DataFrame(data, columns=['X', 'y'])
X = df[['X']].values
y = df[['y']].values
df.head()
# Scatter plot
plt.title("Generated data")
plt.scatter(x=df['X'], y=df['y'])
plt.show()
###Output
_____no_output_____
###Markdown
Split data
###Code
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
Components Since our task is a regression task, we will randomly split our dataset into **three** sets: train, validation and test data splits.* train: used to train our model.* val: used to validate our model's performance during training.* test: used to do an evaluation of our fully trained model. Splitting the data for classification tasks is a bit different in that we want similar class distributions in each data split. We'll see this in action in the next lesson.
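For that classification case, `train_test_split` has a `stratify` argument that keeps class proportions similar across splits; the toy sketch below (an added example with made-up labels) shows the idea.

```python
# Stratified split sketch: class counts stay proportional in train and test
from sklearn.model_selection import train_test_split
import numpy as np

X_toy = np.arange(10).reshape(-1, 1)               # hypothetical inputs
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])  # hypothetical class labels
X_tr, X_te, y_tr, y_te = train_test_split(
    X_toy, labels, test_size=0.3, stratify=labels, random_state=0)
print(np.bincount(y_tr), np.bincount(y_te))        # roughly 70/30 of each class
```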
###Code
def train_val_test_split(X, y, val_size, test_size, shuffle):
"""Split data into train/val/test datasets.
"""
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=test_size, shuffle=shuffle)
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=val_size, shuffle=shuffle)
return X_train, X_val, X_test, y_train, y_val, y_test
###Output
_____no_output_____
###Markdown
Operations
###Code
# Create data splits
X_train, X_val, X_test, y_train, y_val, y_test = train_val_test_split(
X, y, val_size=VAL_SIZE, test_size=TEST_SIZE, shuffle=SHUFFLE)
print (f"X_train: {X_train.shape}, y_train: {y_train.shape}")
print (f"X_val: {X_val.shape}, y_test: {y_val.shape}")
print (f"X_test: {X_test.shape}, y_test: {y_test.shape}")
print (f"X_train[0]: {X_train[0]}")
print (f"y_train[0]: {y_train[0]}")
###Output
X_train: (35, 1), y_train: (35, 1)
X_val: (7, 1), y_val: (7, 1)
X_test: (8, 1), y_test: (8, 1)
X_train[0]: [12.]
y_train[0]: [52.50388806]
###Markdown
Standardize data We need to standardize our data (zero mean and unit variance) in order to optimize quickly. $z = \frac{x_i - \mu}{\sigma}$* $z$ = standardized value* $x_i$ = inputs* $\mu$ = mean* $\sigma$ = standard deviation
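The scaler used below implements exactly this formula; as a quick added sketch (assuming `X_train` from the split above), the manual z-score matches `StandardScaler`'s output.

```python
# Manual z-score vs. StandardScaler (sketch)
from sklearn.preprocessing import StandardScaler
import numpy as np

manual = (X_train - X_train.mean(axis=0)) / X_train.std(axis=0)
print(np.allclose(manual, StandardScaler().fit_transform(X_train)))  # expected: True
```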
###Code
from sklearn.preprocessing import StandardScaler
# Standardize the data (mean=0, std=1) using training data
X_scaler = StandardScaler().fit(X_train)
y_scaler = StandardScaler().fit(y_train)
# Apply scaler on training and test data
standardized_X_train = X_scaler.transform(X_train)
standardized_y_train = y_scaler.transform(y_train).ravel()
standardized_X_val = X_scaler.transform(X_val)
standardized_y_val = y_scaler.transform(y_val).ravel()
standardized_X_test = X_scaler.transform(X_test)
standardized_y_test = y_scaler.transform(y_test).ravel()
# Check (means should be ~0 and std should be ~1)
print (f"standardized_X_train: mean: {np.mean(standardized_X_train, axis=0)[0]}, std: {np.std(standardized_X_train, axis=0)[0]}")
print (f"standardized_y_train: mean: {np.mean(standardized_y_train, axis=0)}, std: {np.std(standardized_y_train, axis=0)}")
print (f"standardized_X_val: mean: {np.mean(standardized_X_val, axis=0)[0]}, std: {np.std(standardized_X_val, axis=0)[0]}")
print (f"standardized_y_val: mean: {np.mean(standardized_y_val, axis=0)}, std: {np.std(standardized_y_val, axis=0)}")
print (f"standardized_X_test: mean: {np.mean(standardized_X_test, axis=0)[0]}, std: {np.std(standardized_X_test, axis=0)[0]}")
print (f"standardized_y_test: mean: {np.mean(standardized_y_test, axis=0)}, std: {np.std(standardized_y_test, axis=0)}")
###Output
standardized_X_train: mean: -1.2688263138573217e-16, std: 1.0
standardized_y_train: mean: 8.961085841617335e-17, std: 1.0
standardized_X_val: mean: 0.455275148918323, std: 0.8941749819083381
standardized_y_val: mean: 0.48468545395300805, std: 0.9785039908565315
standardized_X_test: mean: -0.44282621906508784, std: 1.0652341142978086
standardized_y_test: mean: -0.4940327711822252, std: 1.0650188983159736
###Markdown
From scratchBefore we use TensorFlow 2.0 + Keras we will implement linear regression from scratch using NumPy so we can:1. Absorb the fundamental concepts by implementing from scratch2. Appreciate the level of abstraction TensorFlow providesIt's normal to find the math and code in this section slightly complex. You can still read each of the steps to build intuition for when we implement this using TensorFlow + Keras.
###Code
standardized_y_train = standardized_y_train.reshape(-1, 1)
print (f"X: {standardized_X_train.shape}")
print (f"y: {standardized_y_train.shape}")
###Output
X: (35, 1)
y: (35, 1)
###Markdown
Our goal is to learn a linear model $\hat{y}$ that models $y$ given $X$. $\hat{y} = XW + b$* $\hat{y}$ = predictions | $\in \mathbb{R}^{NX1}$ ($N$ is the number of samples)* $X$ = inputs | $\in \mathbb{R}^{NXD}$ ($D$ is the number of features)* $W$ = weights | $\in \mathbb{R}^{DX1}$ * $b$ = bias | $\in \mathbb{R}^{1}$ 1. Randomly initialize the model's weights $W$.
###Code
# Initialize random weights
W = 0.01 * np.random.randn(INPUT_DIM, 1)
b = np.zeros((1, 1))
print (f"W: {W.shape}")
print (f"b: {b.shape}")
###Output
W: (1, 1)
b: (1, 1)
###Markdown
2. Feed inputs $X$ into the model to receive the predictions $\hat{y}$. * $\hat{y} = XW + b$
###Code
# Forward pass [NX1] · [1X1] = [NX1]
y_hat = np.dot(standardized_X_train, W) + b
print (f"y_hat: {y_hat.shape}")
###Output
y_hat: (35, 1)
###Markdown
3. Compare the predictions $\hat{y}$ with the actual target values $y$ using the objective (cost) function to determine the loss $J$. A common objective function for linear regression is mean squared error (MSE). This function calculates the difference between the predicted and target values and squares it. * $J(\theta) = MSE = \frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2 $ * ${y}$ = ground truth | $\in \mathbb{R}^{NX1}$ * $\hat{y}$ = predictions | $\in \mathbb{R}^{NX1}$
###Code
# Loss
N = len(standardized_y_train)
loss = (1/N) * np.sum((standardized_y_train - y_hat)**2)
print (f"loss: {loss:.2f}")
###Output
loss: 1.02
###Markdown
4. Calculate the gradient of loss $J(\theta)$ w.r.t. the model weights. * $J(\theta) = \frac{1}{N} \sum_i (y_i - \hat{y}_i)^2 = \frac{1}{N}\sum_i (y_i - X_iW)^2 $ * $\frac{\partial{J}}{\partial{W}} = -\frac{2}{N} \sum_i (y_i - X_iW) X_i = -\frac{2}{N} \sum_i (y_i - \hat{y}_i) X_i$ * $\frac{\partial{J}}{\partial{b}} = -\frac{2}{N} \sum_i (y_i - X_iW)1 = -\frac{2}{N} \sum_i (y_i - \hat{y}_i)1$
###Code
# Backpropagation
dW = -(2/N) * np.sum((standardized_y_train - y_hat) * standardized_X_train)
db = -(2/N) * np.sum((standardized_y_train - y_hat) * 1)
###Output
_____no_output_____
###Markdown
5. Update the weights $W$ using a small learning rate $\alpha$. The simplified intuition is that the gradient tells you the direction for how to increase something so subtracting it will help you go the other way since we want to decrease loss $J(\theta)$: * $W = W - \alpha\frac{\partial{J}}{\partial{W}}$ * $b = b - \alpha\frac{\partial{J}}{\partial{b}}$
###Code
# Update weights
W += -LEARNING_RATE * dW
b += -LEARNING_RATE * db
###Output
_____no_output_____
###Markdown
6. Repeat steps 2 - 5 to minimize the loss and train the model.
###Code
# Initialize random weights
W = 0.01 * np.random.randn(INPUT_DIM, 1)
b = np.zeros((1, ))
# Training loop
for epoch_num in range(NUM_EPOCHS):
# Forward pass [NX1] · [1X1] = [NX1]
y_hat = np.dot(standardized_X_train, W) + b
# Loss
loss = (1/len(standardized_y_train)) * np.sum((standardized_y_train - y_hat)**2)
# show progress
if epoch_num%10 == 0:
print (f"Epoch: {epoch_num}, loss: {loss:.3f}")
# Backpropagation
dW = -(2/N) * np.sum((standardized_y_train - y_hat) * standardized_X_train)
db = -(2/N) * np.sum((standardized_y_train - y_hat) * 1)
# Update weights
W += -LEARNING_RATE * dW
b += -LEARNING_RATE * db
# Predictions
pred_train = W*standardized_X_train + b
pred_test = W*standardized_X_test + b
# Train and test MSE
train_mse = np.mean((standardized_y_train - pred_train) ** 2)
test_mse = np.mean((standardized_y_test - pred_test) ** 2)
print (f"train_MSE: {train_mse:.2f}, test_MSE: {test_mse:.2f}")
# Figure size
plt.figure(figsize=(15,5))
# Plot train data
plt.subplot(1, 2, 1)
plt.title("Train")
plt.scatter(standardized_X_train, standardized_y_train, label='y_train')
plt.plot(standardized_X_train, pred_train, color='red', linewidth=1, linestyle='-', label='model')
plt.legend(loc='lower right')
# Plot test data
plt.subplot(1, 2, 2)
plt.title("Test")
plt.scatter(standardized_X_test, standardized_y_test, label='y_test')
plt.plot(standardized_X_test, pred_test, color='red', linewidth=1, linestyle='-', label='model')
plt.legend(loc='lower right')
# Show plots
plt.show()
###Output
_____no_output_____
###Markdown
Since we standardized our inputs and outputs, our weights were fit to those standardized values. So we need to unstandardize our weights so we can compare it to our true weight (3.5).Note that both X and y were standardized.$\hat{y}_{scaled} = b_{scaled} + \sum_{j=1}^{k}W_{{scaled}_j}x_{{scaled}_j}$* $y_{scaled} = \frac{\hat{y} - \bar{y}}{\sigma_y}$* $x_{scaled} = \frac{x_j - \bar{x}_j}{\sigma_j}$$\frac{\hat{y} - \bar{y}}{\sigma_y} = b_{scaled} + \sum_{j=1}^{k}W_{{scaled}_j}\frac{x_j - \bar{x}_j}{\sigma_j}$$ \hat{y}_{scaled} = \frac{\hat{y}_{unscaled} - \bar{y}}{\sigma_y} = {b_{scaled}} + \sum_{j=1}^{k} {W}_{{scaled}_j} (\frac{x_j - \bar{x}_j}{\sigma_j}) $$\hat{y}_{unscaled} = b_{scaled}\sigma_y + \bar{y} - \sum_{j=1}^{k} {W}_{{scaled}_j}(\frac{\sigma_y}{\sigma_j})\bar{x}_j + \sum_{j=1}^{k}{W}_{{scaled}_j}(\frac{\sigma_y}{\sigma_j})x_j $In the expression above, we can see the expression $\hat{y}_{unscaled} = W_{unscaled}x + b_{unscaled} $ where* $W_{unscaled} = \sum_{j=1}^{k}{W}_j(\frac{\sigma_y}{\sigma_j}) $* $b_{unscaled} = b_{scaled}\sigma_y + \bar{y} - \sum_{j=1}^{k} {W}_j(\frac{\sigma_y}{\sigma_j})\bar{x}_j$
###Code
# Unscaled weights
W_unscaled = W * (y_scaler.scale_/X_scaler.scale_)
b_unscaled = b * y_scaler.scale_ + y_scaler.mean_ - np.sum(W_unscaled*X_scaler.mean_)
print ("[actual] y = 3.5X + noise")
print (f"[model] y_hat = {W_unscaled[0][0]:.1f}X + {b_unscaled[0]:.1f}")
###Output
[actual] y = 3.5X + noise
[model] y_hat = 3.4X + 9.5
###Markdown
Now let's implement linear regression with TensorFlow + Keras. TensorFlow + Keras We will be using [Dense layers](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Dense) in our MLP implementation. The layer applies an activation function on the dot product of the layer's inputs and its weights.$ z = \text{activation}(XW)$
###Code
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
x = Input(shape=(INPUT_DIM,))
fc = Dense(units=HIDDEN_DIM, activation='linear')
z = fc(x)
W, b = fc.weights
print (f"z {z.shape} = x {x.shape} · W {W.shape} + b {b.shape}")
###Output
z (None, 1) = x (None, 1) · W (1, 1) + b (1,)
###Markdown
Model
###Code
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
from tensorflow.keras.losses import MeanSquaredError
from tensorflow.keras.metrics import MeanAbsolutePercentageError
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
# Linear regression
class LinearRegression(Model):
def __init__(self, hidden_dim):
super(LinearRegression, self).__init__()
self.fc1 = Dense(units=hidden_dim, activation='linear')
def call(self, x_in, training=False):
"""Forward pass."""
y_pred = self.fc1(x_in)
return y_pred
def sample(self, input_shape):
x_in = Input(shape=input_shape)
return Model(inputs=x_in, outputs=self.call(x_in)).summary()
# Initialize the model
model = LinearRegression(hidden_dim=HIDDEN_DIM)
model.sample(input_shape=(INPUT_DIM,))
###Output
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 1)] 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 2
=================================================================
Total params: 2
Trainable params: 2
Non-trainable params: 0
_________________________________________________________________
###Markdown
When we implemented linear regression from scratch, we used batch gradient descent to update our weights. But there are actually many different [gradient descent optimization algorithms](https://ruder.io/optimizing-gradient-descent/) to choose from and it depends on the situation. However, the [ADAM optimizer](https://ruder.io/optimizing-gradient-descent/index.htmlgradientdescentoptimizationalgorithms/adam) has become a standard algorithm for most cases.
###Code
# Compile
model.compile(optimizer=Adam(lr=LEARNING_RATE),
loss=MeanSquaredError(),
metrics=[MeanAbsolutePercentageError()])
###Output
_____no_output_____
###Markdown
Here is the full list of options for [optimizer](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/optimizers), [loss](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/losses) and [metrics](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/metrics). Training When we implemented linear regression from scratch, we used batch gradient descent to update our weights. This means that we calculated the gradients using the entire training dataset. We also could've updated our weights using stochastic gradient descent (SGD) where we pass in one training example at a time. The current standard is mini-batch gradient descent, which strikes a balance between batch and stochastic GD, where we update the weights using a mini-batch of n (`BATCH_SIZE`) samples.
###Code
# Training
model.fit(x=standardized_X_train,
y=standardized_y_train,
validation_data=(standardized_X_val, standardized_y_val),
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
shuffle=False,
verbose=1)
###Output
Train on 35 samples, validate on 7 samples
Epoch 1/100
35/35 [==============================] - 1s 29ms/sample - loss: 2.2303 - mean_absolute_percentage_error: 189.2383 - val_loss: 1.8631 - val_mean_absolute_percentage_error: 110.1029
Epoch 2/100
35/35 [==============================] - 0s 604us/sample - loss: 1.1683 - mean_absolute_percentage_error: 129.5676 - val_loss: 0.8827 - val_mean_absolute_percentage_error: 85.6471
Epoch 3/100
35/35 [==============================] - 0s 650us/sample - loss: 0.5139 - mean_absolute_percentage_error: 72.2302 - val_loss: 0.3088 - val_mean_absolute_percentage_error: 63.4733
Epoch 4/100
35/35 [==============================] - 0s 675us/sample - loss: 0.1775 - mean_absolute_percentage_error: 58.3600 - val_loss: 0.0744 - val_mean_absolute_percentage_error: 43.0542
Epoch 5/100
35/35 [==============================] - 0s 635us/sample - loss: 0.0590 - mean_absolute_percentage_error: 55.3533 - val_loss: 0.0241 - val_mean_absolute_percentage_error: 29.1076
Epoch 6/100
35/35 [==============================] - 0s 724us/sample - loss: 0.0501 - mean_absolute_percentage_error: 52.8501 - val_loss: 0.0298 - val_mean_absolute_percentage_error: 21.7504
Epoch 7/100
35/35 [==============================] - 0s 718us/sample - loss: 0.0740 - mean_absolute_percentage_error: 54.2517 - val_loss: 0.0345 - val_mean_absolute_percentage_error: 17.8960
Epoch 8/100
35/35 [==============================] - 0s 685us/sample - loss: 0.0897 - mean_absolute_percentage_error: 51.1155 - val_loss: 0.0292 - val_mean_absolute_percentage_error: 17.8972
Epoch 9/100
35/35 [==============================] - 0s 693us/sample - loss: 0.0826 - mean_absolute_percentage_error: 46.4416 - val_loss: 0.0204 - val_mean_absolute_percentage_error: 16.6245
Epoch 10/100
35/35 [==============================] - 0s 745us/sample - loss: 0.0588 - mean_absolute_percentage_error: 41.9437 - val_loss: 0.0161 - val_mean_absolute_percentage_error: 16.1506
Epoch 11/100
35/35 [==============================] - 0s 735us/sample - loss: 0.0367 - mean_absolute_percentage_error: 38.9170 - val_loss: 0.0205 - val_mean_absolute_percentage_error: 22.4666
Epoch 12/100
35/35 [==============================] - 0s 719us/sample - loss: 0.0285 - mean_absolute_percentage_error: 38.1202 - val_loss: 0.0309 - val_mean_absolute_percentage_error: 28.5176
Epoch 13/100
35/35 [==============================] - 0s 727us/sample - loss: 0.0314 - mean_absolute_percentage_error: 38.2243 - val_loss: 0.0406 - val_mean_absolute_percentage_error: 31.7614
Epoch 14/100
35/35 [==============================] - 0s 702us/sample - loss: 0.0355 - mean_absolute_percentage_error: 37.9354 - val_loss: 0.0450 - val_mean_absolute_percentage_error: 32.1937
Epoch 15/100
35/35 [==============================] - 0s 699us/sample - loss: 0.0358 - mean_absolute_percentage_error: 36.5390 - val_loss: 0.0433 - val_mean_absolute_percentage_error: 30.6931
Epoch 16/100
35/35 [==============================] - 0s 631us/sample - loss: 0.0333 - mean_absolute_percentage_error: 35.0792 - val_loss: 0.0375 - val_mean_absolute_percentage_error: 28.4353
Epoch 17/100
35/35 [==============================] - 0s 626us/sample - loss: 0.0305 - mean_absolute_percentage_error: 34.9512 - val_loss: 0.0306 - val_mean_absolute_percentage_error: 26.3657
Epoch 18/100
35/35 [==============================] - 0s 634us/sample - loss: 0.0289 - mean_absolute_percentage_error: 35.9414 - val_loss: 0.0253 - val_mean_absolute_percentage_error: 24.9581
Epoch 19/100
35/35 [==============================] - 0s 659us/sample - loss: 0.0284 - mean_absolute_percentage_error: 37.1964 - val_loss: 0.0226 - val_mean_absolute_percentage_error: 24.2765
Epoch 20/100
35/35 [==============================] - 0s 682us/sample - loss: 0.0283 - mean_absolute_percentage_error: 37.8857 - val_loss: 0.0221 - val_mean_absolute_percentage_error: 24.1799
Epoch 21/100
35/35 [==============================] - 0s 725us/sample - loss: 0.0283 - mean_absolute_percentage_error: 37.7560 - val_loss: 0.0233 - val_mean_absolute_percentage_error: 24.4904
Epoch 22/100
35/35 [==============================] - 0s 754us/sample - loss: 0.0282 - mean_absolute_percentage_error: 37.1553 - val_loss: 0.0253 - val_mean_absolute_percentage_error: 25.0431
Epoch 23/100
35/35 [==============================] - 0s 646us/sample - loss: 0.0284 - mean_absolute_percentage_error: 36.6211 - val_loss: 0.0271 - val_mean_absolute_percentage_error: 25.6636
Epoch 24/100
35/35 [==============================] - 0s 604us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.4372 - val_loss: 0.0282 - val_mean_absolute_percentage_error: 26.1652
Epoch 25/100
35/35 [==============================] - 0s 721us/sample - loss: 0.0288 - mean_absolute_percentage_error: 36.5204 - val_loss: 0.0284 - val_mean_absolute_percentage_error: 26.4022
Epoch 26/100
35/35 [==============================] - 0s 824us/sample - loss: 0.0288 - mean_absolute_percentage_error: 36.6643 - val_loss: 0.0280 - val_mean_absolute_percentage_error: 26.3404
Epoch 27/100
35/35 [==============================] - 0s 869us/sample - loss: 0.0287 - mean_absolute_percentage_error: 36.7185 - val_loss: 0.0274 - val_mean_absolute_percentage_error: 26.0752
Epoch 28/100
35/35 [==============================] - 0s 1ms/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6859 - val_loss: 0.0269 - val_mean_absolute_percentage_error: 25.7720
Epoch 29/100
35/35 [==============================] - 0s 715us/sample - loss: 0.0285 - mean_absolute_percentage_error: 36.6579 - val_loss: 0.0265 - val_mean_absolute_percentage_error: 25.5705
Epoch 30/100
35/35 [==============================] - 0s 659us/sample - loss: 0.0285 - mean_absolute_percentage_error: 36.6927 - val_loss: 0.0263 - val_mean_absolute_percentage_error: 25.5202
Epoch 31/100
35/35 [==============================] - 0s 679us/sample - loss: 0.0285 - mean_absolute_percentage_error: 36.7627 - val_loss: 0.0263 - val_mean_absolute_percentage_error: 25.5831
Epoch 32/100
35/35 [==============================] - 0s 677us/sample - loss: 0.0285 - mean_absolute_percentage_error: 36.8006 - val_loss: 0.0265 - val_mean_absolute_percentage_error: 25.6859
Epoch 33/100
35/35 [==============================] - 0s 773us/sample - loss: 0.0285 - mean_absolute_percentage_error: 36.7741 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7713
Epoch 34/100
35/35 [==============================] - 0s 788us/sample - loss: 0.0285 - mean_absolute_percentage_error: 36.7127 - val_loss: 0.0270 - val_mean_absolute_percentage_error: 25.8174
Epoch 35/100
35/35 [==============================] - 0s 727us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6695 - val_loss: 0.0270 - val_mean_absolute_percentage_error: 25.8273
Epoch 36/100
35/35 [==============================] - 0s 678us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6699 - val_loss: 0.0270 - val_mean_absolute_percentage_error: 25.8115
Epoch 37/100
35/35 [==============================] - 0s 655us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6975 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7807
Epoch 38/100
35/35 [==============================] - 0s 688us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7206 - val_loss: 0.0267 - val_mean_absolute_percentage_error: 25.7473
Epoch 39/100
35/35 [==============================] - 0s 669us/sample - loss: 0.0285 - mean_absolute_percentage_error: 36.7241 - val_loss: 0.0267 - val_mean_absolute_percentage_error: 25.7243
Epoch 40/100
35/35 [==============================] - 0s 811us/sample - loss: 0.0285 - mean_absolute_percentage_error: 36.7156 - val_loss: 0.0267 - val_mean_absolute_percentage_error: 25.7198
Epoch 41/100
35/35 [==============================] - 0s 761us/sample - loss: 0.0285 - mean_absolute_percentage_error: 36.7096 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7316
Epoch 42/100
35/35 [==============================] - 0s 639us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7101 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7490
Epoch 43/100
35/35 [==============================] - 0s 689us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7116 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7608
Epoch 44/100
35/35 [==============================] - 0s 710us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7089 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7625
Epoch 45/100
35/35 [==============================] - 0s 648us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7040 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7570
Epoch 46/100
35/35 [==============================] - 0s 667us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7018 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7497
Epoch 47/100
35/35 [==============================] - 0s 676us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7040 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7444
Epoch 48/100
35/35 [==============================] - 0s 647us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7073 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7419
Epoch 49/100
35/35 [==============================] - 0s 648us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7081 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7416
Epoch 50/100
35/35 [==============================] - 0s 724us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7062 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7428
Epoch 51/100
35/35 [==============================] - 0s 748us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7037 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7443
Epoch 52/100
35/35 [==============================] - 0s 832us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7026 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7454
Epoch 53/100
35/35 [==============================] - 0s 761us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7026 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7450
Epoch 54/100
35/35 [==============================] - 0s 683us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7025 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7435
Epoch 55/100
35/35 [==============================] - 0s 678us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7019 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7415
Epoch 56/100
35/35 [==============================] - 0s 708us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7013 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7400
Epoch 57/100
35/35 [==============================] - 0s 679us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7010 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7394
Epoch 58/100
35/35 [==============================] - 0s 641us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7008 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7392
Epoch 59/100
35/35 [==============================] - 0s 701us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.7003 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7390
Epoch 60/100
35/35 [==============================] - 0s 713us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6995 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7387
Epoch 61/100
35/35 [==============================] - 0s 747us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6988 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7381
Epoch 62/100
35/35 [==============================] - 0s 685us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6983 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7374
Epoch 63/100
35/35 [==============================] - 0s 639us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6980 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7366
Epoch 64/100
35/35 [==============================] - 0s 701us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6976 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7358
Epoch 65/100
35/35 [==============================] - 0s 722us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6971 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7351
Epoch 66/100
35/35 [==============================] - 0s 689us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6965 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7346
Epoch 67/100
35/35 [==============================] - 0s 643us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6961 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7341
Epoch 68/100
35/35 [==============================] - 0s 673us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6956 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7336
Epoch 69/100
35/35 [==============================] - 0s 644us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6951 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7330
Epoch 70/100
35/35 [==============================] - 0s 705us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6946 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7323
Epoch 71/100
35/35 [==============================] - 0s 673us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6941 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7317
Epoch 72/100
35/35 [==============================] - 0s 680us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6937 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7311
Epoch 73/100
35/35 [==============================] - 0s 626us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6932 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7305
Epoch 74/100
35/35 [==============================] - 0s 801us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6927 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7300
Epoch 75/100
35/35 [==============================] - 0s 627us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6922 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7294
Epoch 76/100
35/35 [==============================] - 0s 658us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6918 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7288
Epoch 77/100
35/35 [==============================] - 0s 762us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6913 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7283
Epoch 78/100
35/35 [==============================] - 0s 757us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6908 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7277
Epoch 79/100
35/35 [==============================] - 0s 739us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6904 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7271
Epoch 80/100
35/35 [==============================] - 0s 764us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6899 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7266
Epoch 81/100
35/35 [==============================] - 0s 716us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6894 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7260
Epoch 82/100
35/35 [==============================] - 0s 757us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6890 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7255
Epoch 83/100
35/35 [==============================] - 0s 836us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6885 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7249
Epoch 84/100
35/35 [==============================] - 0s 754us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6881 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7244
Epoch 85/100
35/35 [==============================] - 0s 765us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6876 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7238
Epoch 86/100
35/35 [==============================] - 0s 741us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6872 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7233
Epoch 87/100
35/35 [==============================] - 0s 686us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6867 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7227
Epoch 88/100
35/35 [==============================] - 0s 828us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6862 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7222
Epoch 89/100
35/35 [==============================] - 0s 730us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6858 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7217
Epoch 90/100
35/35 [==============================] - 0s 711us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6853 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7211
Epoch 91/100
35/35 [==============================] - 0s 755us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6849 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7206
Epoch 92/100
35/35 [==============================] - 0s 799us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6844 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7201
Epoch 93/100
35/35 [==============================] - 0s 914us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6840 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7196
Epoch 94/100
35/35 [==============================] - 0s 733us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6836 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7190
Epoch 95/100
35/35 [==============================] - 0s 768us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6831 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7185
Epoch 96/100
35/35 [==============================] - 0s 755us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6827 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7180
Epoch 97/100
35/35 [==============================] - 0s 750us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6822 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7175
Epoch 98/100
35/35 [==============================] - 0s 1ms/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6818 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7170
Epoch 99/100
35/35 [==============================] - 0s 719us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6813 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7165
Epoch 100/100
35/35 [==============================] - 0s 779us/sample - loss: 0.0286 - mean_absolute_percentage_error: 36.6809 - val_loss: 0.0268 - val_mean_absolute_percentage_error: 25.7160
###Markdown
Evaluation There are several evaluation techniques for assessing how well our model performs. A common one for linear regression is mean squared error.
###Code
# Predictions
pred_train = model.predict(standardized_X_train)
pred_test = model.predict(standardized_X_test)
# Train and test MSE
train_mse = np.mean((standardized_y_train - pred_train) ** 2)
test_mse = np.mean((standardized_y_test - pred_test) ** 2)
print (f"train_MSE: {train_mse:.2f}, test_MSE: {test_mse:.2f}")
###Output
train_MSE: 0.03, test_MSE: 2.21
###Markdown
Since we only have one feature, it's easy to visually inspect the model.
###Code
# Figure size
plt.figure(figsize=(15,5))
# Plot train data
plt.subplot(1, 2, 1)
plt.title("Train")
plt.scatter(standardized_X_train, standardized_y_train, label='y_train')
plt.plot(standardized_X_train, pred_train, color='red', linewidth=1, linestyle='-', label='model')
plt.legend(loc='lower right')
# Plot test data
plt.subplot(1, 2, 2)
plt.title("Test")
plt.scatter(standardized_X_test, standardized_y_test, label='y_test')
plt.plot(standardized_X_test, pred_test, color='red', linewidth=1, linestyle='-', label='model')
plt.legend(loc='lower right')
# Show plots
plt.show()
###Output
_____no_output_____
###Markdown
Inference After training a model, we can use it to predict on new data.
###Code
# Feed in your own inputs
sample_indices = [10, 15, 25]
X_infer = np.array(sample_indices, dtype=np.float32)
standardized_X_infer = X_scaler.transform(X_infer.reshape(-1, 1))
###Output
_____no_output_____
###Markdown
Recall that we need to unstandardize our predictions.$ \hat{y}_{scaled} = \frac{\hat{y} - \mu_{\hat{y}}}{\sigma_{\hat{y}}} $$ \hat{y} = \hat{y}_{scaled} * \sigma_{\hat{y}} + \mu_{\hat{y}} $
###Code
# Unstandardize predictions
pred_infer = model.predict(standardized_X_infer) * np.sqrt(y_scaler.var_) + y_scaler.mean_
for i, index in enumerate(sample_indices):
print (f"{df.iloc[index]['y']:.2f} (actual) → {pred_infer[i][0]:.2f} (predicted)")
###Output
35.73 (actual) → 43.49 (predicted)
59.34 (actual) → 60.03 (predicted)
97.04 (actual) → 93.09 (predicted)
###Markdown
Interpretability Linear regression offers the great advantage of being highly interpretable. Each feature has a coefficient which signifies its importance/impact on the output variable y. We can interpret our coefficient as follows: by increasing X by 1 unit, we increase y by $W$ (~3.65) units. **Note**: Since we standardized our inputs and outputs for gradient descent, we need to apply an operation to our coefficients and intercept to interpret them. See proof in the `From scratch` section above.
###Code
# Unstandardize coefficients (proof is in the `From Scratch` section above)
W = model.layers[0].get_weights()[0][0][0]
b = model.layers[0].get_weights()[1][0]
W_unscaled = W * (y_scaler.scale_/X_scaler.scale_)
b_unscaled = b * y_scaler.scale_ + y_scaler.mean_ - np.sum(W_unscaled*X_scaler.mean_)
print ("[actual] y = 3.5X + noise")
print (f"[model] y_hat = {W_unscaled[0]:.1f}X + {b_unscaled[0]:.1f}")
###Output
[actual] y = 3.5X + noise
[model] y_hat = 3.3X + 10.4
###Markdown
Regularization Regularization helps decrease overfitting. Below is L2 regularization (ridge regression). There are many forms of regularization, but they all work to reduce overfitting in our models. With L2 regularization, we penalize weights with large magnitudes by decaying them. Weights with very high magnitudes create a preferential bias toward a few inputs, and we want the model to work with all the inputs rather than a select few. There are also other types of regularization, like L1 (lasso regression), which is useful for creating sparse models where some feature coefficients are zeroed out, or elastic net, which combines the L1 and L2 penalties. **Note**: Regularization is not just for linear regression. You can use it to regularize any model's weights, including the ones we will look at in future lessons. $ J(\theta) = \frac{1}{2}\sum_{i}(X_iW - y_i)^2 + \frac{\lambda}{2}W^TW$$ \frac{\partial{J}}{\partial{W}} = X (\hat{y} - y) + \lambda W $$W = W - \alpha\frac{\partial{J}}{\partial{W}}$* $\lambda$ is the regularization coefficient
###Code
from tensorflow.keras.regularizers import l2
L2_LAMBDA = 1e-2
# Linear model with L2 regularization
class LinearRegressionL2Regularization(Model):
def __init__(self, hidden_dim):
super(LinearRegressionL2Regularization, self).__init__()
self.fc1 = Dense(units=hidden_dim, activation='linear',
kernel_regularizer=l2(l=L2_LAMBDA))
def call(self, x_in, training=False):
"""Forward pass."""
y_pred = self.fc1(x_in)
return y_pred
def sample(self, input_shape):
x_in = Input(shape=input_shape)
return Model(inputs=x_in, outputs=self.call(x_in)).summary()
# Initialize the model
model = LinearRegressionL2Regularization(hidden_dim=HIDDEN_DIM)
model.sample(input_shape=(INPUT_DIM,))
# Compile
model.compile(optimizer=Adam(lr=LEARNING_RATE),
loss=MeanSquaredError(),
metrics=[MeanAbsolutePercentageError()])
# Training
model.fit(x=standardized_X_train,
y=standardized_y_train,
validation_data=(standardized_X_val, standardized_y_val),
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
shuffle=SHUFFLE,
verbose=1)
# Predictions
pred_train = model.predict(standardized_X_train)
pred_test = model.predict(standardized_X_test)
# Train and test MSE
train_mse = np.mean((standardized_y_train - pred_train) ** 2)
test_mse = np.mean((standardized_y_test - pred_test) ** 2)
print (f"train_MSE: {train_mse:.2f}, test_MSE: {test_mse:.2f}")
# Unstandardize coefficients (proof is in the `From Scratch` section above)
W = model.layers[0].get_weights()[0][0][0]
b = model.layers[0].get_weights()[1][0]
W_unscaled = W * (y_scaler.scale_/X_scaler.scale_)
b_unscaled = b * y_scaler.scale_ + y_scaler.mean_ - np.sum(W_unscaled*X_scaler.mean_)
print ("[actual] y = 3.5X + noise")
print (f"[model] y_hat = {W_unscaled[0]:.1f}X + {b_unscaled[0]:.1f}")
###Output
[actual] y = 3.5X + noise
[model] y_hat = 3.4X + 9.5
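###Markdown
To tie the Keras result back to the update equations above, here is a minimal NumPy sketch of the L2-penalized gradient step, in the spirit of the `From scratch` section. The toy data and the averaged gradient are purely illustrative; this is not the code that trained the model above.
###Code
import numpy as np

# Toy standardized data, illustrative only
X_toy = np.random.randn(35, 1).astype(np.float32)
y_toy = 3.5 * X_toy + 0.1 * np.random.randn(35, 1).astype(np.float32)

W_toy = np.zeros((1, 1), dtype=np.float32)
lr, lam = 0.1, 1e-2  # learning rate (alpha) and regularization coefficient (lambda)

for _ in range(100):
    y_hat = X_toy @ W_toy                                        # predictions
    dW = X_toy.T @ (y_hat - y_toy) / len(X_toy) + lam * W_toy    # gradient + L2 penalty term
    W_toy = W_toy - lr * dW                                      # W = W - alpha * dJ/dW

print(W_toy)  # should approach ~3.5 on this toy data
###Output
_____no_output_____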
###Markdown
Regularization didn't help much with this specific example because our data was generated from a simple linear equation plus noise, but for larger, realistic datasets regularization can help the model generalize well. Categorical variables In our example the feature was a continuous variable, but what if we also have features that are categorical? One option is to treat the categorical variables as one-hot encoded (dummy) variables. This is very easy to do with Pandas, and once you create the dummy variables you can use the same steps as above to train your linear model.
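For reference, the L1 and elastic net penalties mentioned above plug into the same layer definition by swapping the `kernel_regularizer`; a minimal sketch, with arbitrary (untuned) penalty strengths:
###Code
from tensorflow.keras.regularizers import l1, l1_l2

# Illustrative penalty strengths only
l1_fc = Dense(units=HIDDEN_DIM, activation='linear',
              kernel_regularizer=l1(l=1e-2))                     # lasso: can zero out weights
elastic_fc = Dense(units=HIDDEN_DIM, activation='linear',
                   kernel_regularizer=l1_l2(l1=1e-2, l2=1e-2))   # elastic net: L1 + L2
###Output
_____no_output_____
###Markdown
Creating the dummy variables with Pandas: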
###Code
# Create data with categorical features
cat_data = pd.DataFrame(['a', 'b', 'c', 'a'], columns=['favorite_letter'])
cat_data.head()
dummy_cat_data = pd.get_dummies(cat_data)
dummy_cat_data.head()
###Output
_____no_output_____ |
tutorials/W3D1_BayesianDecisions/student/W3D1_Outro.ipynb | ###Markdown
Outro **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** Video
###Code
# @markdown
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"eOEyDjhlHwA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
ReflectionsDon't forget to complete your reflections and content checks! Slides
###Code
# @markdown
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/cvbu7/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
[](https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W3D1_BayesianDecisions/student/W3D1_Outro.ipynb) Outro **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** Video
###Code
# @markdown
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"eOEyDjhlHwA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
ReflectionsDon't forget to complete your reflections and content checks! Slides
###Code
# @markdown
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/cvbu7/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
Outro **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** Video
###Code
# @markdown
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV16b4y167Xb", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"eOEyDjhlHwA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Daily surveyDon't forget to complete your reflections and content check in the daily survey! Please be patient after logging in as there is a small delay before you will be redirected to the survey. Slides
###Code
# @markdown
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/cvbu7/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
Outro **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** Video
###Code
# @markdown
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV16b4y167Xb", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"eOEyDjhlHwA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Daily surveyDon't forget to complete your reflections and content check in the daily survey! Please be patient after logging in as there is a small delay before you will be redirected to the survey. Slides
###Code
# @markdown
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/cvbu7/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____ |
Decorators and Generators.ipynb | ###Markdown
There wasn't much of a chapter on decorators. It was more of a pointer to Flask for us to venture out on our own. I started wondering if that MIT class had gotten far and if it was still supplying the steady stream of database, Python, and web knowledge. I thought I should download it all, but second-guessed myself. I need to focus on just one thing at a time to demolish it. Decorators are nothing more than a wrapper that modifies the behavior of a function without changing the original function itself.
###Code
def new_decorator(func):
def wrap_func():
print("Code would be here, before executing the func")
func()
print("Code here will execute after the func()")
return wrap_func
def func_needs_decorator():
print("This function is in need of a Decorator")
func_needs_decorator()
func_needs_decorator = new_decorator(func_needs_decorator)
func_needs_decorator()
###Output
Code would be here, before executing the func
This function is in need of a Decorator
Code here will execute after the func()
###Markdown
Flask does this sort of thing all the time with its @something syntax, which takes the place of the longer reassignment shown above. I'd love to get into that too, but all of these rabbit holes go deep. I need to focus on this one to make any sort of headway.
###Code
@new_decorator
def func_needs_decorator():
print("something a little different")
func_needs_decorator()
###Output
Code would be here, before executing the func
something a little different
Code here will execute after the func()
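###Markdown
A rough sketch of how an argument-taking decorator like Flask's @app.route works under the hood: a decorator factory takes the argument and returns the actual decorator. The registry below is just an illustration, not Flask's real implementation.
###Code
registry = {}

def route(path):
    """Decorator factory: takes an argument, returns the real decorator."""
    def decorator(func):
        registry[path] = func   # register the function under the given path
        return func             # hand the original function back unchanged
    return decorator

@route('/hello')
def say_hello():
    return 'hello there'

print(registry)
###Output
_____no_output_____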
###Markdown
So iterables like range(), map(), and filter() iterate for us automatically when used properly. A generator is much easier on our resources than a regular function because it yields each value just in time, as we ask for it, instead of building and returning the whole list at once. You can spot one by the yield keyword in place of return.
###Code
def square_it(n):
for num in range(n):
yield num**2
for i in square_it(20):
print(i)
###Output
0
1
4
9
16
25
36
49
64
81
100
121
144
169
196
225
256
289
324
361
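###Markdown
A quick way to see the "easier on our resources" point: compare the size of a full list of squares with the equivalent generator expression. Note that sys.getsizeof reports the container's own footprint, not the items inside it.
###Code
import sys

squares_list = [num ** 2 for num in range(100000)]   # builds every value up front
squares_gen = (num ** 2 for num in range(100000))    # produces values on demand

print(sys.getsizeof(squares_list))   # hundreds of kilobytes
print(sys.getsizeof(squares_gen))    # a tiny, constant-size object
###Output
_____no_output_____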
###Markdown
We can also use the built-in next() function to pull one value at a time from the generator, pausing it between calls almost like flipping a switch on and off.
###Code
def switch_it():
for i in range(10):
yield i
h = switch_it()
print(next(h))
print(next(h))
###Output
1
###Markdown
Finally, there is a function called iter() that takes any iterable, like a string, and turns it into an iterator so we can step through it with next() just like a generator.
###Code
s = 'hello'
# Iterate over the string directly - strings are iterable
for let in s:
    print(let)
next(s)  # raises TypeError: a string is iterable, but it is not an iterator
it_s = iter(s)  # wrap the string in an iterator
next(it_s)  # 'h'
next(it_s)  # 'e'
def gensquares(N):
for num in range(N):
yield num**2
for x in gensquares(10):
print(x)
import random
random.randint(1,10)
def rand_num(low,high,n):
for i in range(n):
yield random.randint(low, high)
for num in rand_num(1,10,12):
print(num)
###Output
7
7
10
9
2
10
5
4
7
9
5
10
|
Chapter06/RF_using_H2o.ipynb | ###Markdown
Installed Java first, since H2O is Java-based software for data modeling and general computing.
###Code
# Installing H2o
! pip install h2o
# import all the necessary lib
import h2o
import time
import seaborn as sns
import numpy as np
import pandas as pd
import seaborn
import matplotlib.pyplot as plt
%matplotlib inline
from h2o.estimators.random_forest import H2ORandomForestEstimator
from sklearn import metrics
###Output
_____no_output_____
###Markdown
Dataset Information This dataset contains information on default payments, demographic factors, credit data, history of payment, and bill statements of credit card clients in Taiwan from April 2005 to September 2005. ATTRIBUTES DESCRIPTION - There are 25 variables:
1. ID: ID of each client
2. LIMIT_BAL: Amount of given credit in NT dollars (includes individual and family/supplementary credit)
3. SEX: Gender (1=male, 2=female)
4. EDUCATION: (1=graduate school, 2=university, 3=high school, 4=others, 5=unknown, 6=unknown)
5. MARRIAGE: Marital status (1=married, 2=single, 3=others)
6. AGE: Age in years
7. PAY_0: Repayment status in September, 2005 (-1=pay duly, 1=payment delay for one month, 2=payment delay for two months, ... 8=payment delay for eight months, 9=payment delay for nine months and above)
8. PAY_2: Repayment status in August, 2005 (scale same as above)
9. PAY_3: Repayment status in July, 2005 (scale same as above)
10. PAY_4: Repayment status in June, 2005 (scale same as above)
11. PAY_5: Repayment status in May, 2005 (scale same as above)
12. PAY_6: Repayment status in April, 2005 (scale same as above)
13. BILL_AMT1: Amount of bill statement in September, 2005 (NT dollar)
14. BILL_AMT2: Amount of bill statement in August, 2005 (NT dollar)
15. BILL_AMT3: Amount of bill statement in July, 2005 (NT dollar)
16. BILL_AMT4: Amount of bill statement in June, 2005 (NT dollar)
17. BILL_AMT5: Amount of bill statement in May, 2005 (NT dollar)
18. BILL_AMT6: Amount of bill statement in April, 2005 (NT dollar)
19. PAY_AMT1: Amount of previous payment in September, 2005 (NT dollar)
20. PAY_AMT2: Amount of previous payment in August, 2005 (NT dollar)
21. PAY_AMT3: Amount of previous payment in July, 2005 (NT dollar)
22. PAY_AMT4: Amount of previous payment in June, 2005 (NT dollar)
23. PAY_AMT5: Amount of previous payment in May, 2005 (NT dollar)
24. PAY_AMT6: Amount of previous payment in April, 2005 (NT dollar)
25. default.payment.next.month: Default payment (1=yes, 0=no)
###Code
# Uploading the file to google colab
from google.colab import files
uploaded = files.upload()
#read the data
df_creditcarddata = pd.read_csv("UCI_Credit_Card.csv")
#Initialize H2o
h2o.init()
#changing data to a h2o frame
hf_creditcarddata = h2o.H2OFrame(df_creditcarddata)
hf_creditcarddata.head() #check if loaded properly
#check dimensions of the data
hf_creditcarddata.shape
hf_creditcarddata.describe() # summary of hf_creditcarddata
###Output
Rows:30000
Cols:25
###Markdown
There are 30,000 distinct credit card clients and there are 25 different features. There are no missing data in the whole dataset.
1. The average credit card limit is 167,484.
2. The largest number of clients have a marriage status of others, meaning unknown/divorced.
3. 35 years is the average age of the clients.
4. Education status is unknown for the largest number of clients.
5. About 22.1% of the credit card contracts will default by next month.
###Code
hf_creditcarddata.columns
###Output
_____no_output_____
###Markdown
**EXPLORE DEFAULTING**
###Code
hf_creditcarddata['default.payment.next.month'].table()
hf_creditcarddata['default.payment.next.month'].hist()
df_creditcarddata = df_creditcarddata.drop(["ID"], axis = 1) #not needed for exploration or for evaluation purpose
df_creditcarddata.head()
hf_creditcarddata = hf_creditcarddata.drop(["ID"], axis = 1) #not needed for exploration or for evaluation purpose
hf_creditcarddata.head()
###Output
_____no_output_____
###Markdown
**CHECKING DATA UNBALANCE**
###Code
print((df_creditcarddata["default.payment.next.month"].value_counts())*100/len(df_creditcarddata["default.payment.next.month"]))
#distribution
distribution = hf_creditcarddata[['AGE','BILL_AMT1','BILL_AMT2','BILL_AMT3','BILL_AMT4','BILL_AMT5','BILL_AMT6', 'LIMIT_BAL']]
for col in distribution.columns:
distribution[col].hist()
###Output
/Users/Dippies/anaconda3/lib/python3.6/site-packages/matplotlib/__init__.py:1710: MatplotlibDeprecationWarning: The *left* kwarg to `bar` is deprecated use *x* instead. Support for *left* will be removed in Matplotlib 3.0
return func(ax, *args, **kwargs)
###Markdown
- The largest group of credit limits is apparently around the amount of 1,000,000.00. Also, a high number of defaulters can be seen with credit limits of 200,000 or less.
- 35 years is the average age of the customers
###Code
# Find number of default by sex
cols = ["default.payment.next.month","SEX"]
S = hf_creditcarddata.group_by(by=cols).count(na ="all")
S.get_frame()
#default vs education
cols = ["default.payment.next.month","EDUCATION"]
E = hf_creditcarddata.group_by(by=cols).count(na ="all")
E.get_frame()
#default vs MARRIAGE
cols = ["default.payment.next.month","MARRIAGE"]
M = hf_creditcarddata.group_by(by=cols).count(na ="all")
M.get_frame()
#default vs limit bal
fig = sns.FacetGrid(df_creditcarddata, hue='default.payment.next.month', aspect=4)
fig.map(sns.kdeplot, 'LIMIT_BAL', shade=True)
oldest = df_creditcarddata['LIMIT_BAL'].max()
fig.set(xlim=(0,oldest))
fig.set(title='Distribution of credit limit balance by default.payment')
fig.add_legend()
features = [f for f in hf_creditcarddata.columns if f not in ['default.payment.next.month']]
i = 0
target_0 = hf_creditcarddata[hf_creditcarddata['default.payment.next.month'] == 0].as_data_frame()
target_1 = hf_creditcarddata[hf_creditcarddata['default.payment.next.month'] == 1].as_data_frame()
sns.set_style('whitegrid')
plt.figure()
fig, ax = plt.subplots(6,4,figsize=(16,24))
for feature in features:
i += 1
plt.subplot(6,4,i)
sns.kdeplot(target_0[feature], bw=0.5,label="Not default")
sns.kdeplot(target_1[feature], bw=0.5,label="Default")
plt.xlabel(feature, fontsize=12)
locs, labels = plt.xticks()
plt.tick_params(axis='both', which='major', labelsize=12)
plt.show();
# source : Kaggle
###Output
_____no_output_____
###Markdown
**GAUGE THE CORRELATION **
###Code
plt.figure(figsize=(10,10))
corr = hf_creditcarddata.cor().as_data_frame()  # correlation matrix of the H2O frame
corr.index = hf_creditcarddata.columns
sns.heatmap(corr, annot = True, cmap='RdYlGn', vmin=-1, vmax=1)
plt.title("Correlation Heatmap", fontsize=16)
plt.show()
###Output
_____no_output_____
###Markdown
* Amount of bill statement: the correlation between bill-statement amounts decreases as the months grow further apart; the lowest correlations are between September and April.
* Repayment status: here as well, the correlation decreases across months.
* Previous payments: similarly, there is little correlation between the amounts of previous payments.
**CORRELATION WITH THE RESPONSE VARIABLE**
###Code
df_creditcarddata.drop(['default.payment.next.month'], \
axis = 1).corrwith(df_creditcarddata['default.payment.next.month']).\
plot.bar(figsize=(20,10), \
title = 'Correlation with Response variable', \
fontsize = 15, rot = 45, grid = True)
###Output
_____no_output_____
###Markdown
**Last, check the history of:**
1. Past payment delays
2. Bill statement - credit/debit getting accrued
3. Payments performed in the previous month
###Code
import re
pattern = re.compile("^PAY_[0-9]+$")
payment_delay = [ x for x in df_creditcarddata.columns if (pattern.match(x))]
df_creditcarddata[payment_delay].describe().round(2)
pattern = re.compile("^BILL_AMT[0-9]+$")
bill_columns = [ x for x in df_creditcarddata.columns if (pattern.match(x))]
df_creditcarddata[bill_columns].describe().round(2)
pattern = re.compile("^PAY_AMT[0-9]+$")
pay_amount_columns = [ x for x in df_creditcarddata.columns if (pattern.match(x))]
df_creditcarddata[pay_amount_columns].describe().round(2)
###Output
_____no_output_____
###Markdown
**MODEL BUILDING** **H2O.ai package**
1. It handles categorical variables out of the box without having to do any encoding (make sure the variables are factors).
2. H2O is one of the few machine learning libraries that does not require the user to pre-process or one-hot-encode (aka "dummy-encode") the categorical variables. As long as the column type is "factor" (aka "enum") in your data frame, H2O knows what to do automatically.
3. In particular, H2O allows direct use of categorical variables in tree-based methods like Random Forest or GBM. Tree-based algorithms can use the categorical data natively, and typically this leads to better performance than one-hot encoding. If you want more control, you can set the type of automatic encoding using the categorical_encoding argument.
**RANDOM FOREST**
1. Random Forest can natively handle categoricals, which also brings a consequential memory reduction.
2. The default number of trees in an H2O Random Forest is 50, and you can set any number you want. Increasing the number of trees in an RF usually increases performance as well.
3. Random Forests are also fairly resistant to (although not free from) overfitting as the number of trees increases.
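As a sketch of what tuning those knobs could look like, the estimator below bumps the tree count, caps the depth, and pins the categorical encoding explicitly; the values are illustrative, not tuned for this dataset.
###Code
# Illustrative only - parameter values are not tuned for this dataset
RF_custom = H2ORandomForestEstimator(model_id='RF_custom',
                                     ntrees=200,                   # default is 50
                                     max_depth=20,                 # cap tree depth
                                     categorical_encoding='enum',  # keep native categorical handling
                                     seed=12345)
# RF_custom.train(x=predictors, y=target, training_frame=train)  # after the split below
###Output
_____no_output_____
###Markdown
First, check the column types, convert the categorical variables to factors, define the predictors and target, and split the data: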
###Code
hf_creditcarddata.types
# Making sure the categorical var are presented as factors
#'SEX', 'EDUCATION', 'MARRIAGE', and repayment status 'PAY_0...........PAY_6'
hf_creditcarddata['SEX'] = hf_creditcarddata['SEX'].asfactor()
hf_creditcarddata['EDUCATION'] = hf_creditcarddata['EDUCATION'].asfactor()
hf_creditcarddata['MARRIAGE'] = hf_creditcarddata['MARRIAGE'].asfactor()
hf_creditcarddata['PAY_0'] = hf_creditcarddata['PAY_0'].asfactor()
hf_creditcarddata['PAY_2'] = hf_creditcarddata['PAY_2'].asfactor()
hf_creditcarddata['PAY_3'] = hf_creditcarddata['PAY_3'].asfactor()
hf_creditcarddata['PAY_4'] = hf_creditcarddata['PAY_4'].asfactor()
hf_creditcarddata['PAY_5'] = hf_creditcarddata['PAY_5'].asfactor()
hf_creditcarddata['PAY_6'] = hf_creditcarddata['PAY_6'].asfactor()
hf_creditcarddata.types
#also, encode the binary response variable as a factor
hf_creditcarddata['default.payment.next.month'] = \
hf_creditcarddata['default.payment.next.month'].asfactor()
hf_creditcarddata['default.payment.next.month'].levels()
# Define features (or predictors) manually # removing 'ID' and target var
predictors = ['LIMIT_BAL','SEX','EDUCATION','MARRIAGE','AGE','PAY_0','PAY_2','PAY_3','PAY_4','PAY_5','PAY_6','BILL_AMT1','BILL_AMT2','BILL_AMT3','BILL_AMT4','BILL_AMT5','BILL_AMT6','PAY_AMT1','PAY_AMT2','PAY_AMT3','PAY_AMT4','PAY_AMT5','PAY_AMT6']
target = 'default.payment.next.month'
# Split the H2O data frame into training/test sets
# so we can evaluate out-of-bag performance
# using 80% for training
# using the rest 20% for out-of-bag evaluation
splits = hf_creditcarddata.split_frame(ratios=[0.7], seed=12345) # using 70% for training
train = splits[0]
test = splits[1] # using the rest 20% for out-of-bag evaluation
###Output
_____no_output_____
###Markdown
**STEP 1 - RF WITH DEFAULT SETTINGS**
###Code
# Build a RF model with default settings
# Import the function for RF
from h2o.estimators.random_forest import H2ORandomForestEstimator
RF_D = H2ORandomForestEstimator(model_id = 'RF_D',seed = 12345)
# Use .train() to build the model
RF_D.train(x = predictors, y = target,training_frame = train)
RF_D.summary()
print(RF_D.model_performance(train))
###Output
ModelMetricsBinomial: drf
** Reported on test data. **
MSE: 0.055915275627163016
RMSE: 0.23646411065352607
LogLoss: 0.2141946437091143
Mean Per-Class Error: 0.06520487248550844
AUC: 0.9856742394659596
Gini: 0.9713484789319191
Confusion Matrix (Act/Pred) for max f1 @ threshold = 0.3041384004450898:
###Markdown
* Log loss quantifies the accuracy of a classifier: the lower the log loss, the better the predictions. Here it is about 0.21.
* AUC ranges in value from 0 to 1. A model whose predictions are 100% wrong has an AUC of 0.0; one whose predictions are 100% correct has an AUC of 1.0. In this case it is 0.98, i.e. 98%.
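The same metrics can be recomputed by hand with the sklearn metrics module imported earlier; a sketch for the held-out frame, assuming H2O's predict() output exposes the usual `p1` probability column:
###Code
# Sketch: recompute log loss and AUC with sklearn on the held-out set
preds = RF_D.predict(test).as_data_frame()                   # columns: predict, p0, p1
y_true = test[target].as_data_frame()[target].astype(int)    # actual labels
print("log loss:", metrics.log_loss(y_true, preds['p1']))
print("AUC:", metrics.roc_auc_score(y_true, preds['p1']))
###Output
_____no_output_____
###Markdown
H2O also reports these metrics directly on the held-out frame: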
###Code
print(RF_D.model_performance(test))
###Output
ModelMetricsBinomial: drf
** Reported on test data. **
MSE: 0.13916702510839007
RMSE: 0.3730509685128697
LogLoss: 0.4404003566729752
Mean Per-Class Error: 0.3021721562099693
AUC: 0.7689912083007103
pr_auc: 0.5367690443947648
Gini: 0.5379824166014207
Confusion Matrix (Act/Pred) for max f1 @ threshold = 0.31553293049335485:
|
lab/lab11/.ipynb_checkpoints/student_facing-checkpoint.ipynb | ###Markdown
POLISCI 88 FA 21 Lab : Who Punishes Extremist Nominees? Due Date: X Candidate Ideology and Turning Out the Base in US Elections The behavioral literature in American politics suggests that voters are not informed enough, and are too partisan, to be swing voters, while the institutional literature suggests that moderate candidates tend to perform better. We speak to this debate by examining the link between the ideology of congressional candidates and the turnout of their parties’ bases in US House races, 2006–2014. We will reproduce results from [this](https://www.cambridge.org/core/journals/american-political-science-review/article/who-punishes-extremist-nominees-candidate-ideology-and-turning-out-the-base-in-us-elections/366A518712BE9BCC1CB035BF53095D65) Hall and Thompson paper from the American Political Science Review. Combining a regression discontinuity design in close primary races with survey and administrative data on individual voter turnout, we will look at how extremist nominees affect their party’s share of turnout in the general election. Run the following cell to import the libraries we will explore in this lab assignment.
###Code
import pandas as pd
import numpy as np
from scipy import stats
import statsmodels.formula.api as smf
import seaborn as sns
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
1. Reading in the Stata File Read in the Stata dta file `rd_analysis_hs.dta` into a pandas dataframe named `df`. You might find looking at [the documentation](https://pandas.pydata.org/docs/reference/api/pandas.read_stata.html) helpful.
###Code
#Bring in the main analysis dataset
df = pd.read_stata("hall_thompson_replication_files/rd_analysis_hs.dta")
###Output
_____no_output_____
###Markdown
Now that you've read in the dta file let's try some `pd.DataFrame` methods. Use the `DataFrame.head()` method to show the first few lines of the dataframe. To print out a list of all the column names in the dataframe use the `DataFrame.columns.values` method.
###Code
df.head()
df.columns.values  # print a list of all the column names in the dataframe
###Output
_____no_output_____
###Markdown
2. Regression Discontinuity Design Specifically, we will be replicating Tables 1 and 2 and Figure 2 from the Hall and Thompson paper. Question 2.1 Store the cutoff for the minimum distance between the moderate and extremist candidates necessary to enter the sample in the variable `cutoff`. Set `cutoff` equal to the median value of the "absdist" column from the dataframe. Hint: Each column of the `df` table is made up of an array. As a reminder, you can access the arrays that make up the column by writing the name of the data frame followed by the variable name in square brackets as a string.
###Code
cutoff = df["absdist"].median()
cutoff
###Output
_____no_output_____
###Markdown
Question 2.2 Let's filter the dataframe to only contain values that are relevant for our regression. We will do this by setting conditions on our original dataframe `df`.
###Code
df2 = df[np.abs(df.rv)<.1]
df3 = df[df.absdist>cutoff][["vote_general", "victory_general", "turnout_party_share", "treat", "rv", "rv2", "rv3", "rv4", "rv5", "g", "dist"]]
df4 = df[(np.abs(df.rv)<.1) & (df.absdist>cutoff)]
###Output
_____no_output_____
###Markdown
Question 2.3We want to create [deep copies](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.copy.html) of the filtered dataframes. Create new variables for each deep copy and drop all NaN values.
###Code
df_cp = df.copy(deep=True).dropna()
df2_cp = df2.copy(deep=True).dropna()
df3_cp = df3.copy(deep=True).dropna()
df4_cp = df4.copy(deep=True).dropna()
###Output
_____no_output_____
###Markdown
3. Hall Estimates on Vote Share and Electoral VictoryWe first replicate the Hall estimates on vote share. We do this replication because our subsequent analyses will be using a smaller dataset that only includes election years since the beginning of the Cooperative Congressional Election Study (CCES) in 2006.We will also refer back to these vote-share estimates later in interpreting the turnout estimates. Table 1 will present these estimates. Question 3.1: TABLE 1. Effect of Extremist Nominee on Party’s General-Election Vote Share and Victory, US House, 2006–2014In the first column, we use a local linear ordinary least squares (OLS) estimated separately on each side of the discontinuity, using only observations where the primary winner won by ten percentage points or less. In the second column, we use all the data with a third-order polynomial of the running variable. In the third column, we increase this polynomial to fifth-order. In the fourth column, we use the automated bandwidth selection and kernel estimation from Calonico, Cattaneo, and Titiunik (2014).Hint: Refer back to the filtered dataframes we created at the beginning of the lab assignment. Which ones contain data that is relevant for each of the columns needed to create Table 1?
###Code
#Table1: estimates on vote share and victory RD
vote_share = pd.DataFrame()
vote_share["c1"] = smf.ols(formula='vote_general ~ treat + rv + treat_rv', data=df4).fit(cov_type = 'cluster', cov_kwds={'groups': df4['g']}).params
vote_share["c2"] = smf.ols(formula='vote_general ~ treat + rv + rv2 + rv3', data=df3).fit(cov_type = 'cluster', cov_kwds={'groups': df3['g']}).params
vote_share["c3"] = smf.ols(formula='vote_general ~ treat + rv + rv2 + rv3 + rv4 + rv5', data=df3).fit(cov_type = 'cluster', cov_kwds={'groups': df3['g']}).params
vote_share["c4"] = smf.ols(formula='vote_general ~ rv', data=df3).fit(cov_type = 'cluster', cov_kwds={'groups': df3['g']}).params
vote_share
###Output
_____no_output_____
###Markdown
Question 3.2: TABLE 1. (cont.) These first four columns are all on general-election vote share. The final four columns replicate these specifications for electoral victory (an indicator) instead of vote share.
###Code
victory = pd.DataFrame()
victory["c5"] = smf.ols(formula='victory_general ~ treat + rv + treat_rv', data=df4).fit(cov_type = 'cluster', cov_kwds={'groups': df4['g']}).params
victory["c6"] = smf.ols(formula='victory_general ~ treat + rv + rv2 + rv3', data=df3).fit(cov_type = 'cluster', cov_kwds={'groups': df3['g']}).params
victory["c7"] = smf.ols(formula='victory_general ~ treat + rv + rv2 + rv3 + rv4 + rv5', data=df3).fit(cov_type = 'cluster', cov_kwds={'groups': df3['g']}).params
victory["c8"] = smf.ols(formula='victory_general ~ rv', data=df3).fit(cov_type = 'cluster', cov_kwds={'groups': df3['g']}).params
victory
###Output
_____no_output_____
###Markdown
4. Effects on Partisian TurnoutHaving documented the effect of extremist nominees on vote share, we now examine their effect on partisan turnout. In this section we will study the RD graphically. Question 4.1: FIGURE 2. The Effect of Extremist Nominees on Validated Partisan Turnout in the General Election. US House, 2006–2014. When a party nominates an extremist in its primary, general-election turnout skews towards the opposing party. Define a function `rd` to plot Figure 2 that takes in two paramters, a dataframe `data` and a mid value `mid`. The function should plot binned averages of the extremist candidate’s winning margin in each primary race, on the horizontal axis, against the general-election vote share of the primary winner, on the vertical axis. When the winning margin is above 0, to the right of the vertical line in the plot, the extremist candidate from among the top two primary candidates wins the race, and the party fields an extremist in the general election. When the winning margin is below 0, to the left of the vertical line in the plot, the moderate wins and stands in the general election instead. Hint: Plot the data and regression model fits across a FacetGrid using [`sns.lmplot(...)`](https://seaborn.pydata.org/generated/seaborn.lmplot.html). It is intended as a convenient interface to fit regression models across conditional subsets of a dataset. Think about how many calls you will need to make to `sns.lmplot(...)`.
###Code
#Figure2: binned averages of extremist candidate's win margin in each primary (x) against general election vote share of primary winner (y)
def rd(data, mid):
sns.set_style("whitegrid")
g=sns.lmplot(x="rv", y="turnout_party_share", data=data[data.rv<mid], x_bins=10)
plt.axvline(color='r')
ax=sns.lmplot(x="rv", y="turnout_party_share", data=data[data.rv>mid], x_bins=10)
g.set(xlabel="Extremist Win Margin in Primary", ylabel="Party's Share of General-Election Turnout")
g.set(xlim=(-0.2, 0.2), ylim=(0.40, 0.60), xticks=[-0.2, -0.1, 0.0, 0.1, 0.2], yticks=[0.40, 0.45, 0.50, 0.55, 0.60]) #.fig.subplots_adjust(wspace=0.0)
ax.set(xlabel="Extremist Win Margin in Primary", ylabel="Party's Share of General-Election Turnout")
ax.set(xlim=(-0.2, 0.2), ylim=(0.40, 0.60), xticks=[-0.2, -0.1, 0.0, 0.1, 0.2], yticks=[0.40, 0.45, 0.50, 0.55, 0.60]) #.fig.subplots_adjust(wspace=0.0)
plt.text(0.12, 0.590, "N = 171", fontsize=9)
plt.text(0.12, 0.585, "Bin Size = 10", fontsize=9)
plt.show()
rd(data=df, mid=0)
###Output
_____no_output_____
###Markdown
5. Formal Estimates Question 5.1: TABLE 2. Effect of Extremist Nominee on Party’s General-Election Turnout, US House, 2006–2014Table 2 presents the formal estimates, using the same specifications as in the vote share RD. Hint: It might help to refer back to Question 3.1. Think about what part (or column) of the formula will change for this problem.
###Code
#Table2: formal estimates using same specifications as vote share RD
partisan_share_of_turnout = pd.DataFrame()
partisan_share_of_turnout["c1"] = smf.ols(formula="turnout_party_share ~ treat + rv + treat_rv", data=df4_cp).fit(cov_type = 'cluster', cov_kwds={'groups': df4_cp['g']}).params
partisan_share_of_turnout["c2"] = smf.ols(formula='turnout_party_share ~ treat + rv + rv2 + rv3', data=df3_cp).fit(cov_type = 'cluster', cov_kwds={'groups': df3_cp['g']}).params
partisan_share_of_turnout["c3"] = smf.ols(formula='turnout_party_share ~ treat + rv + rv2 + rv3 + rv4 + rv5', data=df3_cp).fit(cov_type = 'cluster', cov_kwds={'groups': df3_cp['g']}).params
partisan_share_of_turnout["c4"] = smf.ols(formula='turnout_party_share ~ rv', data=df3_cp).fit(cov_type = 'cluster', cov_kwds={'groups': df3_cp['g']}).params
partisan_share_of_turnout
###Output
_____no_output_____ |
docs/tutorials/baseline development documentation/(baseline development) PV Installations - Global.ipynb | ###Markdown
This journal documents the manipulation of PV installation data for Global installs. This covers selection of data, and weighting by marketshare.
###Code
cwd = os.getcwd() #grabs current working directory
df_installs_raw = pd.read_csv(cwd+"/../../PV_ICE/baselines/SupportingMaterial/PVInstalls_World_AllSources.csv", index_col='Year')
sources = df_installs_raw.columns
print(len(sources))
plt.plot(df_installs_raw.index,df_installs_raw[sources[0]],lw=4,marker='*',label=sources[0])
plt.plot(df_installs_raw.index,df_installs_raw[sources[1]],lw=3,marker='o',label=sources[1])
plt.plot(df_installs_raw.index,df_installs_raw[sources[2]],lw=2,marker='o',label=sources[2])
plt.plot(df_installs_raw.index,df_installs_raw[sources[3]],lw=2,marker='o',label=sources[3])
plt.yscale('log')
plt.ylabel('PV Installed (MW)')
plt.legend(bbox_to_anchor=(0, 1, 1, 0), loc="lower left")
#plt.plot(df_installs_raw, marker='o')
###Output
_____no_output_____
###Markdown
Select the data to use for installs Based on the above graph, we will utilize Goetzberger data through 2000, then IRENA online query tool after 2000.
###Code
#Before 2000 = Goetz
installs_old = df_installs_raw.loc[(df_installs_raw.index<=2000) & (df_installs_raw.index>=1995)]
installs_old_Goetz = pd.DataFrame(installs_old[sources[3]])
installs_old_Goetz.columns = ['installed_pv_MW']
#After 2000 = IRENA
installs_recent = df_installs_raw.loc[(df_installs_raw.index>2000) & (df_installs_raw.index<2020)]
installs_recent_IRENA = pd.DataFrame(installs_recent[sources[0]])
installs_recent_IRENA.columns = ['installed_pv_MW']
#print(installs_recent_IRENA)
###Output
_____no_output_____
###Markdown
Collect the installation data together into a single df
###Code
installs = pd.concat([installs_old_Goetz,installs_recent_IRENA])
plt.plot(installs)
plt.yscale('log')
plt.title('Installations of PV Globally (MW) since 1995')
###Output
_____no_output_____
###Markdown
Marketshare weight the installation data for percent of Silicon vs Thin Film In addition to compiling a single installation record for 1995 through the present, this data is total cumulative, but the tool is currently considering crystalline silicon technology only (i.e. mono and multi, but not ribbon or amorphous).
###Code
cwd = os.getcwd() #grabs current working directory
df_raw_mrktshr_siVtf = pd.read_csv(cwd+"/../../PV_ICE/baselines/SupportingMaterial/MarketShare_global_c-SiVSthinfilm.csv", index_col='Year')
refs = df_raw_mrktshr_siVtf.columns
print(len(refs))
plt.plot(df_raw_mrktshr_siVtf.index,df_raw_mrktshr_siVtf[refs[0]],marker='o',label=refs[0])
plt.plot(df_raw_mrktshr_siVtf.index,df_raw_mrktshr_siVtf[refs[1]],marker='o',label=refs[1])
plt.plot(df_raw_mrktshr_siVtf.index,df_raw_mrktshr_siVtf[refs[2]],marker='o',label=refs[2])
plt.plot(df_raw_mrktshr_siVtf.index,df_raw_mrktshr_siVtf[refs[3]],marker='o',label=refs[3])
plt.legend(bbox_to_anchor=(0, 1, 1, 0), loc="lower left")
plt.ylim(0,100)
###Output
_____no_output_____
###Markdown
The 2020 Fraunhofer and 2014 Fraunhofer appear to agree reasonably closely, and Mints agrees closely for the period of overlap. The unknown-sourced Wikipedia figure doesn't agree until 2010, but given the unknown source, it will be discarded. We will use the Fraunhofer ISE 2020 market share data for the entire time period.
###Code
df_mrktshr_global = pd.DataFrame(df_raw_mrktshr_siVtf[refs[2]])
mrktshr_global = df_mrktshr_global.loc[(df_mrktshr_global.index>=1995) & (df_mrktshr_global.index<2020)]
mrktshr_global.columns = ['Global_MarketShare']
#print(mrktshr_global)
#convert to decimal
mrktshr_global_pct = mrktshr_global/100
plt.plot(mrktshr_global_pct)
plt.title('Global Marketshare of Silicon PV installed since 1995')
plt.ylim(0,1.1)
###Output
_____no_output_____
###Markdown
Marketshare weight PV installs by percent SiliconNow we have a marketshare percentage of silicon for 1995 through 2019. We will multiply the PV installs by this silicon marketshare to get the MW of silicon PV installed globally since 1995.
###Code
#put the two dataframes together, joining for available data (excludes NANs)
dfs = [installs,mrktshr_global_pct]
df = pd.concat(dfs, axis=1, join='inner')
df_clean = df.dropna()
#creates the marketshare weighted c-Si install data
world_si_installs = df_clean.agg('prod', axis='columns')
#print(us_si_installs)
plt.rcParams.update({'font.size': 18})
plt.rcParams['figure.figsize'] = (15, 8)
plt.plot(installs, label='All Global PV Installed', color='orange')
plt.plot(world_si_installs, label='Silicon PV Installed, World', color='blue')
plt.yscale('log')
plt.title('Silicon PV Installations (MW) Globally, 1995 through 2019')
plt.legend()
world_si_installs.to_csv(cwd+'/../../PV_ICE/baselines/SupportingMaterial/output_Global_SiPV_installs.csv', index=True)
###Output
_____no_output_____
###Markdown
This journal documents the manipulation of PV installation data for Global installs. This covers selection of data, and weighting by marketshare.
###Code
cwd = os.getcwd() #grabs current working directory
df_installs_raw = pd.read_csv(cwd+"/../../PV_ICE/baselines/SupportingMaterial/PVInstalls_World_AllSources.csv", index_col='Year')
sources = df_installs_raw.columns
print(len(sources))
plt.plot(df_installs_raw.index,df_installs_raw[sources[0]],lw=4,marker='*',label=sources[0])
plt.plot(df_installs_raw.index,df_installs_raw[sources[1]],lw=3,marker='o',label=sources[1])
plt.plot(df_installs_raw.index,df_installs_raw[sources[2]],lw=2,marker='o',label=sources[2])
plt.plot(df_installs_raw.index,df_installs_raw[sources[3]],lw=2,marker='o',label=sources[3])
plt.yscale('log')
plt.ylabel('PV Installed (MW)')
plt.legend(bbox_to_anchor=(0, 1, 1, 0), loc="lower left")
#plt.plot(df_installs_raw, marker='o')
###Output
_____no_output_____
###Markdown
Select the data to use for installs Based on the above graph, we will utilize Goetzberger data through 2000, then IRENA online query tool after 2000.
###Code
#Before 2000 = Goetz
installs_old = df_installs_raw.loc[(df_installs_raw.index<=2000) & (df_installs_raw.index>=1995)]
installs_old_Goetz = pd.DataFrame(installs_old[sources[3]])
installs_old_Goetz.columns = ['installed_pv_MW']
#After 2000 = IRENA
installs_recent = df_installs_raw.loc[(df_installs_raw.index>2000) & (df_installs_raw.index<2020)]
installs_recent_IRENA = pd.DataFrame(installs_recent[sources[0]])
installs_recent_IRENA.columns = ['installed_pv_MW']
#print(installs_recent_IRENA)
###Output
_____no_output_____
###Markdown
Collect the installation data together into a single df
###Code
installs = pd.concat([installs_old_Goetz,installs_recent_IRENA])
plt.plot(installs)
plt.yscale('log')
plt.title('Installations of PV Globally (MW) since 1995')
###Output
_____no_output_____
###Markdown
Marketshare weight the installation data for percent of Silicon vs Thin Film In addition to compiling a single installation record for 1995 through the present, this data is total cumulative, but the tool it currently considering crystalline silicon technology only (i.e. mono and multi, but not ribbon or amorphous).
###Code
cwd = os.getcwd() #grabs current working directory
df_raw_mrktshr_siVtf = pd.read_csv(cwd+"/../../PV_ICE/baselines/SupportingMaterial/MarketShare_global_c-SiVSthinfilm.csv", index_col='Year')
refs = df_raw_mrktshr_siVtf.columns
print(len(refs))
plt.plot(df_raw_mrktshr_siVtf.index,df_raw_mrktshr_siVtf[refs[0]],marker='o',label=refs[0])
plt.plot(df_raw_mrktshr_siVtf.index,df_raw_mrktshr_siVtf[refs[1]],marker='o',label=refs[1])
plt.plot(df_raw_mrktshr_siVtf.index,df_raw_mrktshr_siVtf[refs[2]],marker='o',label=refs[2])
plt.plot(df_raw_mrktshr_siVtf.index,df_raw_mrktshr_siVtf[refs[3]],marker='o',label=refs[3])
plt.legend(bbox_to_anchor=(0, 1, 1, 0), loc="lower left")
plt.ylim(0,100)
###Output
_____no_output_____
###Markdown
The 2020 Fraunhofer and 2014 Fraunhofer appear to agree reasonably closely, and Mints agrees closely for the amount of time there is overlap. The unknown sourced wikipedia figure doesn't agree until 2010, but given the unknown source, it will be discarded. We will use the Fraunhofer ISE 2020 market share data for the entire time period.
###Code
df_mrktshr_global = pd.DataFrame(df_raw_mrktshr_siVtf[refs[2]])
mrktshr_global = df_mrktshr_global.loc[(df_mrktshr_global.index>=1995) & (df_mrktshr_global.index<2020)]
mrktshr_global.columns = ['Global_MarketShare']
#print(mrktshr_global)
#convert to decimal
mrktshr_global_pct = mrktshr_global/100
plt.plot(mrktshr_global_pct)
plt.title('Global Marketshare of Silicon PV installed since 1995')
plt.ylim(0,1.1)
###Output
_____no_output_____
###Markdown
Marketshare weight PV installs by percent silicon. Now we have a marketshare percentage of silicon for 1995 through 2019. We will multiply the PV installs by this silicon marketshare to get the MW of silicon PV installed globally since 1995.
###Code
#put the two dataframes together, joining for available data (excludes NANs)
dfs = [installs,mrktshr_global_pct]
df = pd.concat(dfs, axis=1, join='inner')
df_clean = df.dropna()
#creates the marketshare weighted c-Si install data
world_si_installs = df_clean.agg('prod', axis='columns')
#print(world_si_installs)
plt.rcParams.update({'font.size': 18})
plt.rcParams['figure.figsize'] = (15, 8)
plt.plot(installs, label='All Global PV Installed', color='orange')
plt.plot(world_si_installs, label='Silicon PV Installed, World', color='blue')
plt.yscale('log')
plt.title('Silicon PV Installations (MW) Globally, 1995 through 2019')
plt.legend()
world_si_installs.to_csv(cwd+'/../../PV_ICE/baselines/SupportingMaterial/output_Global_SiPV_installs.csv', index=True)
###Output
_____no_output_____ |
auto_trainer/auto_trainer.ipynb | ###Markdown
MLRun Auto-Trainer Tutorial This notebook shows how to use the handlers of MLRun's Auto-trainer. The available handlers are: `train`, `evaluate`, and `predict`. All you need is an **ML model type** and a **dataset**.
###Code
import mlrun
mlrun.get_or_create_project('auto-trainer', context="./", user_project=True)
###Output
_____no_output_____
###Markdown
**Fetching a Dataset** To generate the dataset we used the "gen_class_data" function from the hub, which wraps scikit-learn's [make_classification](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_classification.htmlsklearn-datasets-make-classification). See the link for a description of all parameters.
###Code
DATASET_URL = 'https://s3.wasabisys.com/iguazio/data/function-marketplace-data/xgb_trainer/classifier-data.csv'
mlrun.get_dataitem(DATASET_URL).show()
###Output
_____no_output_____
###Markdown
**Importing the ML handler functions from the Marketplace**
###Code
auto_trainer = mlrun.import_function("function.yaml")
###Output
_____no_output_____
###Markdown
**Training a model** Choosing the `train` handler. Define task parameters: * Class parameters should contain the prefix `CLASS_` * Fit parameters should contain the prefix `FIT_` * Predict parameters should contain the prefix `PREDICT_`
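As a hedged illustration of this prefix convention (the extra parameter names below are hypothetical examples, not taken from this notebook), a parameter dict mixing constructor and fit arguments might look like:

```python
additional_parameters = {
    "CLASS_max_depth": 8,        # forwarded to the model class constructor
    "CLASS_n_estimators": 200,   # hypothetical example: another constructor argument
    "FIT_sample_weight": None,   # hypothetical example: forwarded to model.fit(...)
}
```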
###Code
model_class = "sklearn.ensemble.RandomForestClassifier"
additional_parameters = {
"CLASS_max_depth": 8,
}
###Output
_____no_output_____
###Markdown
Running the Training job with the "train" handler
###Code
train_run = auto_trainer.run(
inputs={"dataset": DATASET_URL},
params = {
"model_class": model_class,
"drop_columns": ["feat_0", "feat_2"],
"train_test_split_size": 0.2,
"random_state": 42,
"label_columns": "labels",
"model_name": 'MyModel',
**additional_parameters
},
handler='train',
local=True
)
###Output
_____no_output_____
###Markdown
The result of the train run
###Code
train_run.outputs
train_run.artifact('confusion-matrix').show()
###Output
_____no_output_____
###Markdown
Getting the model for evaluating and predicting
###Code
model_path = train_run.outputs['model']
###Output
_____no_output_____
###Markdown
**Evaluating a model** Choosing the `evaluate` handler
###Code
evaluate_run = auto_trainer.run(
inputs={"dataset": train_run.outputs['test_set']},
params={
"model": model_path,
"drop_columns": ["feat_0", "feat_2"], # Not actually necessary on the test set (already done in the previous step)
"label_columns": "labels",
},
handler="evaluate",
local=True,
)
###Output
_____no_output_____
###Markdown
The result of the evaluate run
###Code
evaluate_run.outputs
###Output
_____no_output_____
###Markdown
**Making a prediction** Choosing the `predict` handler
###Code
predict_run = auto_trainer.run(
inputs={"dataset": DATASET_URL},
params={
"model": model_path,
"drop_columns": ["feat_0", "feat_2"], # Not actually necessary on the test set (already done in the previous step)
"label_columns": "labels",
},
handler="predict",
local=True,
)
###Output
_____no_output_____
###Markdown
Showing the prediction results
###Code
predict_run.outputs
predict_run.artifact('prediction').show()
###Output
_____no_output_____
###Markdown
MLRun Auto-Trainer Tutorial This notebook shows how to use the handlers of MLRun's Auto-trainer. The available handlers are: `train`, `evaluate`, and `predict`. All you need is an **ML model type** and a **dataset**.
###Code
import mlrun
mlrun.get_or_create_project('auto-trainer', context="./", user_project=True)
###Output
_____no_output_____
###Markdown
**Fetching a Dataset** To generate the dataset we used the "gen_class_data" function from the hub, which wraps scikit-learn's [make_classification](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_classification.htmlsklearn-datasets-make-classification). See the link for a description of all parameters.
###Code
DATASET_URL = 'https://s3.wasabisys.com/iguazio/data/function-marketplace-data/xgb_trainer/classifier-data.csv'
mlrun.get_dataitem(DATASET_URL).show()
###Output
_____no_output_____
###Markdown
**Importing the ML handler functions from the Marketplace**
###Code
auto_trainer = mlrun.import_function("hub://auto_trainer")
###Output
_____no_output_____
###Markdown
**Training a model** Choosing the `train` handler. Define task parameters: * Class parameters should contain the prefix `CLASS_` * Fit parameters should contain the prefix `FIT_` * Predict parameters should contain the prefix `PREDICT_`
###Code
model_class = "sklearn.ensemble.RandomForestClassifier"
additional_parameters = {
"CLASS_max_depth": 8,
}
###Output
_____no_output_____
###Markdown
Running the Training job with the "train" handler
###Code
train_run = auto_trainer.run(
inputs={"dataset": DATASET_URL},
params = {
"model_class": model_class,
"drop_columns": ["feat_0", "feat_2"],
"train_test_split_size": 0.2,
"random_state": 42,
"label_columns": "labels",
"model_name": 'MyModel',
**additional_parameters
},
handler='train',
local=True
)
###Output
_____no_output_____
###Markdown
The result of the train run
###Code
train_run.outputs
train_run.artifact('confusion-matrix').show()
###Output
_____no_output_____
###Markdown
Getting the model for evaluating and predicting
###Code
model_path = train_run.outputs['model']
###Output
_____no_output_____
###Markdown
**Evaluating a model** Choosing the `evaluate` handler
###Code
evaluate_run = auto_trainer.run(
inputs={"dataset": train_run.outputs['test_set']},
params={
"model": model_path,
"drop_columns": ["feat_0", "feat_2"], # Not actually necessary on the test set (already done in the previous step)
"label_columns": "labels",
},
handler="evaluate",
local=True,
)
###Output
_____no_output_____
###Markdown
The result of the evaluate run
###Code
evaluate_run.outputs
###Output
_____no_output_____
###Markdown
**Making a prediction** Choosing the `predict` handler
###Code
predict_run = auto_trainer.run(
inputs={"dataset": DATASET_URL},
params={
"model": model_path,
"drop_columns": ["feat_0", "feat_2"], # Not actually necessary on the test set (already done in the previous step)
"label_columns": "labels",
},
handler="predict",
local=True,
)
###Output
_____no_output_____
###Markdown
Showing the prediction results
###Code
predict_run.outputs
predict_run.artifact('prediction').show()
###Output
_____no_output_____ |
Curso Pandas/Base de Dados.ipynb | ###Markdown
Analysis Report I Importing the Dataset
###Code
import pandas as pd
# Importing
pd.read_csv('dados/aluguel.csv', sep = ';')
dados = pd.read_csv('dados/aluguel.csv', sep=';')
dados
type(dados)
dados.info()
dados.head()
dados.head(10)
###Output
_____no_output_____
###Markdown
General Information about the Dataset
###Code
dados.dtypes
pd.DataFrame(dados.dtypes)
pd.DataFrame(dados.dtypes, columns = ['Tipos de Dados'])
tipos_de_dados = pd.DataFrame(dados.dtypes, columns = ['Tipos de Dados'])
tipos_de_dados.columns.name = 'Variáveis'
tipos_de_dados
dados.shape
dados.shape[0]
dados.shape[1]
print(f'A base de dados apresenta {dados.shape[0]} registros(imóveis) e {dados.shape[1]} variáveis.')
###Output
A base de dados apresenta 32960 registros(imóveis) e 9 variáveis.
###Markdown
Analysis Report I Importing the Dataset
###Code
import pandas as pd # imports the pandas library - the community shortens it to pd
# importing
pd.read_csv('dados/aluguel.csv', sep = ';')
dados = pd.read_csv('dados/aluguel.csv', sep = ';') # stores the dataframe in a variable
dados # reads the variable
type(dados) # reads the variable's type
dados.info() # shows information about the dados variable
dados
dados.head(10) # shows the first 10 rows of the variable
dados
###Output
_____no_output_____
###Markdown
General Information about the Dataset
###Code
dados.dtypes # shows the variable types
pd.DataFrame(dados.dtypes) # converts dtypes into a DataFrame
pd.DataFrame(dados.dtypes, columns = ['Tipos de Dados']) # shows the information as a dataframe
tipos_de_dado = pd.DataFrame(dados.dtypes, columns = ['Tipos de Dados']) # stores this information in a variable
tipos_de_dado.columns.name = 'Variáveis' # names the column index
tipos_de_dado # reads the variable
dados.shape # command that reads the number of rows and the number of variables
print(f'A base de dado apresenta {dados.shape[0]} registros e {dados.shape[1]} variáveis.')
###Output
A base de dado apresenta 32960 registros e 9 variáveis.
|
filters_are_easy.ipynb | ###Markdown
Filters are easy
###Code
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
from scipy.ndimage import convolve
# loading image
img = mpimg.imread('i/super_mario_head.png')
plt.imshow(img)
# cutting the image
eye_brow = img[100:150,150:200,]
plt.imshow(eye_brow,interpolation='nearest')
###Output
_____no_output_____
###Markdown
Kernels
###Code
kernel_edge_detect3 = np.array([[-1.,-1.,-1.],
[-1.,8.,-1.],
[-1.,-1.,-1.]])
kernel_sharpen2 = np.array([[-1.,-1.,-1.],
[-1.,9.,-1.],
[-1.,-1.,-1.]])
kernel_blur = np.array([[1.,1.,1.],
[1.,1.,1.],
[1.,1.,1.]])
img_edge_detect = convolve(eye_brow[:,:,0], kernel_edge_detect3)
img_sharpen = convolve(eye_brow[:,:,0], kernel_sharpen2)
img_blur = convolve(eye_brow[:,:,0], kernel_blur)
# creates sub plots of 15x15
f, (plt1, plt2, plt3, plt4) = plt.subplots(1, 4,figsize=(15,15))
plt1.set_title('Original');plt1.imshow(eye_brow[:,:,0],cmap='gray', interpolation='nearest');
# showing each channel img[x,y,color_plane]
plt2.axis('off');plt2.set_title('Edge detect');plt2.imshow(img_edge_detect,cmap='gray', interpolation='nearest');
plt3.axis('off');plt3.set_title('Sharpen');plt3.imshow(img_sharpen,cmap='gray', interpolation='nearest');
plt4.axis('off');plt4.set_title('Blur');plt4.imshow(img_blur,cmap='gray', interpolation='nearest');
###Output
_____no_output_____ |
kaggle_exercise/autralian_weather_prediction.ipynb | ###Markdown
Australian Weather PredictionThe purpose of this project is to predict whether it will rain tomorrow or not, given independent variables (such as location, max/min temp, windspeed, evaporation, etc.) on a specific day 1. Load Data
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

#load dataset
df = pd.read_csv('weatherAUS.csv', parse_dates=['Date'])
df.head()
#replace values under 'RainTomorrow' and 'RainToday'
#No : 0 and Yes : 1
df['RainTomorrow'].replace({'No':0, 'Yes':1}, inplace=True)
df['RainToday'].replace({'No':0, 'Yes':1}, inplace=True)
df.head(3)
###Output
_____no_output_____
###Markdown
2. Exploratory Data Analysis. Process of elimination and imputation using: 1. Missing values, 2. Information Value and WOE. 2.1 Examine Missing Values
###Code
#total number of data points
print("total number of data points : " + str(len(df)))
print("no data : " + str(len(df[df['RainTomorrow'] == 0])))
print("yes data : " + str(len(df[df['RainTomorrow'] == 1])))
print("percentage of 'RainTomorrow' from the whole dataset : " + str(31877 / (110316 + 31877)))
#distribution of months
sns.distplot(df['Date'].dt.month, kde=False)
###Output
_____no_output_____
###Markdown
* One assumption in dealing with this dataset is that **year** does not really affect whether it will rain tomorrow or not. However, we will take **month** into consideration. * Month seems to be distributed evenly throughout the whole dataset
###Code
#look for missing values in each column
df.isnull().sum()
###Output
_____no_output_____
###Markdown
Average **WindSpeed3pm** and **WindSpeed9am** to make filling missing values later on easier
###Code
df['WindSpeedAvg'] = (df['WindSpeed3pm'] + df['WindSpeed9am']) / 2
###Output
_____no_output_____
###Markdown
**Missing Value Observation** We have a total of 142,193 observations (data points): 1. **Sunshine** : 67,816 out of 142,193 missing (nearly 47%) 2. **Evaporation** : 60,843 out of 142,193 missing (nearly 43%) 3. **Cloud3pm** : 57,094 out of 142,193 missing (nearly 40%) 4. **Cloud9am** : 53,657 out of 142,193 missing (nearly 38%) 5. **Pressure9am** : 14,014 out of 142,193 missing (nearly 10%) 6. **Pressure3pm** : 13,981 out of 142,193 missing (nearly 10%) 7. **WindDir9am** : 10,013 out of 142,193 missing (nearly 7%) 8. **WindDir3pm** : 3,778 out of 142,193 missing (nearly 2%) 2.2 Information Value and WOE Calculation
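For reference (an added note), the `calculate_iv` helper in the next cell follows the standard Weight of Evidence / Information Value definitions, computed per bin $i$ of each feature, where "non-events" are rows with `RainTomorrow = 0` and "events" are rows with `RainTomorrow = 1`:

$$\mathrm{WOE}_i = \ln\!\left(\frac{\text{non-event share}_i}{\text{event share}_i}\right), \qquad \mathrm{IV} = \sum_i \left(\text{non-event share}_i - \text{event share}_i\right)\,\mathrm{WOE}_i$$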
###Code
def calculate_iv(df, column, target, max_bin):
bin_df = df[df[column].notnull()][[column, target]].copy()
#if categorical
if bin_df[column].dtype == 'object':
bin_df = bin_df.groupby(column)[target].agg(['count', 'sum'])
else:
bin_df['bin'] = pd.qcut(bin_df[column].rank(method='first'), max_bin)
bin_df = bin_df.groupby('bin')[target].agg(['count', 'sum'])
bin_df.columns = ['total', 'abnormal']
bin_df['normal'] = bin_df['total'] - bin_df['abnormal']
bin_df['normal_dist'] = bin_df['normal'] / sum(bin_df['normal'])
bin_df['abnormal_dist'] = bin_df['abnormal'] / sum(bin_df['abnormal'])
bin_df['woe'] = np.log(bin_df['normal_dist'] / bin_df['abnormal_dist'])
bin_df['iv'] = bin_df['woe'] * (bin_df['normal_dist'] - bin_df['abnormal_dist'])
bin_df.replace([np.inf, -np.inf], 0, inplace=True)
bin_df = bin_df[bin_df['total'] > 0]
iv_val = sum(filter(lambda x: x != float('inf'), bin_df['iv']))
return bin_df, column, iv_val
#calculate IV for the given features
columns = ['Sunshine', 'Evaporation', 'Cloud3pm', 'Cloud9am', 'Pressure9am', 'Pressure3pm', 'WindDir9am'
, 'WindDir3pm', 'Location', 'WindGustDir', 'MaxTemp', 'MinTemp', 'RISK_MM', 'RainToday',
'WindSpeed9am', 'WindSpeed3pm', 'WindSpeedAvg', 'WindGustSpeed']
iv_record = {}
for col in columns:
bin_df, column, iv_val = calculate_iv(df, col, 'RainTomorrow', 10)
iv_record[col] = iv_val
import operator
candidates = sorted(iv_record.items(), key=operator.itemgetter(1), reverse=True)
display(candidates)
###Output
_____no_output_____
###Markdown
* **Sunshine**, **Cloud3pm**, **Cloud9am** have very high Information Value but also a large number of missing values, so we will get rid of these columns * **WindDir9am**, **WindDir3pm**, **WindGustDir** have low Information Value and many missing values, so we will get rid of these columns * **Evaporation** also has a high number of missing values, but its Information Value seems significant. Let's look into **Evaporation** later
###Code
#delete irrelevent columns
del df['Sunshine']
del df['Cloud3pm']
del df['Cloud9am']
del df['WindDir9am']
del df['WindDir3pm']
del df['WindGustDir']
del df['RISK_MM']
del df['WindSpeed3pm']
del df['WindSpeed9am']
del df['Evaporation']
###Output
_____no_output_____
###Markdown
* Whether it rained today or not has significant WOE, so we will discard any data points where **RainToday** is null
###Code
# drop null RainToday rows
df = df[~df['RainToday'].isnull()]
###Output
_____no_output_____
###Markdown
2.3 Split Training/Test Dataset
###Code
#parse year and month
df['year'] = df['Date'].dt.year
df['month'] = df['Date'].dt.month
#delete Date and Year column
del df['Date']
del df['year']
#current columns
feature_col = list(df.columns)
feature_col.remove('Location')
feature_col.remove('month')
feature_col.remove('RainTomorrow')
feature_col
#drop rows where there are less than 10 non-null values
#with such large missing data in a given column, it will be very hard to impute the missing values
df.dropna(axis=0, subset=feature_col, thresh=10, inplace=True)
#check shape
df.shape
###Output
_____no_output_____
###Markdown
* Lost about **2,787** data points, which we can tolerate given the large size of the dataset
###Code
df.isnull().sum()
#check the number of RainTomorrow is 1
len(df[df['RainTomorrow'] == 1])
#set null and alt target values equal
null_train = df[df['RainTomorrow'] == 0].sample(29914)
alt_train = df[df['RainTomorrow'] == 1]
#check shape of null and alt datasets
print(null_train.shape)
print(alt_train.shape)
#concatenate
df = pd.concat([null_train, alt_train], axis=0)
#check new dataset shape
df.shape
#change month datatype
df['month'] = df['month'].astype('object')
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df.loc[:, df.columns != 'RainTomorrow'],
df['RainTomorrow'], stratify=df['RainTomorrow'], test_size=0.3)
#check shapes
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
#check positive to negative ratio in training dataset
sum(y_train) / len(y_train)
#check positive to negative ratio in test dataset
sum(y_test) / len(y_test)
X_train.isnull().sum()
###Output
_____no_output_____
###Markdown
2.4 Data Manipulation
###Code
X_train.isnull().sum()
df_eda = X_train.dropna(how='any')
df_eda.shape
#correlation matrix
plt.figure(figsize = (25,10))
eda_columns = ['MaxTemp', 'MinTemp', 'RainToday', 'Humidity9am', 'Humidity3pm',
'Pressure9am', 'Pressure3pm', 'Temp3pm', 'Temp9am', 'WindGustSpeed', 'WindSpeedAvg']
corr_mat = np.corrcoef(df_eda[eda_columns].values.T)
sns.set(font_scale=1)
full_mat = sns.heatmap(corr_mat, cbar=True, annot=True, square=True,
fmt='.2f', annot_kws={'size': 15},
yticklabels=eda_columns, xticklabels=eda_columns)
plt.yticks(rotation=0)
plt.xticks(rotation=45)
def fill_missing1(training_df, filling_df, group_medium1, group_medium2, group_medium3, missing_feature):
"""
Arguments:
1. training_df : the dataset which is used to calculate missing values
2. filling_df : the dataset which will be imputed using the values calculated through this function
3. group_medium1 : the first medium through which we want to group the values
4. group_medium2 : the second medium through which we want to group the values
5. group_medium3 : the third medium through which we want to group the values
    6. missing_feature : the feature that we want to eventually calculate missing values on
    ex) fill_missing1(X_train, X_train, 'RainToday', 'Location', 'month', 'Humidity9am') will do the following:
    group by RainToday, Location, and month, in that order, calculate the median Humidity9am from the X_train dataset and
    fill the missing Humidity9am values in the X_train dataset
"""
feature_dict = training_df.groupby([group_medium1, group_medium2, group_medium3])[missing_feature].agg(['median']).to_dict()['median']
missing_index = filling_df.index[filling_df[missing_feature].isnull()].tolist()
for row in missing_index:
filling_df.loc[row, missing_feature] = feature_dict[(filling_df.loc[row, group_medium1],
filling_df.loc[row, group_medium2],
filling_df.loc[row, group_medium3])]
fill_missing1(X_train, X_train, 'RainToday', 'Location', 'month', 'Humidity3pm')
fill_missing1(X_train, X_train, 'RainToday', 'Location', 'month', 'Humidity9am')
fill_missing1(X_train, X_train, 'RainToday', 'Location', 'month', 'MaxTemp')
fill_missing1(X_train, X_train, 'RainToday', 'Location', 'month', 'MinTemp')
fill_missing1(X_train, X_train, 'RainToday', 'Location', 'month', 'Temp9am')
fill_missing1(X_train, X_train, 'RainToday', 'Location', 'month', 'Temp3pm')
def fill_missing2(training_df, filling_df, group_medium1, group_medium2, missing_feature):
"""
Arguments:
1. training_df : the dataset which is used to calculate missing values
2. filling_df : the dataset which will be imputed using the values calculated through this function
3. group_medium1 : the first medium through which we want to group the values
4. group_medium2 : the second medium through which we want to group the values
    5. missing_feature : the feature that we want to eventually calculate missing values on
    ex) fill_missing2(X_train, X_train, 'RainToday', 'month', 'Pressure3pm') will do the following:
    group by RainToday and month, in that order, calculate the median Pressure3pm from the X_train dataset and
    fill the missing Pressure3pm values in the X_train dataset
"""
feature_dict = training_df.groupby([group_medium1, group_medium2])[missing_feature].agg(['median']).to_dict()['median']
missing_index = filling_df.index[filling_df[missing_feature].isnull()].tolist()
for row in missing_index:
filling_df.loc[row, missing_feature] = feature_dict[(filling_df.loc[row, group_medium1],
filling_df.loc[row, group_medium2])]
fill_missing2(X_train, X_train, 'RainToday', 'month', 'Pressure3pm')
fill_missing2(X_train, X_train, 'RainToday', 'month', 'Pressure9am')
fill_missing2(X_train, X_train, 'RainToday', 'month', 'WindSpeedAvg')
fill_missing2(X_train, X_train, 'RainToday', 'month', 'Temp3pm')
fill_missing2(X_train, X_train, 'RainToday', 'month', 'Humidity3pm')
X_train.shape
X_train.isnull().sum()
def wind_gust_filling(training_df, filling_df):
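    # Bin the training data's WindSpeedAvg into 10 intervals, then fill each missing WindGustSpeed
    # in filling_df with the median WindGustSpeed of the bin containing that row's WindSpeedAvg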
training_df['windgustbin'] = pd.cut(training_df['WindSpeedAvg'], 10)
bin_df = training_df.groupby(['windgustbin'])['WindGustSpeed'].agg(['median'])
missing_index = filling_df.index[filling_df['WindGustSpeed'].isnull()].tolist()
for index in missing_index:
for i in range(len(bin_df.index)):
if filling_df.loc[index, 'WindSpeedAvg'] in bin_df.index[i]:
filling_df.loc[index, 'WindGustSpeed'] = bin_df.iloc[i][0]
wind_gust_filling(X_train, X_train)
del X_train['windgustbin']
X_train.isnull().sum()
###Output
_____no_output_____
###Markdown
2.5 Add Dummy Variables
###Code
X_train.dtypes
#divide categorical and numerical features for scaling and making dummy varibles
num_features = []
cate_features = []
for feat in X_train.columns:
if X_train[feat].dtype == 'object':
cate_features.append(feat)
elif feat != 'Target':
num_features.append(feat)
cate_features
train_cate_features = pd.get_dummies(X_train[cate_features])
train_cate_features.head()
###Output
_____no_output_____
###Markdown
2.6 Feature Scaling
###Code
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
scaler.fit(X_train[num_features])
train_scaled_num_features = scaler.transform(X_train[num_features])
#combine categorical and numberical features
train_x = np.c_[train_scaled_num_features, train_cate_features]
print(train_x.shape)
print(y_train.shape)
###Output
(41879, 72)
(41879,)
###Markdown
3. Modeling
###Code
from sklearn.linear_model import LogisticRegression
logistic = LogisticRegression(random_state=42)
from sklearn.model_selection import cross_val_score
cross_val_score(logistic, train_x, y_train, cv=3, scoring='accuracy')
logistic.fit(train_x, y_train)
###Output
_____no_output_____
###Markdown
4. Test Dataset
###Code
X_test.isnull().sum()
fill_missing1(X_train, X_test, 'RainToday', 'Location', 'month', 'Humidity3pm')
fill_missing1(X_train, X_test, 'RainToday', 'Location', 'month', 'Humidity9am')
fill_missing1(X_train, X_test, 'RainToday', 'Location', 'month', 'MaxTemp')
fill_missing1(X_train, X_test, 'RainToday', 'Location', 'month', 'MinTemp')
fill_missing1(X_train, X_test, 'RainToday', 'Location', 'month', 'Temp9am')
fill_missing1(X_train, X_test, 'RainToday', 'Location', 'month', 'Temp3pm')
fill_missing2(X_train, X_test, 'RainToday', 'month', 'Pressure3pm')
fill_missing2(X_train, X_test, 'RainToday', 'month', 'Pressure9am')
fill_missing2(X_train, X_test, 'RainToday', 'month', 'WindSpeedAvg')
wind_gust_filling(X_train, X_test)
X_test.isnull().sum()
X_test.dtypes
#divide categorical and numerical features for scaling and making dummy varibles
num_features_test = []
cate_features_test = []
for feat in X_test.columns:
if X_test[feat].dtype == 'object':
cate_features_test.append(feat)
elif feat != 'Target':
num_features_test.append(feat)
cate_features_test
test_cate_features = pd.get_dummies(X_test[cate_features_test])
test_cate_features.head()
test_scaled_num_features = scaler.transform(X_test[num_features_test])
test_x = np.c_[test_scaled_num_features, test_cate_features]
print(test_x.shape)
print(y_test.shape)
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
accuracy = accuracy_score(logistic.predict(test_x), y_test)
print('Accuracy :', accuracy)
print(classification_report(y_test, logistic.predict(test_x)))
pd.crosstab(logistic.predict(test_x), y_test,
rownames=['Predict'], colnames=['Actual'], margins=True)
###Output
_____no_output_____ |
examples/reference/elements/matplotlib/VectorField.ipynb | ###Markdown
Title VectorField Element Dependencies Matplotlib Backends Matplotlib Bokeh
###Code
import numpy as np
import holoviews as hv
hv.extension('matplotlib')
###Output
_____no_output_____
###Markdown
A ``VectorField`` plot displays velocity vectors as arrows at ``x`` and ``y`` positions with angle and magnitude components (or alternatively ``U``, ``V`` components). The element accepts the usual columnar format passing the ``(x, y, angles, magnitudes)`` components as a tuple, but also handles 2D arrays when supplied as a list instead:
###Code
%%opts VectorField [size_index=3]
x,y = np.mgrid[-10:10,-10:10] * 0.25
sine_rings = np.sin(x**2+y**2)*np.pi+np.pi
exp_falloff = 1/np.exp((x**2+y**2)/8)
vector_data = [x,y,sine_rings, exp_falloff]
hv.VectorField(vector_data)
###Output
_____no_output_____
###Markdown
As you can see above, here the *x* and *y* positions are chosen to make a regular grid and the angles supplied to a ``VectorField`` are expressed in radians. The arrow angles follow a sinsoidal ring pattern, and the arrow lengths fall off exponentially from the center, so this plot has four dimensions of data (direction and length for each *x,y* position).Using the ``%%opts`` cell-magic, we can also use color as a redundant indicator to the direction or magnitude:
###Code
%%opts VectorField [size_index=3] VectorField.A [color_index=2] VectorField.M [color_index=3]
hv.VectorField(vector_data, group='A') + hv.VectorField(vector_data, group='M')
###Output
_____no_output_____
###Markdown
By default the arrows are rescaled to the minimum distance between individual arrows, to disable this rescaling set ``rescale=False`` and adjust the ``scale`` manually (smaller values result in larger arrows). This allows fixed scaling even when plotting arrows in an animation. Here we will vary the arrow angle with a Phase dimension and also add this angle to the magnitude data, showing the arrow angles and magnitudes varying. Due to the fixed scaling we can make out the differences across frames:
###Code
%%opts VectorField [color_index=2 size_index=3 rescale_lengths=False] (scale=4)
hv.HoloMap({phase: hv.VectorField([x, y,(vector_data[2]+phase)%np.pi*2, vector_data[3]+np.abs(phase)])
for phase in np.linspace(-np.pi,np.pi,5)}, kdims='Phase')
###Output
_____no_output_____
###Markdown
Vectors are also often expressed through U and V components; we can easily transform these to a magnitude and angle:
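Written out (added here for reference), the conversion performed in the next cell is

$$\text{mag} = \sqrt{U^2 + V^2}, \qquad \theta = \frac{\pi}{2} - \operatorname{arctan2}\!\left(\frac{U}{\text{mag}}, \frac{V}{\text{mag}}\right)$$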
###Code
xs, ys = np.arange(0, 2 * np.pi, .2), np.arange(0, 2 * np.pi, .2)
X, Y = np.meshgrid(xs, ys)
U = np.cos(X)
V = np.sin(Y)
# Convert U, V to magnitude and angle
mag = np.sqrt(U**2 + V**2)
angle = (np.pi/2.) - np.arctan2(U/mag, V/mag)
###Output
_____no_output_____
###Markdown
``VectorField`` also allows defining the ``pivot`` point of the vectors. We can for instance define ``pivot='tip'`` to pivot around the tip of the arrow. To make this clearer we will mark the pivot points:
###Code
%%opts VectorField [color_index=3 size_index=3 pivot='tip'] (cmap='fire' scale=0.8) Points (color='black' s=1)
hv.VectorField((xs, ys, angle, mag)) * hv.Points((X.flat, Y.flat))
###Output
_____no_output_____
###Markdown
Title VectorField Element Dependencies Matplotlib Backends Matplotlib Bokeh
###Code
import numpy as np
import holoviews as hv
hv.extension('matplotlib')
###Output
_____no_output_____
###Markdown
A ``VectorField`` plot displays velocity vectors as arrows at ``x`` and ``y`` positions with angle and magnitude components (or alternatively ``U``, ``V`` components). The element accepts the usual columnar format passing the ``(x, y, angles, magnitudes)`` components as a tuple:
###Code
%%opts VectorField [magnitude='Magnitude']
x,y = np.mgrid[-10:10,-10:10] * 0.25
sine_rings = np.sin(x**2+y**2)*np.pi+np.pi
exp_falloff = 1/np.exp((x**2+y**2)/8)
vector_data = (x, y, sine_rings, exp_falloff)
hv.VectorField(vector_data)
###Output
_____no_output_____
###Markdown
As you can see above, here the *x* and *y* positions are chosen to make a regular grid and the angles supplied to a ``VectorField`` are expressed in radians. The arrow angles follow a sinusoidal ring pattern, and the arrow lengths fall off exponentially from the center, so this plot has four dimensions of data (direction and length for each *x,y* position). Using the ``%%opts`` cell-magic, we can also use color as a redundant indicator to the direction or magnitude:
###Code
%%opts VectorField [magnitude='Magnitude'] VectorField.A (color='Angle') VectorField.M (color='Magnitude')
hv.VectorField(vector_data, group='A') + hv.VectorField(vector_data, group='M')
###Output
_____no_output_____
###Markdown
By default the magnitudes are rescaled to the minimum distance between individual arrows; to disable this rescaling, set ``rescale_lengths=False`` and use a dimension value transform to scale the size of the arrows, e.g. below we normalize the `'Magnitude'` dimension values and then scale them by a factor of `0.2`. This allows fixed scaling even when plotting arrows in an animation. Here we will vary the arrow angle with a Phase dimension and also add this angle to the magnitude data, showing the arrow angles and magnitudes varying. Due to the fixed scaling we can make out the differences across frames:
###Code
%%opts VectorField [magnitude=dim('Magnitude').norm()*0.2 rescale_lengths=False] (color='Angle')
hv.HoloMap({phase: hv.VectorField([x, y,(vector_data[2]+phase)%np.pi*2, vector_data[3]+np.abs(phase)])
for phase in np.linspace(-np.pi,np.pi,5)}, kdims='Phase')
###Output
_____no_output_____
###Markdown
Vectors are also often expressed through U and V components; we can easily transform these to a magnitude and angle:
###Code
xs, ys = np.arange(0, 2 * np.pi, .2), np.arange(0, 2 * np.pi, .2)
X, Y = np.meshgrid(xs, ys)
U = np.cos(X)
V = np.sin(Y)
# Convert U, V to magnitude and angle
mag = np.sqrt(U**2 + V**2)
angle = (np.pi/2.) - np.arctan2(U/mag, V/mag)
###Output
_____no_output_____
###Markdown
``VectorField`` also allows defining the ``pivot`` point of the vectors. We can for instance define ``pivot='tip'`` to pivot around the tip of the arrow. To make this clearer we will mark the pivot points:
###Code
%%opts VectorField [magnitude='Magnitude' aspect=2 fig_size=300] (pivot='tip' cmap='fire' scale=0.8 color='Magnitude') Points (color='black' s=1)
hv.VectorField((xs, ys, angle, mag)) * hv.Points((X.flat, Y.flat))
###Output
_____no_output_____
###Markdown
Title VectorField Element Dependencies Matplotlib Backends Matplotlib Bokeh
###Code
import numpy as np
import holoviews as hv
hv.extension('matplotlib')
###Output
_____no_output_____
###Markdown
A ``VectorField`` plot displays velocity vectors as arrows at ``x`` and ``y`` positions with angle and magnitude components (or alternatively ``U``, ``V`` components). The element accepts the usual columnar format passing the ``(x, y, angles, magnitudes)`` components as a tuple, but also handles 2D arrays when supplied as a list instead:
###Code
%%opts VectorField [size_index=3]
x,y = np.mgrid[-10:10,-10:10] * 0.25
sine_rings = np.sin(x**2+y**2)*np.pi+np.pi
exp_falloff = 1/np.exp((x**2+y**2)/8)
vector_data = [x,y,sine_rings, exp_falloff]
hv.VectorField(vector_data)
###Output
_____no_output_____
###Markdown
As you can see above, here the *x* and *y* positions are chosen to make a regular grid. The arrow angles follow a sinusoidal ring pattern, and the arrow lengths fall off exponentially from the center, so this plot has four dimensions of data (direction and length for each *x,y* position). Using the ``%%opts`` cell-magic, we can also use color as a redundant indicator to the direction or magnitude:
###Code
%%opts VectorField [size_index=3] VectorField.A [color_index=2] VectorField.M [color_index=3]
hv.VectorField(vector_data, group='A') + hv.VectorField(vector_data, group='M')
###Output
_____no_output_____
###Markdown
By default the arrows are rescaled to the minimum distance between individual arrows, to disable this rescaling set ``rescale=False`` and adjust the ``scale`` manually (smaller values result in larger arrows). This allows fixed scaling even when plotting arrows in an animation. Here we will vary the arrow angle with a Phase dimension and also add this angle to the magnitude data, showing the arrow angles and magnitudes varying. Due to the fixed scaling we can make out the differences across frames:
###Code
%%opts VectorField [color_index=2 size_index=3 rescale_lengths=False] (scale=4)
hv.HoloMap({phase: hv.VectorField([x, y,(vector_data[2]+phase)%np.pi*2, vector_data[3]+np.abs(phase)])
for phase in np.linspace(-np.pi,np.pi,5)}, kdims='Phase')
###Output
_____no_output_____
###Markdown
Vectors are also often expressed through U and V components; we can easily transform these to a magnitude and angle:
###Code
xs, ys = np.arange(0, 2 * np.pi, .2), np.arange(0, 2 * np.pi, .2)
X, Y = np.meshgrid(xs, ys)
U = np.cos(X)
V = np.sin(Y)
# Convert U, V to magnitude and angle
mag = np.sqrt(U**2 + V**2)
angle = (np.pi/2.) - np.arctan2(U/mag, V/mag)
###Output
_____no_output_____
###Markdown
``VectorField`` also allows defining the ``pivot`` point of the vectors. We can for instance define ``pivot='tip'`` to pivot around the tip of the arrow. To make this clearer we will mark the pivot points:
###Code
%%opts VectorField [color_index=3 size_index=3 pivot='tip'] (cmap='fire' scale=0.8) Points (color='black' s=1)
hv.VectorField((xs, ys, angle, mag)) * hv.Points((X.flat, Y.flat))
###Output
_____no_output_____
###Markdown
Title VectorField Element Dependencies Matplotlib Backends Matplotlib Bokeh
###Code
import numpy as np
import holoviews as hv
from holoviews import dim, opts
hv.extension('matplotlib')
###Output
_____no_output_____
###Markdown
A ``VectorField`` plot displays velocity vectors as arrows at ``x`` and ``y`` positions with angle and magnitude components (or alternatively ``U``, ``V`` components). The element accepts the usual columnar format passing the ``(x, y, angles, magnitudes)`` components as a tuple:
###Code
x,y = np.mgrid[-10:10,-10:10] * 0.25
sine_rings = np.sin(x**2+y**2)*np.pi+np.pi
exp_falloff = 1/np.exp((x**2+y**2)/8)
vector_data = (x, y, sine_rings, exp_falloff)
hv.VectorField(vector_data).opts(magnitude='Magnitude')
###Output
_____no_output_____
###Markdown
As you can see above, here the *x* and *y* positions are chosen to make a regular grid and the angles supplied to a ``VectorField`` are expressed in radians. The arrow angles follow a sinusoidal ring pattern, and the arrow lengths fall off exponentially from the center, so this plot has four dimensions of data (direction and length for each *x,y* position). Using the ``.opts`` method, we can also use color as a redundant indicator to the direction or magnitude:
###Code
anglecolor = hv.VectorField(vector_data).opts(
opts.VectorField(title='A', magnitude='Magnitude', color='Angle'))
magcolor = hv.VectorField(vector_data).opts(
opts.VectorField(title='M', magnitude='Magnitude', color='Magnitude'))
anglecolor + magcolor
###Output
_____no_output_____
###Markdown
By default the magnitudes are rescaled to the minimum distance between individual arrows; to disable this rescaling, set ``rescale_lengths=False`` and use a dimension value transform to scale the size of the arrows, e.g. below we normalize the `'Magnitude'` dimension values and then scale them by a factor of `0.2`. This allows fixed scaling even when plotting arrows in an animation. Here we will vary the arrow angle with a Phase dimension and also add this angle to the magnitude data, showing the arrow angles and magnitudes varying. Due to the fixed scaling we can make out the differences across frames:
###Code
hmap = hv.HoloMap({phase: hv.VectorField((x, y,(vector_data[2]+phase)%np.pi*2, vector_data[3]+np.abs(phase)))
for phase in np.linspace(-np.pi,np.pi,5)}, kdims='Phase')
hmap.opts(opts.VectorField(color='Angle', magnitude=dim('Magnitude').norm()*0.2, rescale_lengths=False))
###Output
_____no_output_____
###Markdown
Vectors are also often expressed through U and V components; we can easily transform these to a magnitude and angle:
###Code
xs, ys = np.arange(0, 2 * np.pi, .2), np.arange(0, 2 * np.pi, .2)
X, Y = np.meshgrid(xs, ys)
U = np.cos(X)
V = np.sin(Y)
# Convert U, V to magnitude and angle
mag = np.sqrt(U**2 + V**2)
angle = (np.pi/2.) - np.arctan2(U/mag, V/mag)
###Output
_____no_output_____
###Markdown
``VectorField`` also allows defining the ``pivot`` point of the vectors. We can for instance define ``pivot='tip'`` to pivot around the tip of the arrow. To make this clearer we will mark the pivot points:
###Code
vectors = hv.VectorField((xs, ys, angle, mag))
points = hv.Points((X.flat, Y.flat))
(vectors * points).opts(
opts.Points(color='black', s=1),
opts.VectorField(aspect=2, color='Magnitude', cmap='fire', fig_size=300, magnitude='Magnitude', pivot='tip', scale=0.8))
###Output
_____no_output_____
###Markdown
Title VectorField Element Dependencies Matplotlib Backends Matplotlib Bokeh
###Code
import numpy as np
import holoviews as hv
hv.extension('matplotlib')
###Output
_____no_output_____
###Markdown
A ``VectorField`` plot displays velocity vectors as arrows at ``x`` and ``y`` positions with angle and magnitude components (or alternatively ``U``, ``V`` components). The element accepts the usual columnar format passing the ``(x, y, angles, magnitudes)`` components as a tuple, but also handles 2D arrays when supplied as a list instead:
###Code
%%opts VectorField [size_index=3]
x,y = np.mgrid[-10:10,-10:10] * 0.25
sine_rings = np.sin(x**2+y**2)*np.pi+np.pi
exp_falloff = 1/np.exp((x**2+y**2)/8)
vector_data = [x,y,sine_rings, exp_falloff]
hv.VectorField(vector_data)
###Output
_____no_output_____
###Markdown
As you can see above, here the *x* and *y* positions are chosen to make a regular grid. The arrow angles follow a sinusoidal ring pattern, and the arrow lengths fall off exponentially from the center, so this plot has four dimensions of data (direction and length for each *x,y* position). Using the ``%%opts`` cell-magic, we can also use color as a redundant indicator to the direction or magnitude:
###Code
%%opts VectorField [size_index=3] VectorField.A [color_index=2] VectorField.M [color_index=3]
hv.VectorField(vector_data, group='A') + hv.VectorField(vector_data, group='M')
###Output
_____no_output_____
###Markdown
By default the arrows are rescaled to the minimum distance between individual arrows, to disable this rescaling set ``rescale=False`` and adjust the ``scale`` manually (smaller values result in larger arrows). This allows fixed scaling even when plotting arrows in an animation. Here we will vary the arrow angle with a Phase dimension and also add this angle to the magnitude data, showing the arrow angles and magnitudes varying. Due to the fixed scaling we can make out the differences across frames:
###Code
%%opts VectorField [color_index=2 size_index=3 rescale_lengths=False] (scale=4)
hv.HoloMap({phase: hv.VectorField([x, y,(vector_data[2]+phase)%np.pi*2, vector_data[3]+np.abs(phase)])
for phase in np.linspace(-np.pi,np.pi,5)}, kdims=['Phase'])
###Output
_____no_output_____
###Markdown
Vectors are also often expressed through U and V components; we can easily transform these to a magnitude and angle:
###Code
xs, ys = np.arange(0, 2 * np.pi, .2), np.arange(0, 2 * np.pi, .2)
X, Y = np.meshgrid(xs, ys)
U = np.cos(X)
V = np.sin(Y)
# Convert U, V to magnitude and angle
mag = np.sqrt(U**2 + V**2)
angle = (np.pi/2.) - np.arctan2(U/mag, V/mag)
###Output
_____no_output_____
###Markdown
``VectorField`` also allows defining the ``pivot`` point of the vectors. We can for instance define ``pivot='tip'`` to pivot around the tip of the arrow. To make this clearer we will mark the pivot points:
###Code
%%opts VectorField [color_index=3 size_index=3 pivot='tip'] (cmap='fire' scale=0.8) Points (color='black' s=1)
hv.VectorField((xs, ys, angle, mag)) * hv.Points((X.flat, Y.flat))
###Output
_____no_output_____ |
Example_notebooks/ClassyTree_Classical.ipynb | ###Markdown
Calculate ClassyTree UniFrac distances (classical workflow) Download buckettable from GNPS Replace Job ID below with your GNPS job ID:
###Code
!curl -d "" 'https://gnps.ucsd.edu/ProteoSAFe/DownloadResult?task=b76dd5a123e54a7eb42765499f9163a5&view=download_cluster_buckettable' -o GNPS_Buckettable.zip
!unzip -d GNPS_Buckettable/ GNPS_Buckettable.zip
###Output
Archive: GNPS_Buckettable.zip
inflating: GNPS_Buckettable/METABOLOMICS-SNETS-V2-b76dd5a1-download_cluster_buckettable-main.tsv
inflating: GNPS_Buckettable/params.xml
inflating: GNPS_Buckettable/clusterinfo/99f5516ab61046ec8c8a0c8f035a2880.clusterinfo
inflating: GNPS_Buckettable/clusterinfosummarygroup_attributes_withIDs_withcomponentID/5207ac30d6054805bf09e9a49538be08.clustersummary
inflating: GNPS_Buckettable/networkedges_selfloop/6f89d6e019364eaba19c2f237fc503db..selfloop
inflating: GNPS_Buckettable/result_specnets_DB/562ad714cb0c425c8cd7c00ab4472463.tsv
inflating: GNPS_Buckettable/groupmapping_converted/38eb2ddbac514d7384f1ca901558bf8a.group
###Markdown
load libraries
###Code
import pandas as pd
import os
import MetaboDistTrees
cf = pd.read_csv("../MetaboDistTrees/data/ClassyFireResults_Network_Classical.txt", sep = '\t')
set(cf.CF_kingdom)
cf.head()
lev = ['CF_class','CF_subclass', 'CF_Dparent','cluster.index']
bt_path = 'GNPS_Buckettable/' + [x for x in os.listdir('GNPS_Buckettable/') if 'METABOLOMICS' in x][0]
bt = pd.read_csv(bt_path, sep = '\t')
bt.head()
MetaboDistTrees.get_classytrees(cf,bt,lev,method='average', metric='jaccard', outputdir = 'ClassyTree/')
md = pd.read_csv("../MetaboDistTrees/data/Metadata_DrugMetabolism_Example.txt", sep = "\t")
md.head()
set(bt.columns) - set(md['#SampleID'])
###Output
_____no_output_____
###Markdown
Calculate UniFrac distances using Qiime2. Make sure to run this part within your qiime2 environment.
###Code
import qiime2 as q2
import os
path = '/Users/madeleineernst/anaconda3/envs/qiime2-2018.11/bin/' # define path to qiime2 conda environment
os.environ['PATH'] += ':'+path
! biom convert \
-i ClassyTree/Buckettable_ChemicalClasses.tsv \
-o ClassyTree/Buckettable_ChemicalClasses.biom \
--table-type="OTU table" --to-hdf5
! qiime tools import --type 'FeatureTable[Frequency]' \
--input-path ClassyTree/Buckettable_ChemicalClasses.biom \
--output-path ClassyTree/Buckettable_ChemicalClasses.qza
! qiime tools import --type 'Phylogeny[Rooted]' \
--input-path ClassyTree/NewickTree_cluster.index.txt \
--output-path ClassyTree/NewickTree_ChemicalClasses.qza
###Output
[32mImported ClassyTree/NewickTree_cluster.index.txt as NewickDirectoryFormat to ClassyTree/NewickTree_ChemicalClasses.qza[0m
###Markdown
weighted UniFrac
###Code
! qiime diversity beta-phylogenetic \
--i-table ClassyTree/Buckettable_ChemicalClasses.qza \
--i-phylogeny ClassyTree/NewickTree_ChemicalClasses.qza \
--p-metric weighted_unifrac \
--o-distance-matrix ClassyTree/weighted_unifrac_distance_matrix_ChemicalClasses.qza
! qiime diversity pcoa \
--i-distance-matrix ClassyTree/weighted_unifrac_distance_matrix_ChemicalClasses.qza \
--o-pcoa ClassyTree/weighted_unifrac_distance_matrix_ChemicalClasses_PCoA.qza
! qiime emperor plot \
--i-pcoa ClassyTree/weighted_unifrac_distance_matrix_ChemicalClasses_PCoA.qza \
--m-metadata-file ../MetaboDistTrees/data/Metadata_DrugMetabolism_Example.txt \
--o-visualization ClassyTree/wClassyTreeUniFrac.qzv
q2.Visualization.load('ClassyTree/wClassyTreeUniFrac.qzv')
###Output
_____no_output_____
###Markdown
unweighted UniFrac
###Code
! qiime diversity beta-phylogenetic \
--i-table ClassyTree/Buckettable_ChemicalClasses.qza \
--i-phylogeny ClassyTree/NewickTree_ChemicalClasses.qza \
--p-metric unweighted_unifrac \
--o-distance-matrix ClassyTree/unweighted_unifrac_distance_matrix_ChemicalClasses.qza
! qiime diversity pcoa \
--i-distance-matrix ClassyTree/unweighted_unifrac_distance_matrix_ChemicalClasses.qza \
--o-pcoa ClassyTree/unweighted_unifrac_distance_matrix_ChemicalClasses_PCoA.qza
! qiime emperor plot \
--i-pcoa ClassyTree/unweighted_unifrac_distance_matrix_ChemicalClasses_PCoA.qza \
--m-metadata-file ../MetaboDistTrees/data/Metadata_DrugMetabolism_Example.txt \
--o-visualization ClassyTree/uwClassyTreeUniFrac.qzv
q2.Visualization.load('ClassyTree/uwClassyTreeUniFrac.qzv')
###Output
_____no_output_____ |
python/hexstring-bytearray/py-hexstring-bytearray.ipynb | ###Markdown
Hex string into bytearray Sometimes, in order to display file contents directly, it is convenient to convert the data between different representations. Python's bytearray class provides conversions between these forms, so we can practice them here. For instance, converting a hex string into a bytearray object.
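Both directions are available: `bytearray.fromhex()` parses a hex string (shown in the next cell), and, as a small added sketch, the `hex()` method on bytes-like objects (Python 3.5+) converts back to a hex string:

```python
tmp = bytearray.fromhex('c072ce9fa00ee000')
print(tmp.hex())  # prints: c072ce9fa00ee000
```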
###Code
s = 'c072ce9fa00ee000'
tmp = bytearray.fromhex(s)
print(tmp)
###Output
bytearray(b'\xc0r\xce\x9f\xa0\x0e\xe0\x00')
|
docs/notebooks/ala2_deeptica_multithermal.ipynb | ###Markdown
Deep-TICA - Alanine, multithermal. This is a brief tutorial that describes how to train the Deep-TICA CVs from a biased simulation. We use an OPES-multithermal simulation of alanine dipeptide as an example, using all distances between heavy atoms as input descriptors. This example is taken from: _Bonati, Piccini and Parrinello, Deep learning the slow modes for rare events sampling, PNAS (2021)_. Import
###Code
import torch
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from mlcvs.utils.io import load_dataframe
from mlcvs.tica import DeepTICA_CV
###Output
_____no_output_____
###Markdown
Load data We use the function `load_dataframe` from `mlcvs.utils` that takes a PLUMED COLVAR file and returns a pandas dataframe. First we load the COLVAR file containing the data about the OPES-multicanonical simulation and then the COLVAR_DRIVER which contains the input descriptors calculated along the trajectories (the second step can be avoided if they are already present in the first file).We also discard the initial part of the simulation to put ourselves in the quasi-static regime, which is very quickly reached by OPES (it can be monitored by looking at the DELTAFS file for the multicanonical simulation).---| Parameter | Type | Description || :- | :- | :- || folder | str |files location || from_time | float | discard initial part of the trajectories [time units] || descriptor_filter | str |filter descriptors based on names |
###Code
#------------- PARAMETERS -------------
folder = 'data/ala2_multithermal/'
from_time = 15000
descriptor_filter = 'd'
#--------------------------------------
# Load data
opes = load_dataframe(folder+"COLVAR", start=from_time)
features = load_dataframe(folder+"COLVAR_DRIVER", start=from_time)
colvar = pd.concat([opes, features[[i for i in features.columns if (i not in opes.columns)]]], axis=1)
# Select descriptors
X = colvar.filter(regex=descriptor_filter).values
n_input = X.shape[1]
print(X.shape)
###Output
(35001, 45)
###Markdown
Compute weights for time rescaling Here we extract the time $t$, the energy $E$ (needed for the multicanonical reweight [1]) and the bias $V$ from the COLVAR file. We then calculate the weights as:\begin{cases} w = e^{\beta\ V + (\beta_0-\beta)\ E} & \text{if multicanonical}\\ w = e^{\beta\ V} & \text{otherwise}\end{cases}NB: note that if simulation temperature $\beta_0$ is equal to the reweighting one $\beta$ the multicanonical reweight coincides with the standard umbrella sampling-like case.Once we have computed the weights, we rescale the time at step $k$ by using the instantaneus acceleration:$$ dt'_k = w_k\ dt $$and then compute the cumulative rescaled time:$$ t'_k = \sum_{i=0} ^k dt'_i $$[1] Invernizzi, Piaggi, and Parrinello. "Unified approach to enhanced sampling." _Physical Review X_ 10.4 (2020): 041034.---| Parameter | Type | Description || :- | :- | :- || multicanonical | bool | flag to determine if using a standard reweight (false) or a multicanonical one (true) || temp | float | reweighting temperature || temp0 | float | simulation temperature (only needed if multicanonical == True) |
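As a small added sketch (not part of the original workflow: the pairing of configurations is handled later by `create_time_lagged_dataset`, which works directly with the log-weights to avoid numerical overflow), the cumulative rescaled time defined above could be accumulated explicitly as:

```python
# assumes the arrays t and logweight defined in the next code cell, and frames saved on a uniform time grid
dt = t[1] - t[0]                       # simulation time step between frames
dt_rescaled = np.exp(logweight) * dt   # dt'_k = w_k * dt
t_rescaled = np.cumsum(dt_rescaled)    # t'_k = sum of dt'_i up to step k
```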
###Code
#------------- PARAMETERS -------------
multicanonical = True
temp = 300.
temp0 = 300.
#--------------------------------------
# Calculate inverse temperature
kb=0.008314
beta=1./(kb*temp)
beta0=1./(kb*temp0)
# Extract cvs from df
t = colvar['time'].values # save time
ene = colvar['ene'].values.astype(np.float64) # store energy as long double
bias = colvar.filter(regex='.bias').values.sum(axis=1) # Load *.bias columns and sum them
# Compute log-weights for time reweighting
logweight = beta*bias
if multicanonical:
ene -= np.mean(ene) #first shift energy by its mean value
logweight += (beta0-beta)*ene
###Output
_____no_output_____
###Markdown
Create dataset of time-lagged configurations In order to train the Deep-TICA CVs we will need to compute the time-lagged covariance matrices in the rescaled time $t'$. The standard way is to look for configurations which are distant a lag-time $\tau$ in the time series. However, in the rescaled time the time-series is _exponentially_ unevenly spaced. Hence, a naive search will lead to severe numerical issue. To address this, we use the algorithm proposed in [2]. In a nutshell, this method assume that the observable $O(t'_k)$ have the same value from scaled time $t'_k$ to $t'_{k+1}$. This leads to weighting each pair of configurations based both on the rescaled time around $t'_k$ and $t'_k+\tau$ (see supp. information of [2] for details). All of this is done under the hood by the function `mlcvs.utils.data.find_time_lagged_configurations`. To generate the the training and validation set, we use the function `create_time_lagged_dataset` which searches for the pairs of configurations and the corresponding weight. This is divided in training and validation data, and fed to a `FastTensorDataloader` which allows for an efficient training of the model.[2] Yang and Parrinello. "Refining collective coordinates and improving free energy representation in variational enhanced sampling." _Journal of chemical theory and computation_ 14.6 (2018): 2889-2894.---| Parameter | Type | Description || :- | :- | :- || lag_time | float | lag_time for the calculation of the covariance matrices [in rescaled time] || n_train | int | number of training configurations || n_valid | int | number of validation configurations |
###Code
from mlcvs.utils.data import create_time_lagged_dataset, FastTensorDataLoader
from torch.utils.data import Subset
#------------- PARAMETERS -------------
lag_time = 0.1
n_train = 25000
n_valid = 10000
#--------------------------------------
# create dataset
dataset = create_time_lagged_dataset(X,t=t,lag_time=lag_time,logweights=logweight)
# split train - valid
train_data = Subset(dataset, np.arange(2*n_train))
valid_data = Subset(dataset, np.arange(2*n_train,len(dataset)))
# create dataloaders
train_loader = FastTensorDataLoader(train_data, batch_size=len(train_data))
valid_loader = FastTensorDataLoader(valid_data, batch_size=len(valid_data))
print('Time-lagged pairs:\t',len(dataset))
print('Training data:\t\t',len(train_data))
print('Validation data:\t',len(valid_data))
###Output
Time-lagged pairs: 69998
Training data: 50000
Validation data: 19998
###Markdown
Training Here we setup a few parameters and then train the Deep-TICA CVs. We first instantiate a object of the `DeepTICA_CV` class which defines the NN but also the loss function and a train loop. See class documentation for further details about parameters and methods.---| Parameter | Type | Description || :- | :- | :- || **Neural network** || nodes | list | NN architecture (last value equal to the number of hidden layers which are input of TICA) || activ_type | string | Activation function (relu,tanh,elu,linear) || loss_type | string | Loss function operating on the TICA eigenvalues (sum2,sum,single)|| n_eig | int | Number of eigenvalues to optimize (or if loss_type=single which one to select) || **Optimization** || lrate | float | Learning rate || l2_reg | float | L2 regularization || num_epochs | int | Number of epochs || **Early Stopping** || earlystop | bool | Whether to use early stopping based on validation loss || es_patience | int | Number of epochs before stopping || es_consecutive | bool | Whether es_patience should count consecutive (True) or cumulative patience || **Log** || log_every | int | How often print the train/valid loss during training |
###Code
#------------- PARAMETERS -------------
nodes = [n_input,30,30,2]
activ_type = 'tanh'
loss_type = 'sum2'
n_eig = nodes[-1]
lrate = 1e-3
l2_reg = 0.
num_epochs = 1000
earlystop = True
es_patience = 10
es_consecutive = False
log_every = 10
#--------------------------------------
# DEVICE
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# MODEL
model = DeepTICA_CV(nodes)
model.to(device)
# OPTIMIZER
opt = torch.optim.Adam(model.parameters(), lr=lrate, weight_decay=l2_reg)
# ATTACH OPTIMIZER AND EARLY STOPPING
model.set_optimizer(opt)
model.set_earlystopping(patience=es_patience,min_delta=0.,consecutive=es_consecutive, save_best_model=True, log=False)
# TRAIN
model.fit(train_loader,valid_loader,
standardize_inputs=True,
standardize_outputs=True,
loss_type=loss_type,
n_eig=n_eig,
nepochs=num_epochs,
info=False, log_every=log_every)
## move the model back to cpu for convenience
model.to('cpu')
###Output
_____no_output_____
###Markdown
In the next cells we analyze the training and the resulting Deep-TICA CVs. Plot learning curve
###Code
fig, axs = plt.subplots(1,2,figsize=(12,5),dpi=100)
loss_train = [x.cpu() for x in model.loss_train]
loss_valid = [x.cpu() for x in model.loss_valid]
# Loss function
ax = axs[0]
ax.plot(loss_train,'-',label='Train')
ax.plot(loss_valid,'--',label='Valid')
ax.set_ylabel('Loss Function')
# Eigenvalues vs epoch
ax = axs[1]
with torch.no_grad():
evals_train = np.asarray(torch.cat(model.evals_train).cpu())
for i in range(n_eig):
ax.plot(evals_train[:,i],label='Eig. '+str(i+1))
ax.set_ylabel('Eigenvalues')
# Common setup
for ax in axs:
if model.earlystopping_.early_stop:
ax.axvline(model.earlystopping_.best_epoch,ls='dotted',color='grey',alpha=0.5,label='Early Stopping')
ax.set_xlabel('#Epochs')
ax.legend(ncol=2)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Plot CVs isolines in $\phi-\psi$ space
###Code
# Hexbin plot in physical space
fig,axs = plt.subplots(1,n_eig,figsize=(6*n_eig,5),dpi=100)
x = colvar['phi'].values
y = colvar['psi'].values
# compute cvs
with torch.no_grad():
s = model(torch.Tensor(X)).numpy()
for i,ax in enumerate(axs):
pp = ax.hexbin(x,y,C=s[:,i],gridsize=150,cmap='fessa')
cbar = plt.colorbar(pp,ax=ax)
ax.set_title('Deep-TICA '+str(i+1))
ax.set_xlabel(r'$\phi$ [rad]')
ax.set_ylabel(r'$\psi$ [rad]')
cbar.set_label('Deep-TICA '+str(i+1))
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Plot FES
###Code
from mlcvs.utils.fes import compute_fes
fig,axs = plt.subplots(1,n_eig,figsize=(6*n_eig,5),dpi=100)
for i in range(n_eig):
fes,grid,bounds,error = compute_fes(s[:,i], weights=np.exp(logweight),
blocks=3,
bandwidth=0.02,scale_by='range',
plot=True, plot_max_fes=100, ax = axs[i])
axs[i].set_xlabel('Deep-TICA '+str(i+1))
###Output
_____no_output_____ |
Data_prep_scripts/Spacy.ipynb | ###Markdown
Import Libraries
###Code
import gensim
import nltk
import json
import pandas as pd
import numpy as np
import spacy
from spacy import displacy
from collections import Counter
import en_core_web_md
###Output
_____no_output_____
###Markdown
Import Train Data
###Code
with open ('train-v2.0.json') as f:
train = json.load(f)
text = train['data'][5]['paragraphs'][0]['context']
text
###Output
_____no_output_____
###Markdown
Convert text into Sentences
###Code
sentences = text.split('.')
sentences
del(sentences[-1])
sentences
###Output
_____no_output_____
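As a side note, once the model is loaded in the next cell, spaCy can also segment sentences itself via `doc.sents`, which is generally more robust than splitting on periods (e.g. with abbreviations or decimals). A minimal sketch, assuming the same `text` and the `nlp` model defined below:

```python
# hypothetical alternative to splitting on '.'
doc = nlp(text)
sentences = [sent.text.strip() for sent in doc.sents]
```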
###Markdown
NER
###Code
nlp = en_core_web_md.load()
doc = nlp(text)
print([(X.text, X.label_) for X in doc.ents])
###Output
[('2015', 'DATE'), ('twenty-fourth', 'QUANTITY'), ('James Bond', 'PERSON'), ('Eon Productions', 'ORG'), ('Daniel Craig', 'PERSON'), ('fourth', 'ORDINAL'), ('James Bond', 'PERSON'), ('Christoph Waltz', 'PERSON'), ('Ernst Stavro Blofeld', 'PERSON'), ('Sam Mendes', 'PERSON'), ('second', 'ORDINAL'), ('James Bond', 'PERSON'), ('Skyfall', 'DATE'), ('John Logan', 'PERSON'), ('Neal Purvis', 'PERSON'), ('Robert Wade', 'PERSON'), ('Jez Butterworth', 'PERSON'), ('Metro-Goldwyn-Mayer', 'ORG'), ('Columbia Pictures', 'ORG'), ('around $245 million', 'MONEY'), ('one', 'CARDINAL')]
|
notebooks/Compute Darksky Covmat.ipynb | ###Markdown
I computed realizations of multiple HODs for a few statistics in the darksky boxes. This notebook combines them into a jackknife covmat and also adds an estimate of the shape-noise contribution.
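For reference, the covariance estimated below from the $N$ realizations (separately for each HOD) is the usual sample estimator $$C_{jk} = \frac{1}{N-1}\sum_{i=1}^{N}\left(x^{(i)}_{j}-\bar{x}_{j}\right)\left(x^{(i)}_{k}-\bar{x}_{k}\right),$$ which is then converted to a correlation matrix, averaged over the HODs, and combined with the shape-noise term.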
###Code
from matplotlib import pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set()
import matplotlib.colors as colors
cmap = sns.diverging_palette(240, 10, n=7, as_cmap = True)
import numpy as np
from glob import glob
from os import path
from copy import deepcopy
#shape_noise_covmat = np.load('/u/ki/swmclau2/Git/pearce/bin/covmat/shape_noise.npy')
shape_noise_covmat = np.load('./Hankel_transform/shape_noise.npy')
print np.sqrt(np.diag(shape_noise_covmat))
darksky_h = 0.7036893781978598
output_dir = '/home/users/swmclau2/Git/pearce/bin/covmat/ds14_covmat_v3/'
outputs = sorted(glob(path.join(output_dir, 'wp_ds_cic_darksky_obs_???.npy')))
print len(outputs)
N = len(outputs) # Should be 512, but a few may not have finished. Should make sure that those get restarted, but likely not super important
all_outputs = np.zeros((N, 5, 2*18 + 14)) # num bins and num HODs
2*18+14
for i,output_file in enumerate(outputs):
if i == 0:
continue
output = np.load(output_file)
all_outputs[i] = output#.mean(axis = 0)
all_outputs.shape
# undo a little h error of mine.
# WARNING: I've since corrected this, so it will no longer be necessary with new computations
#all_outputs[:, :, 18:36]*=darksky_h**2
# I'm waiting for my v3 calculation to finish. In the interim I will just divide out the h scaling of wp (ds is already corrected)
#all_outputs[:, :, :18]*=darksky_h**2
rp_bins = np.logspace(-1.0, 1.6, 19)
cic_bins = np.round(np.r_[np.linspace(1, 9, 8), np.round(np.logspace(1,2, 7))] )
cic_bins
rp_points = (rp_bins[1:]+rp_bins[:-1])/2.0
cic_points = (cic_bins[1:]+cic_bins[:-1])/2.0
all_outputs[:,4,:18]
for hod_idx in xrange(5):
color = 'b'
plt.plot(rp_points, (all_outputs[:,hod_idx, :18]).T, alpha = 0.1, color = color)
plt.loglog();
plt.show();
for hod_idx in xrange(4):
plt.plot(rp_points, (all_outputs[:,hod_idx, 18:36]).T, alpha = 0.1, color = 'g')
plt.loglog();
plt.show();
for hod_idx in xrange(4):
plt.plot(cic_points, all_outputs[:, hod_idx, 36:].T, alpha = 0.1, color = 'r')
plt.loglog();
plt.show();
nonzero_idxs =np.all(np.all(all_outputs!=0.0, axis = 2), axis=1)
#mean = all_outputs.mean(axis = 0)
###Output
_____no_output_____
###Markdown
plt.plot(rp_points, (all_outputs[:,hod_idx, :18]/mean[hod_idx,:18]).T, alpha = 0.1, color = 'b')plt.xscale('log');plt.show();plt.plot(rp_points, (all_outputs[:,hod_idx, 18:36]/mean[hod_idx, 18:36]).T, alpha = 0.1, color = 'g')plt.xscale('log');plt.show();plt.plot(cic_points, (all_outputs[:, hod_idx, 36:]/mean[hod_idx, 36:]).T, alpha = 0.1, color = 'r')plt.xscale('log');plt.ylim(1e-1, 2)plt.show(); mean = all_outputs.mean(axis = 0)R =(all_outputs - mean)cov = np.zeros((R.shape[2], R.shape[2]))for i in xrange(R.shape[1]): cov+= R[:,i].T.dot(R[:,i])/(N-1) cov/=(R.shape[1]*(N-1))cov/=(N-1)
###Code
def cov_to_corr(cov):
std = np.sqrt(np.diag(cov))
denom = np.outer(std, std)
return cov/denom
np.zeros(len(cic_bins)-1)
# from my HOD mock on MDPL2
# for the time being, a place holder with the right h scaling
import numpy as np
wp_hod = np.load('/home/users/swmclau2/Git/pearce/bin/mock_measurements/HOD5mock_wp.npy')
ds_hod = np.load('/home/users/swmclau2/Git/pearce/bin/mock_measurements/HOD5mock_ds.npy')
planck_y = np.r_[wp_hod, ds_hod, np.zeros(len(cic_bins)-1)]
# from my HOD mock on MDPL2
# for the time being, a place holder with the right h scaling
wp_hod2 = np.load('/home/users/swmclau2/Git/pearce/bin/mock_measurements/HOD4mock_wp.npy')
ds_hod2 = np.load('/home/users/swmclau2/Git/pearce/bin/mock_measurements/HOD4mock_ds.npy')
#planck_y = np.r_[wp_hod, ds_hod, np.zeros(len(cic_bins)-1)]
planck_y.shape
mean = all_outputs[nonzero_idxs].mean(axis = 0)
R =(all_outputs[nonzero_idxs] - mean)
corr = np.zeros((R.shape[2], R.shape[2]))
yerr_ratio = np.zeros((R.shape[2]))
for i in xrange(R.shape[1]):
c= R[:,i].T.dot(R[:,i])/(N-1)
corr += cov_to_corr(c)
yerr_ratio += np.sqrt(np.diag(c))/mean[i]
corr/= (mean.shape[0])#*(N-1)
yerr_ratio/=(mean.shape[0])
print yerr_ratio
corr
im = plt.imshow(corr, cmap = cmap, vmin=-1)
plt.colorbar(im)
yerr = yerr_ratio*planck_y
cov = corr*np.outer(yerr, yerr)
plt.plot(rp_points, np.sqrt(np.diag(cov))[:18]/wp_hod)
plt.xscale('log')
cov.shape
np.min(cov)
plt.imshow(cov_to_corr(shape_noise_covmat), cmap = 'viridis')
print(cov_to_corr(shape_noise_covmat))[:5, :5]
full_cov = deepcopy(cov)
full_cov[18:36][:, 18:36] = full_cov[18:36][:, 18:36]+ shape_noise_covmat
corr = cov_to_corr(cov)
full_corr = cov_to_corr(full_cov)
fig = plt.figure(figsize = (10, 5))
plt.subplot(121)
im = plt.imshow(corr, cmap = cmap, vmin = -1)
plt.colorbar(im);
plt.subplot(122)
im = plt.imshow(full_corr, cmap = cmap, vmin = -1)
plt.colorbar(im);
plt.show()
fig = plt.figure(figsize = (15, 5))
plt.subplot(131)
im = plt.imshow(corr[18:36][:, 18:36], cmap = cmap, vmin = -1)
plt.colorbar(im);
plt.subplot(132)
im = plt.imshow(cov_to_corr(shape_noise_covmat), cmap = cmap, vmin = -1)
plt.colorbar(im);
plt.subplot(133)
im = plt.imshow(full_corr[18:36][:, 18:36], cmap = cmap, vmin = -1)
plt.colorbar(im);
plt.show()
np.sqrt(np.diag(full_corr)[18:36])
plt.plot(rp_points, np.sqrt(np.diag(cov)[18:36]), label = 'Sim')
plt.plot(rp_points,np.sqrt(np.diag(shape_noise_covmat)), label = 'Shape')
plt.plot(rp_points,np.sqrt(np.diag(full_cov)[18:36]), label = 'Total')
#plt.xscale('log')
plt.loglog();
plt.legend(loc = 'best')
print full_corr[30:30+5][:, 30:30+5]
###Output
[[ 1. 0.26987367 0.15419281 0.12078679 0.09606168]
[ 0.26987367 1. 0.34785814 0.21290648 0.15221014]
[ 0.15419281 0.34785814 1. 0.43961802 0.29074286]
[ 0.12078679 0.21290648 0.43961802 1. 0.56789007]
[ 0.09606168 0.15221014 0.29074286 0.56789007 1. ]]
###Markdown
covmat = np.load('/u/ki/swmclau2/Git/pearce/bin/covmat/wp_ds_full_covmat.npy'), full_cov[:36][:,:36])
###Code
np.save('/home/users/swmclau2/Git/pearce/bin/covmat/wp_ds_full_covmat_h.npy', full_cov[:36][:, :36])
np.save('/home/users/swmclau2/Git/pearce/bin/covmat/wp_full_covmat_h.npy', full_cov[:18][:, :18])
np.save('/home/users/swmclau2/Git/pearce/bin/covmat/ds_full_covmat_h.npy', full_cov[18:36][:, 18:36])
np.save('/home/users/swmclau2/Git/pearce/bin/covmat/wp_ds_sim_covmat_h.npy', cov[:36][:, :36])
plt.plot(rp_points, rp_points*np.sqrt(np.diag(full_cov[:18, :18]) ), label = 'wp')
#plt.plot(rp_points, np.sqrt(np.diag(shape_noise_covmat) ), label = 'Shape')
#plt.plot(rp_points, np.sqrt(np.diag(cov[18:36, 18:36]) ), label = 'ds')
plt.plot(rp_points, rp_points*np.sqrt(np.diag(full_cov[18:36, 18:36]) ), label = 'ds')
plt.loglog()
plt.legend(loc='best')
print np.sqrt(np.diag(full_cov[:36][:,:36]))
print np.sqrt(np.diag(full_cov[18:36, 18:36]) )
#emu covs
emu_cov_fnames = ['/home/users/swmclau2/Git/pearce/bin/optimization/wp_hod_emu_cov_lpw.npy',
'/home/users/swmclau2/Git/pearce/bin/optimization/ds_hod_emu_cov_lpw.npy']
emu_cov = np.zeros_like(full_cov[:36][:, :36])
for i, fname in enumerate(emu_cov_fnames):
emu_cov[i*18:(i+1)*18][:, i*18:(i+1)*18] = np.load(fname)
emu_corr = cov_to_corr(emu_cov)
plt.imshow(emu_corr, cmap = cmap, vmin = -1)
full_emu_cov = full_cov[:36][:, :36] + emu_cov
print np.sqrt(np.diag(full_emu_cov[:36][:,:36]))
full_emu_corr = cov_to_corr(full_emu_cov)
plt.imshow(full_emu_corr, cmap = cmap, vmin = -1)
mean[:,:18].mean(axis=0)
print np.sqrt(np.diag(full_cov[:18, :18]) )/wp_hod#mean[:-1, :18].mean(axis=0)
print np.sqrt(np.diag(emu_cov[:18, :18]) )/wp_hod#mean[:-1, :18].mean(axis=0)
np.sqrt(np.diag(full_emu_cov[:18, :18]) )/mean[:-1, :18].mean(axis=0)
plt.plot(rp_points, np.sqrt(np.diag(full_emu_cov[:18, :18]) ), label = 'Total')
plt.plot(rp_points, np.sqrt(np.diag(cov[:18, :18]) ), ls = '--', label = 'Sim')
plt.plot(rp_points, np.sqrt(np.diag(emu_cov[:18, :18]) ), ls = ':', label = 'Emu')
plt.plot(rp_points, np.sqrt(np.diag(full_emu_cov[18:36, 18:36])) , label = 'ds', color ='r')
plt.plot(rp_points, np.sqrt(np.diag(cov[18:36, 18:36]) ), color = 'r', ls = '--')
plt.plot(rp_points, np.sqrt(np.diag(emu_cov[18:36, 18:36]) ), color = 'r', ls = ':')
plt.plot(rp_points, np.sqrt(np.diag(shape_noise_covmat)), color = 'r', ls = '-.')
#plt.ylabel('Delta Sigma Unc')
plt.xlabel('r [Mpc]')
plt.loglog()
plt.legend(loc='best')
#plt.plot(rp_points, np.sqrt(np.diag(cov[18:36, 18:36]) ), label = 'ds')
plt.plot(rp_points, np.sqrt(np.diag(full_emu_cov[18:36, 18:36])) , label = 'Total', color ='r')
plt.plot(rp_points, np.sqrt(np.diag(cov[18:36, 18:36]) ), color = 'g', label = 'Sim', ls = '--')
plt.plot(rp_points, np.sqrt(np.diag(emu_cov[18:36, 18:36]) ), color = 'r', ls = ':', label = 'Emu')
plt.plot(rp_points, np.sqrt(np.diag(shape_noise_covmat)), color = 'b', label = 'Shape Noise',ls = '-.')
plt.ylabel('Delta Sigma Unc')
plt.xlabel('r [Mpc]')
plt.loglog()
plt.legend(loc='best')
0.7**2
###Output
_____no_output_____ |
solutions/hw1/Homework 1 Final - Group 20.ipynb | ###Markdown
Homework 1 - Group 20 Ari Schencker, Daria Mileeva, Jin Lin, Manuel Veras, Zhefan Liu
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression
###Output
_____no_output_____
###Markdown
Part 2. Mean-Variance Optimization First, we import and print the data for total and excess returns.
###Code
# Total returns
path_to_data_file = 'multi_asset_etf_data.xlsx'
df_total = pd.read_excel(path_to_data_file, sheet_name='total returns')
df_total.set_index('Date',inplace=True)
df_total.head()
# Excess returns
df_excess = pd.read_excel(path_to_data_file, sheet_name='excess returns')
df_excess.set_index('Date',inplace=True)
df_excess.head()
# Descriptions
info = pd.read_excel(path_to_data_file, sheet_name='descriptions')
info.rename(columns={'Unnamed: 0':'Symbol'},inplace=True)
info.set_index('Symbol',inplace=True)
info
###Output
_____no_output_____
###Markdown
1. Summary Statistics (a) Calculate and display the mean and volatility of each asset’s excess return. (Recall we use volatility to refer to standard deviation.)
###Code
# Mean of each asset's excess return
mu = df_excess.mean()*12 # We use the built-in pandas function to compute the mean,
## and annualize it by scaling it by the factor of 12
mu
# Volatility of each asset's excess return
vol = df_excess.std()*np.sqrt(12) # Similarly to the previous code, we use the built-in function and
## scale the result by the factor of square root of 12
vol
###Output
_____no_output_____
###Markdown
(b) Which assets have the best and worst Sharpe ratios? We use the following code to find the Sharpe ratio of each asset, then we sort the resulting values in descending order for easier interpretation of the data. As you can see from the table, the asset with the best Sharpe ratio is SPY (approximately 1.18), and the asset with the worst Sharpe ratio is DBC (approximately 0.06).
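For reference, the annualized Sharpe ratio computed below scales the monthly ratio by $\sqrt{12}$: $$SR = \frac{12\,\bar{r}}{\sqrt{12}\,\sigma} = \sqrt{12}\,\frac{\bar{r}}{\sigma},$$ where $\bar{r}$ and $\sigma$ are the monthly mean and standard deviation of excess returns.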
###Code
Sharpe = df_excess.mean()/df_excess.std() # Calculates Sharpe ratio
Sharpe = Sharpe*np.sqrt(12) # Annualizes sharpe ratio
Sharpe.sort_values(ascending=False) # Sorts sharpe ratios in the descending order
###Output
_____no_output_____
###Markdown
Below we collect all the information derived in this question in one table
###Code
table1 = pd.DataFrame({'Mean':mu, 'Vol':vol, 'Sharpe':Sharpe})
table1
###Output
_____no_output_____
###Markdown
2. Descriptive Analysis (a) Calculate the correlation matrix of the returns. Which pair has the highest correlation? And the lowest?
###Code
# This code calculates the correlation matrix of the returns
corrmat = df_excess.corr() # computes the correlation matrix
corrmat[corrmat==1] = None # ignores self correlation
sns.heatmap(corrmat) # draws the heatmap
corrmat
# Next, we find the pairs with highest and lowest correlations by sorting values
corr_rank = corrmat.unstack().sort_values().dropna()
pair_max = corr_rank.index[-1]
pair_min = corr_rank.index[0]
#Highest correlation pair
print(pair_max, 'have the highest correlation between two assets in our universe of securities.')
#Lowest correlation pair
print(pair_min, 'have the lowest correlation between two assets in our universe of securities.')
###Output
('SPY', 'IEF') have the lowest correlation between two assets in our universe of securities.
###Markdown
(b) How well have TIPS done in our sample? Have they outperformed domestic bonds? Foreign bonds? Consider the following summary statistics for TIPS, domestic and foreign bonds (IEF and BWX respectively):
###Code
table1.loc[["BWX", "TIP", "IEF"]]
###Output
_____no_output_____
###Markdown
Observe that TIP's Sharpe ratio (0.82) outperformed the Sharpe ratios of domestic bonds (0.59) and foreign bonds (0.28). Based on the Sharpe ratio evaluation, TIPS have done well in comparison to bonds. (c) Based on the data, do TIPS seem to expand the investment opportunity set, implying that Harvard should consider them as a separate asset? Based on the heatmap from part a, TIPS have a relatively low correlation with most other assets from our sample. At the same time, TIPS have a high correlation with IEF. Since TIPS have a higher Sharpe ratio than IEF, we recommend including TIPS in the portfolio under the class of domestic bonds. This will ensure portfolio diversification and will benefit the Sharpe ratio. 3. The MV frontier
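The code below implements the standard tangency-portfolio expression (the "lecture 1 slide 50" formula referenced in the comments), $$w^{tan} = \frac{\Sigma^{-1}\mu}{\mathbf{1}'\Sigma^{-1}\mu},$$ where $\Sigma$ is the covariance matrix of excess returns and $\mu$ is the vector of mean excess returns.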
###Code
# First, we build a function to compute weights, mean, volatility and sharpe ratio for tangency portfolio
def TanZeroPort(covm, mu, scale = 1, ilabels = np.array(df_excess.columns)):
scalar1 = np.dot(np.ones(len(covm)),np.linalg.inv(covm)) #compute part of the denominator of the wt formula
scalar1 = np.dot(scalar1, np.array(mu)) #compute the denominator of the wt formula
scalar2 = np.dot(np.linalg.inv(covm), np.array(mu)) #compute the second multiplier
wt = (1/scalar1)*scalar2 #formula from lecture 1 slide 50
pmean = wt @ mu #compute mean
pvol = np.sqrt(wt @ covm @ wt)*np.sqrt(scale) #compute volatility
pSharpe = pmean/pvol #compute sharpe ratio
dic = {'Mean': pmean, 'Volatility': pvol, 'Sharpe': pSharpe}
df = pd.DataFrame(data = wt, index = ilabels, columns = ['WeightsT'])
return df, dic
###Output
_____no_output_____
###Markdown
a) Compute and display the weights of the tangency portfolios: $w^{tan}$.
###Code
#Use the function TanZeroPort from above
covm = df_excess.cov().to_numpy()
muv = np.array(mu)
[wtf, dic] = TanZeroPort(covm, muv, 12)
wtf
###Output
_____no_output_____
###Markdown
(b) Compute the mean, volatility, and Sharpe ratio for the tangency portfolio corresponding to $w^{tan}$.
###Code
# Once again, we use the function from above
print(dic)
###Output
{'Mean': 0.23776716339549783, 'Volatility': 0.10480197847812846, 'Sharpe': 2.2687278126635597}
###Markdown
4. The allocation
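The function below rescales the tangency weights by a scalar chosen so that the portfolio hits the annualized target mean $\tilde{\mu}_p$ (the monthly target of 0.01 times 12): $$w_p = \delta\, w^{tan}, \qquad \delta = \frac{\mathbf{1}'\Sigma^{-1}\mu}{\mu'\Sigma^{-1}\mu}\,\tilde{\mu}_p.$$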
###Code
# Here we build a function to compute weights, mean, volatility and sharpe ratio for MV portfolio with
## given target return
def MVPortER(targetreturn, covm, mu, scale = 1, ilabels = np.array(df_excess.columns)):
scalar1 = np.dot(np.ones(len(covm)),np.linalg.inv(covm)) #compute part of the denominator of the wt formula
scalar1 = np.dot(scalar1, np.array(mu)) #compute the denominator of the wt formula
scalar2 = np.dot(np.linalg.inv(covm), np.array(mu)) #compute the second multiplier
scalar3 = np.dot(np.transpose(mu),np.linalg.inv(covm)) #compute part of the denominator of the delta formula
scalar3 = np.dot(scalar3, np.array(mu)) #compute the denominator of the delta formula
wt = (1/scalar1)*scalar2 #formula from lecture 1 slide 50
delta = (scalar1/scalar3)*targetreturn*12 #here annualize target return
wp = delta*wt
pmean = wp @ mu #compute mean
pvol = np.sqrt(wp @ covm @ wp)*np.sqrt(scale) #compute volatility
pSharpe = pmean/pvol #compute sharpe ratio
dic = {'Mean': pmean, 'Volatility': pvol, 'Sharpe': pSharpe}
df = pd.DataFrame(data = wp, index = ilabels, columns = ['WeightsMVER'])
return df, dic
###Output
_____no_output_____
###Markdown
(a) Compute and display the weights of MV portfolios with target returns of $û = .01$.
###Code
#Use the function MVPortER from above
[wp, dicp] = MVPortER(0.01, covm, muv, 12)
wp
###Output
_____no_output_____
###Markdown
(b) What is the mean, volatility, and Sharpe ratio for wp?
###Code
# Once again, we use the function from above
print(dicp)
###Output
{'Mean': 0.12000000000000002, 'Volatility': 0.05289307925357347, 'Sharpe': 2.2687278126635593}
###Markdown
(c) Discuss the allocation. In which assets is the portfolio most long? And short? Based on the weights in 4(a), the portfolio is most long in IEF with a weight of 0.88, and is most short in QAI with a weight of -1.25. (d) Does this line up with which assets have the strongest Sharpe ratios? Consider the Sharpe ratios of the assets in the portfolio:
###Code
Sharpe.sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
Note that the portfolio is most long in IEF, which has a Sharpe ratio of 0.59, relatively small when compared to the other assets. The portfolio is most short in QAI with a Sharpe ratio of 0.58, which is the fourth lowest Sharpe ratio. Therefore, it seems that the Sharpe ratio alone does not determine the weights assigned to the assets in the portfolio. At the same time, IEF is the least correlated with all of the other assets, and the portfolio is the second most long in SPY, which has the highest Sharpe ratio and is the most negatively correlated with the asset with the highest weight. The portfolio is the second most short in IYR, which is the 3rd most correlated with IEF. 5. Simple portfolios (a) Calculate the performance of the equally-weighted portfolio over the sample. Rescale the entire weighting vector to have target mean $\tilde{\mu} = .01$. Report its mean, volatility, and Sharpe ratio.
###Code
# This code constructs the equally-weighted portfolio
wew = (np.ones(11)*1/11) # creates an array of equal weights
meanep = wew@mu # finds the mean of the equally-weighted portfolio
wtar = wew*(0.01*12)/meanep # rescales the weights vector to get the target mean
meanwpe = wtar @ mu # computes the new mean of the portfolio
volatilitywpe = np.sqrt(wtar @ covm @ wtar)*np.sqrt(12) # computes the volatility of the portfolio
SharpeTpwpe = meanwpe/volatilitywpe # computes Sharpe ratio of the portfolio
print('Mean :', meanwpe)
print('Volatility :', volatilitywpe)
print('Sharpe Ratio:', SharpeTpwpe)
###Output
Mean : 0.12000000000000001
Volatility : 0.14355613150835042
Sharpe Ratio: 0.8359099589767074
###Markdown
(b) Calculate the performance of the "risk-parity" portfolio over the sample. Risk-parity is a term used in a variety of ways, but here we have in mind setting the weight of the portfolio to be proportional to the inverse of its full-sample volatility estimate.$$w_{i} = \frac{1}{\sigma_{i}}$$ This will give the weight vector, $w$, but you will need to rescale it to have a target mean of $\tilde{\mu} = .01$.
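The rescaling used in the code below multiplies the raw inverse-volatility weights by the ratio of the annualized target to the raw portfolio mean, $$\tilde{w} = w\,\frac{12\,\tilde{\mu}}{w'\mu},$$ where $\mu$ is the vector of annualized mean excess returns and $\tilde{\mu} = .01$ is the monthly target.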
###Code
# This code constructs the risk-parity portfolio
wSigInv = np.array((np.reciprocal(df_excess.std()))) # computes the weights of risk-parity portfolio
meanSigInv = wSigInv@mu # finds the mean of the risk-parity portfolio
wrSigInv = wSigInv*(0.01*12)/meanSigInv # rescales the weights vector to get the target mean
meanwrSigInv = wrSigInv @ mu # computes the new mean of the portfolio
volatilitywrSigInv = np.sqrt(wrSigInv @ covm @ wrSigInv)*np.sqrt(12) # computes the volatility of the portfolio
SharpeTpSigInv = meanwrSigInv/volatilitywrSigInv # computes Sharpe ratio of the portfolio
print('MV Portfolio: ',dicp['Mean'], dicp['Volatility'], dicp['Sharpe'])
print('Equally W Portfolio: ',meanwpe, volatilitywpe, SharpeTpwpe)
print('RP Portfolio: ',meanwrSigInv, volatilitywrSigInv, SharpeTpSigInv)
###Output
MV Portfolio: 0.12000000000000002 0.05289307925357347 2.2687278126635593
Equally W Portfolio: 0.12000000000000001 0.14355613150835042 0.8359099589767074
RP Portfolio: 0.12 0.12835250527091432 0.9349252649702111
###Markdown
(c) How do these compare to the MV portfolio from problem 2.4? The portfolios have the same mean returns, but the MV portfolio has the highest Sharpe ratio given that it has the lowest volatility of the three portfolios. The RP portfolio does better than the equally weighted portfolio, with a higher Sharpe ratio and lower volatility. 6. Out-of-Sample Performance Let's divide the sample to both compute a portfolio and then check its performance out of sample. (a) Using only data through the end of 2020, compute $w_{p}$ for $\tilde{\mu}_{p} = .01$, allocating to all 11 assets.
###Code
# First we isolate the data thorugh the end of 2020
insample = df_excess[:'2021-01-01']
insample.tail()
# Now we compute the parameters necessary for MVPortER function
muis = insample.mean()*12
muvis = np.array(muis)
covmis = insample.cov().to_numpy()
# We reuse the previously defined function to compute the weights
[wpis, dicis] = MVPortER(0.01,covmis, muvis, 12)
wpis
###Output
_____no_output_____
###Markdown
(b) Using those weights, calculate the portfolio's Sharpe ratio within that sample, through the end of 2020.
###Code
# We reuse the output of the MVPortER function
print('The portfolio\'s Sharpe ratio is ', dicis['Sharpe'], ' through the end of 2020.')
###Output
The portfolio's Sharpe ratio is 2.289852813940019 through the end of 2020.
###Markdown
(c) Again using those weights, (derived using data through 2020,) calculate the portfolio's Sharpe ratio based on performance in 2021.
###Code
# Define out-of-sample with data through 2021
outsample = df_excess['2021-01-01':]
outsample.head()
covmos = outsample.cov().to_numpy() # get the outsample covariance matrix
muout = np.array(outsample.mean()*12) # get the outsample mean
wpisx = np.transpose(np.array(wpis.to_numpy())) # use wpis from part a
outpmean = wpisx @ muout # compute the new mean
outpvol = np.sqrt(wpisx @ covmos @ np.transpose(wpisx))*np.sqrt(12) # compute the volatility
pSharpe = outpmean/outpvol # compute the Sharpe ratio
print('Mean :', float(outpmean))
print('Volatility :', float(outpvol))
print('Sharpe Ratio:', float(pSharpe))
###Output
Mean : 0.0998786401556127
Volatility : 0.06615860010125083
Sharpe Ratio: 1.509684908730775
|
ch_03/5-handling_data_issues.ipynb | ###Markdown
Handling duplicate, missing, or invalid data About the dataIn this notebook, we will using daily weather data that was taken from the [National Centers for Environmental Information (NCEI) API](https://www.ncdc.noaa.gov/cdo-web/webservices/v2) and altered to introduce many common problems faced when working with data. *Note: The NCEI is part of the National Oceanic and Atmospheric Administration (NOAA) and, as you can see from the URL for the API, this resource was created when the NCEI was called the NCDC. Should the URL for this resource change in the future, you can search for "NCEI weather API" to find the updated one.* Background on the dataData meanings:- `PRCP`: precipitation in millimeters- `SNOW`: snowfall in millimeters- `SNWD`: snow depth in millimeters- `TMAX`: maximum daily temperature in Celsius- `TMIN`: minimum daily temperature in Celsius- `TOBS`: temperature at time of observation in Celsius- `WESF`: water equivalent of snow in millimetersSome important facts to get our bearings:- According to the National Weather Service, the coldest temperature ever recorded in Central Park was -15°F (-26.1°C) on February 9, 1934: [source](https://www.weather.gov/media/okx/Climate/CentralPark/extremes.pdf) - The temperature of the Sun's photosphere is approximately 5,505°C: [source](https://en.wikipedia.org/wiki/Sun) SetupWe need to import `pandas` and read in the dirty data to get started:
###Code
import pandas as pd
df = pd.read_csv('data/dirty_data.csv')
###Output
_____no_output_____
###Markdown
Finding problematic dataA good first step is to look at some rows:
###Code
df.head()
###Output
_____no_output_____
###Markdown
Looking at summary statistics can reveal strange or missing values:
###Code
df.describe()
###Output
/home/stefaniemolin/book_env/lib/python3.7/site-packages/numpy/lib/function_base.py:3968: RuntimeWarning: invalid value encountered in multiply
x2 = take(ap, indices_above, axis=axis) * weights_above
###Markdown
The `info()` method can pinpoint missing values and wrong data types:
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 765 entries, 0 to 764
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 date 765 non-null object
1 station 765 non-null object
2 PRCP 765 non-null float64
3 SNOW 577 non-null float64
4 SNWD 577 non-null float64
5 TMAX 765 non-null float64
6 TMIN 765 non-null float64
7 TOBS 398 non-null float64
8 WESF 11 non-null float64
9 inclement_weather 408 non-null object
dtypes: float64(7), object(3)
memory usage: 59.9+ KB
###Markdown
We can use the `isna()`/`isnull()` method of the series to find nulls:
###Code
contain_nulls = df[
df.SNOW.isna() | df.SNWD.isna() | df.TOBS.isna()
| df.WESF.isna() | df.inclement_weather.isna()
]
contain_nulls.shape[0]
contain_nulls.head(10)
###Output
_____no_output_____
###Markdown
Note that we can't check if we have `NaN` like this:
###Code
df[df.inclement_weather == 'NaN'].shape[0]
###Output
_____no_output_____
###Markdown
This is because it is actually `np.nan`. However, notice this also doesn't work:
###Code
import numpy as np
df[df.inclement_weather == np.nan].shape[0]
###Output
_____no_output_____
###Markdown
We have to use one of the methods discussed earlier for this to work:
###Code
df[df.inclement_weather.isna()].shape[0]
###Output
_____no_output_____
###Markdown
We can find `-inf`/`inf` by comparing to `-np.inf`/`np.inf`:
###Code
df[df.SNWD.isin([-np.inf, np.inf])].shape[0]
###Output
_____no_output_____
###Markdown
Rather than do this for each column, we can write a function that will use a [dictionary comprehension](https://www.python.org/dev/peps/pep-0274/) to check all the columns for us:
###Code
def get_inf_count(df):
"""Find the number of inf/-inf values per column in the dataframe"""
return {
col: df[df[col].isin([np.inf, -np.inf])].shape[0] for col in df.columns
}
get_inf_count(df)
###Output
_____no_output_____
###Markdown
Before we can decide how to handle the infinite values of snow depth, we should look at the summary statistics for snowfall, which forms a big part in determining the snow depth:
###Code
pd.DataFrame({
'np.inf Snow Depth': df[df.SNWD == np.inf].SNOW.describe(),
'-np.inf Snow Depth': df[df.SNWD == -np.inf].SNOW.describe()
}).T
###Output
_____no_output_____
###Markdown
Let's now look into the `date` and `station` columns. We saw the `?` for station earlier, so we know that was the other unique value. However, we see that some dates are present 8 times in the data and we only have 324 days meaning we are also missing days:
###Code
df.describe(include='object')
###Output
_____no_output_____
###Markdown
We can use the `duplicated()` method to find duplicate rows:
###Code
df[df.duplicated()].shape[0]
###Output
_____no_output_____
###Markdown
The default for `keep` is `'first'` meaning it won't show the first row that the duplicated data was seen in; we can pass in `False` to see it though:
###Code
df[df.duplicated(keep=False)].shape[0]
###Output
_____no_output_____
###Markdown
We can also specify the columns to use:
###Code
df[df.duplicated(['date', 'station'])].shape[0]
###Output
_____no_output_____
###Markdown
Let's look at a few duplicates. Just in the few values we see here, we know that the top 4 are actually in the data 6 times because by default we aren't seeing their first occurrence:
###Code
df[df.duplicated()].head()
###Output
_____no_output_____
###Markdown
Mitigating Issues Handling duplicated dataSince we know we have NY weather data and noticed we only had two entries for `station`, we may decide to drop the `station` column because we are only interested in the weather data. However, when dealing with duplicate data, we need to think of the ramifications of removing it. Notice we only have data for the `WESF` column when the station is `?`:
###Code
df[df.WESF.notna()].station.unique()
###Output
_____no_output_____
###Markdown
If we determine it won't impact our analysis, we can use `drop_duplicates()` to remove them:
###Code
# 1. make the date a datetime
df.date = pd.to_datetime(df.date)
# 2. save this information for later
station_qm_wesf = df[df.station == '?'].drop_duplicates('date').set_index('date').WESF
# 3. sort ? to the bottom
df.sort_values('station', ascending=False, inplace=True)
# 4. drop duplicates based on the date column keeping the first occurrence
# which will be the valid station if it has data
df_deduped = df.drop_duplicates('date')
# 5. remove the station column because we are done with it
df_deduped = df_deduped.drop(columns='station').set_index('date').sort_index()
# 6. take valid station's WESF and fall back on station ? if it is null
df_deduped = df_deduped.assign(
WESF=lambda x: x.WESF.combine_first(station_qm_wesf)
)
df_deduped.shape
###Output
_____no_output_____
###Markdown
Here we used the `combine_first()` method to coalesce the values to the first non-null entry; this means that if we had data from both stations, we would first take the value provided by the named station and if (and only if) that station was null would we take the value from the station named `?`. The following table contains some examples of how this would play out:| station GHCND:USC00280907 | station ? | result of `combine_first()` || :---: | :---: | :---: || 1 | 17 | 1 || 1 | `NaN` | 1 || `NaN` | 17 | 17 || `NaN` | `NaN` | `NaN` |Check out the 4th row—we have `WESF` in the correct spot thanks to the index:
###Code
df_deduped.head()
###Output
_____no_output_____
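As a quick aside, here is a minimal, hypothetical illustration of the coalescing behavior of `combine_first()` on two toy series (the values are made up and unrelated to the weather data):

```python
import numpy as np
import pandas as pd

named_station = pd.Series([1, 1, np.nan, np.nan])
question_mark_station = pd.Series([17, np.nan, 17, np.nan])

# take the named station's value, falling back on station ? only where it is null
named_station.combine_first(question_mark_station)
# 0     1.0
# 1     1.0
# 2    17.0
# 3     NaN
```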
###Markdown
Dealing with nullsWe could drop nulls, replace them with some arbitrary value, or impute them using the surrounding data. Each of these options may have ramifications, so we must choose wisely.We can use `dropna()` to drop rows where any column has a null value. The default options leave us hardly any data:
###Code
df_deduped.dropna().shape
###Output
_____no_output_____
###Markdown
If we pass `how='all'`, we can choose to only drop rows where everything is null, but this removes nothing:
###Code
df_deduped.dropna(how='all').shape
###Output
_____no_output_____
###Markdown
We can use just a subset of columns to determine what to drop with the `subset` argument:
###Code
df_deduped.dropna(
how='all', subset=['inclement_weather', 'SNOW', 'SNWD']
).shape
###Output
_____no_output_____
###Markdown
This can also be performed along columns, and we can also require a certain number of null values before we drop the data:
###Code
df_deduped.dropna(axis='columns', thresh=df_deduped.shape[0] * .75).columns
###Output
_____no_output_____
###Markdown
We can choose to fill in the null values instead with `fillna()`:
###Code
df_deduped.loc[:,'WESF'].fillna(0, inplace=True)
df_deduped.head()
###Output
_____no_output_____
###Markdown
At this point we have done everything we can without distorting the data. We know that we are missing dates, but if we reindex, we don't know how to fill in the `NaN` data. With the weather data, we can't assume because it snowed one day that it will snow the next or that the temperature will be the same. For this reason, note that the next few examples are just for illustrative purposes only—just because we can do something doesn't mean we should.That being said, let's try to address some of the remaining issues with the temperature data. We know that when `TMAX` is the temperature of the Sun, it must be because there was no measured value, so let's replace it with `NaN`. We will also do so for `TMIN` which currently uses -40°C for its placeholder when we know that the coldest temperature ever recorded in NYC was -15°F (-26.1°C) on February 9, 1934:
###Code
df_deduped = df_deduped.assign(
TMAX=lambda x: x.TMAX.replace(5505, np.nan),
TMIN=lambda x: x.TMIN.replace(-40, np.nan),
)
###Output
_____no_output_____
###Markdown
We will also make an assumption that the temperature won't change drastically day-to-day. Note that this is actually a big assumption, but it will allow us to understand how `fillna()` works when we provide a strategy through the `method` parameter. The `fillna()` method gives us 2 options for the `method` parameter:- `'ffill'` to forward-fill- `'bfill'` to back-fill*Note that `'nearest'` is missing because we are not reindexing.*Here, we will use `'ffill'` to show how this works:
###Code
df_deduped.assign(
TMAX=lambda x: x.TMAX.fillna(method='ffill'),
TMIN=lambda x: x.TMIN.fillna(method='ffill')
).head()
###Output
_____no_output_____
###Markdown
We can use `np.nan_to_num()` to turn `np.nan` into 0 and `-np.inf`/`np.inf` into large negative or positive finite numbers:
###Code
df_deduped.assign(
SNWD=lambda x: np.nan_to_num(x.SNWD)
).head()
###Output
_____no_output_____
###Markdown
Depending on the data we are working with, we can use the `clip()` method as an alternative to `np.nan_to_num()`. The `clip()` method makes it possible to cap values at a specific minimum and/or maximum threshold. Since `SNWD` can't be negative, let's use `clip()` to enforce a lower bound of zero. To show how the upper bound works, let's use the value of `SNOW`:
###Code
df_deduped.assign(
SNWD=lambda x: x.SNWD.clip(0, x.SNOW)
).head()
###Output
_____no_output_____
###Markdown
We can couple `fillna()` with other types of calculations. Here we replace missing values of `TMAX` with the median of all `TMAX` values, `TMIN` with the median of all `TMIN` values, and `TOBS` to the average of the `TMAX` and `TMIN` values. Since we place `TOBS` last, we have access to the imputed values for `TMIN` and `TMAX` in the calculation:
###Code
df_deduped.assign(
TMAX=lambda x: x.TMAX.fillna(x.TMAX.median()),
TMIN=lambda x: x.TMIN.fillna(x.TMIN.median()),
# average of TMAX and TMIN
TOBS=lambda x: x.TOBS.fillna((x.TMAX + x.TMIN) / 2)
).head()
###Output
_____no_output_____
###Markdown
We can also use `apply()` for running the same calculation across columns. For example, let's fill all missing values with their rolling 7-day median of their values, setting the number of periods required for the calculation to 0 to ensure we don't introduce more extra `NaN` values. Rolling calculations will be covered in chapter 4, so this is a preview:
###Code
df_deduped.apply(
# rolling calculations will be covered in chapter 4, this is a rolling 7-day median
# we set min_periods (# of periods required for calculation) to 0 so we always get a result
lambda x: x.fillna(x.rolling(7, min_periods=0).median())
).head(10)
###Output
_____no_output_____
###Markdown
The last strategy we could try is interpolation with the `interpolate()` method. We specify the `method` parameter with the interpolation strategy to use. There are many options, but we will stick with the default of `'linear'`, which will treat values as evenly spaced and place missing values in the middle of existing ones. We have some missing data, so we will reindex first. Look at January 9th, which we didn't have before—the values for `TMAX`, `TMIN`, and `TOBS` are the average of values the day prior (January 8th) and the day after (January 10th):
###Code
df_deduped\
.reindex(pd.date_range('2018-01-01', '2018-12-31', freq='D'))\
.apply(lambda x: x.interpolate())\
.head(10)
###Output
_____no_output_____
###Markdown
Handling duplicate, missing, or invalid data About the dataIn this notebook, we will using daily weather data that was taken from the [National Centers for Environmental Information (NCEI) API](https://www.ncdc.noaa.gov/cdo-web/webservices/v2) and altered to introduce many common problems faced when working with data. *Note: The NCEI is part of the National Oceanic and Atmospheric Administration (NOAA) and, as you can see from the URL for the API, this resource was created when the NCEI was called the NCDC. Should the URL for this resource change in the future, you can search for the NCEI weather API to find the updated one.* Background on the dataData meanings:- `PRCP`: precipitation in millimeters- `SNOW`: snowfall in millimeters- `SNWD`: snow depth in millimeters- `TMAX`: maximum daily temperature in Celsius- `TMIN`: minimum daily temperature in Celsius- `TOBS`: temperature at time of observation in Celsius- `WESF`: water equivalent of snow in millimetersSome important facts to get our bearings:- According to the National Weather Service, the coldest temperature ever recorded in Central Park was -15°F (-26.1°C) on February 9, 1934: [source](https://www.weather.gov/media/okx/Climate/CentralPark/extremes.pdf) - The temperature of the Sun's photosphere is approximately 5,505°C: [source](https://en.wikipedia.org/wiki/Sun) SetupWe need to import `pandas` and read in the long-format data to get started:
###Code
import pandas as pd
df = pd.read_csv('data/dirty_data.csv')
###Output
_____no_output_____
###Markdown
Finding problematic dataA good first step is to look at some rows:
###Code
df.head()
###Output
_____no_output_____
###Markdown
Looking at summary statistics can reveal strange or missing values:
###Code
df.describe()
###Output
c:\users\molinstefanie\packt\venv\lib\site-packages\numpy\lib\function_base.py:3942: RuntimeWarning: invalid value encountered in multiply
x2 = take(ap, indices_above, axis=axis) * weights_above
###Markdown
The `info()` method can pinpoint missing values and wrong data types:
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 765 entries, 0 to 764
Data columns (total 10 columns):
date 765 non-null object
station 765 non-null object
PRCP 765 non-null float64
SNOW 577 non-null float64
SNWD 577 non-null float64
TMAX 765 non-null float64
TMIN 765 non-null float64
TOBS 398 non-null float64
WESF 11 non-null float64
inclement_weather 408 non-null object
dtypes: float64(7), object(3)
memory usage: 50.8+ KB
###Markdown
We can use `pd.isnull()`/`pd.isna()` or the `isna()`/`isnull()` method of the series to find nulls:
###Code
contain_nulls = df[
df.SNOW.isnull() | df.SNWD.isna()\
| pd.isnull(df.TOBS) | pd.isna(df.WESF)\
| df.inclement_weather.isna()
]
contain_nulls.shape[0]
contain_nulls.head(10)
###Output
_____no_output_____
###Markdown
Note that we can't check if we have `NaN` like this:
###Code
df[df.inclement_weather == 'NaN'].shape[0]
###Output
_____no_output_____
###Markdown
This is because it is actually `np.nan`. However, notice this also doesn't work:
###Code
import numpy as np
df[df.inclement_weather == np.nan].shape[0]
###Output
_____no_output_____
###Markdown
We have to use one of the methods discussed earlier for this to work:
###Code
df[df.inclement_weather.isna()].shape[0]
###Output
_____no_output_____
###Markdown
We can find `-inf`/`inf` by comparing to `-np.inf`/`np.inf`:
###Code
df[df.SNWD.isin([-np.inf, np.inf])].shape[0]
###Output
_____no_output_____
###Markdown
Rather than do this for each column, we can write a function that will use a [dictionary comprehension](https://www.python.org/dev/peps/pep-0274/) to check all the columns for us:
###Code
import numpy as np
def get_inf_count(df):
"""Find the number of inf/-inf values per column in the dataframe"""
return {
col : df[df[col].isin([np.inf, -np.inf])].shape[0] for col in df.columns
}
get_inf_count(df)
###Output
_____no_output_____
###Markdown
Before we can decide how to handle the infinite values of snow depth, we should look at the summary statistics for snowfall, which forms a big part in determining the snow depth:
###Code
pd.DataFrame({
'np.inf Snow Depth': df[df.SNWD == np.inf].SNOW.describe(),
'-np.inf Snow Depth': df[df.SNWD == -np.inf].SNOW.describe()
}).T
###Output
_____no_output_____
###Markdown
Let's now look into the `date` and `station` columns. We saw the `?` for station earlier, so we know that was the other unique value. However, we see that some dates are present 8 times in the data and we only have 324 days meaning we are also missing days:
###Code
df.describe(include='object')
###Output
_____no_output_____
###Markdown
We can use the `duplicated()` method to find duplicate rows:
###Code
df[df.duplicated()].shape[0]
###Output
_____no_output_____
###Markdown
The default for `keep` is `'first'` meaning it won't show the first row that the duplicated data was seen in; we can pass in `False` to see it though:
###Code
df[df.duplicated(keep=False)].shape[0]
###Output
_____no_output_____
###Markdown
We can also specify the columns to use:
###Code
df[df.duplicated(['date', 'station'])].shape[0]
###Output
_____no_output_____
###Markdown
Let's look at a few duplicates. Just in the few values we see here, we know that the top 4 are actually in the data 6 times because by default we aren't seeing their first occurrence:
###Code
df[df.duplicated()].head()
###Output
_____no_output_____
###Markdown
Mitigating Issues Handling duplicated dataSince we know we have NY weather data and noticed we only had two entries for `station`, we may decide to drop the `station` column because we are only interested in the weather data. However, when dealing with duplicate data, we need to think of the ramifications of removing it. Notice we only have data for the `WESF` column when the station is `?`:
###Code
df[df.WESF.notna()].station.unique()
###Output
_____no_output_____
###Markdown
If we determine it won't impact our analysis, we can use `drop_duplicates()` to remove them:
###Code
# save this information for later
station_qm_wesf = df[df.station == '?'].WESF
# sort ? to the bottom
df.sort_values('station', ascending=False, inplace=True)
# drop duplicates based on the date column keeping the first occurrence
# which will be the valid station if it has data
df_deduped = df.drop_duplicates('date').drop(
# remove the station column because we are done with it
# and WESF because we need to replace it later
columns=['station', 'WESF']
).sort_values('date').assign( # sort by the date
# add back the WESF column which will be properly matched because of the index
WESF=station_qm_wesf
)
df_deduped.shape
###Output
_____no_output_____
###Markdown
Check out the 4th row, we have `WESF` in the correct spot thanks to the index:
###Code
df_deduped.head()
###Output
_____no_output_____
###Markdown
Dealing with nullsWe could drop nulls, replace them with some arbitrary value, or impute them using the surrounding data. Each of these options may have ramifications, so we must choose wisely.We can use `dropna()` to drop rows where any column has a null value. The default options leave us without data:
###Code
df_deduped.dropna().shape
###Output
_____no_output_____
###Markdown
If we pass `how='all'`, we can choose to only drop rows where everything is null, but this removes nothing:
###Code
df_deduped.dropna(how='all').shape
###Output
_____no_output_____
###Markdown
We can use just a subset of columns to determine what to drop with the `subset` argument:
###Code
df_deduped.dropna(
how='all', subset=['inclement_weather', 'SNOW', 'SNWD']
).shape
###Output
_____no_output_____
###Markdown
This can also be performed along columns, and we can also require a certain number of null values before we drop the data:
###Code
df_deduped.dropna(axis='columns', thresh=df_deduped.shape[0]*.75).columns
###Output
_____no_output_____
###Markdown
We can choose to fill in the null values instead with `fillna()`:
###Code
df_deduped.loc[:,'WESF'].fillna(0, inplace=True)
df_deduped.head()
###Output
_____no_output_____
###Markdown
At this point we have done everything we can without distorting the data. We know that we are missing dates, but if we reindex, we don't know how to fill in the NaN data. With the weather data, we can't assume because it snowed one day that it will snow the next or that the temperature will be the same. For this reason, note that the next few examples are just for illustrative purposes only—just because we can do something doesn't mean we should.That being said, let's try to address some of the remaining issues with the temperature data. We know that when `TMAX` is the temperature of the Sun, it must be because there was no measured value, so let's replace it with `NaN` and then we will make an assumption that the temperature won't change drastically day-to-day. Note that this is actually a big assumption, but it will allow us to understand how `fillna()` works when we provide a strategy through the `method` parameter. We will also do this for `TMIN` which currently uses -40°C for its placeholder when we know that the coldest temperature ever recorded in NYC was -15°F (-26.1°C) on February 9, 1934.The `fillna()` method gives us 2 options for the `method` parameter:- 'ffill' to forward fill- 'bfill' to back fill*Note that `'nearest'` is missing because we are not reindexing.*Here, we will use `'ffill'` to show how this works:
###Code
df_deduped.assign(
TMAX=lambda x: x.TMAX.replace(5505, np.nan).fillna(method='ffill'),
TMIN=lambda x: x.TMIN.replace(-40, np.nan).fillna(method='ffill')
).head()
###Output
_____no_output_____
###Markdown
We can use `np.nan_to_num()` to turn `np.nan` into 0 and `-np.inf`/`np.inf` into large negative or positive finite numbers:
###Code
df_deduped.assign(
SNWD=lambda x: np.nan_to_num(x.SNWD)
).head()
###Output
_____no_output_____
###Markdown
We can couple `fillna()` with other types of calculations for interpolation. Here we replace missing values of `TMAX` with the median of all `TMAX` values, `TMIN` with the median of all `TMIN` values, and `TOBS` to the average of the `TMAX` and `TMIN` values. Since we place `TOBS` last, we have access to the imputed values for `TMIN` and `TMAX` in the calculation. **WARNING: the text has a typo and fills in TMAX with TMIN's median, the below is correct.**:
###Code
df_deduped.assign(
TMAX=lambda x: x.TMAX.replace(5505, np.nan).fillna(x.TMAX.median()),
TMIN=lambda x: x.TMIN.replace(-40, np.nan).fillna(x.TMIN.median()),
# average of TMAX and TMIN
TOBS=lambda x: x.TOBS.fillna((x.TMAX + x.TMIN) / 2)
).head()
###Output
_____no_output_____
###Markdown
We can also use `apply()` for running the same calculation across columns. For example, let's fill all missing values with their rolling 7 day median of their values, setting the number of periods required for the calculation to 0 to ensure we don't introduce more extra `NaN` values. (Rolling calculations will be covered in [chapter 4](https://github.com/stefmolin/Hands-On-Data-Analysis-with-Pandas/tree/master/ch_04).) We need to set the `date` column as the index so `apply()` doesn't try to take the rolling 7 day median of the date:
###Code
df_deduped.assign(
# make TMAX and TMIN NaN where appropriate
TMAX=lambda x: x.TMAX.replace(5505, np.nan),
TMIN=lambda x: x.TMIN.replace(-40, np.nan)
).set_index('date').apply(
# rolling calculations will be covered in chapter 4, this is a rolling 7 day median
# we set min_periods (# of periods required for calculation) to 0 so we always get a result
lambda x: x.fillna(x.rolling(7, min_periods=0).median())
).head(10)
###Output
_____no_output_____
###Markdown
The last strategy we could try is interpolation with the `interpolate()` method. We specify the `method` parameter with the interpolation strategy to use. There are many options, but we will stick with the default of `'linear'`, which will treat values as evenly spaced and place missing values in the middle of existing ones. We have some missing data, so we will reindex first. Look at January 9th, which we didn't have before—the values for `TMAX`, `TMIN`, and `TOBS` are the average of values the day prior (January 8th) and the day after (January 10th):
###Code
df_deduped.assign(
# make TMAX and TMIN NaN where appropriate
TMAX=lambda x: x.TMAX.replace(5505, np.nan),
TMIN=lambda x: x.TMIN.replace(-40, np.nan),
date=lambda x: pd.to_datetime(x.date)
).set_index('date').reindex(
pd.date_range('2018-01-01', '2018-12-31', freq='D')
).apply(
lambda x: x.interpolate()
).head(10)
###Output
_____no_output_____
###Markdown
Handling duplicate, missing, or invalid data About the dataIn this notebook, we will using daily weather data that was taken from the [National Centers for Environmental Information (NCEI) API](https://www.ncdc.noaa.gov/cdo-web/webservices/v2) and altered to introduce many common problems faced when working with data. *Note: The NCEI is part of the National Oceanic and Atmospheric Administration (NOAA) and, as you can see from the URL for the API, this resource was created when the NCEI was called the NCDC. Should the URL for this resource change in the future, you can search for "NCEI weather API" to find the updated one.* Background on the dataData meanings:- `PRCP`: precipitation in millimeters- `SNOW`: snowfall in millimeters- `SNWD`: snow depth in millimeters- `TMAX`: maximum daily temperature in Celsius- `TMIN`: minimum daily temperature in Celsius- `TOBS`: temperature at time of observation in Celsius- `WESF`: water equivalent of snow in millimetersSome important facts to get our bearings:- According to the National Weather Service, the coldest temperature ever recorded in Central Park was -15°F (-26.1°C) on February 9, 1934: [source](https://www.weather.gov/media/okx/Climate/CentralPark/extremes.pdf) - The temperature of the Sun's photosphere is approximately 5,505°C: [source](https://en.wikipedia.org/wiki/Sun) SetupWe need to import `pandas` and read in the dirty data to get started:
###Code
import pandas as pd
df = pd.read_csv('data/dirty_data.csv')
###Output
_____no_output_____
###Markdown
Finding problematic dataA good first step is to look at some rows:
###Code
df.head()
###Output
_____no_output_____
###Markdown
Looking at summary statistics can reveal strange or missing values:
###Code
df.describe()
###Output
/home/user/github/will-i-amv-books/Hands-On-Data-Analysis-with-Pandas-2nd-edition/env/lib/python3.8/site-packages/numpy/lib/function_base.py:3968: RuntimeWarning: invalid value encountered in multiply
x2 = take(ap, indices_above, axis=axis) * weights_above
###Markdown
The `info()` method can pinpoint missing values and wrong data types:
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 765 entries, 0 to 764
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 date 765 non-null object
1 station 765 non-null object
2 PRCP 765 non-null float64
3 SNOW 577 non-null float64
4 SNWD 577 non-null float64
5 TMAX 765 non-null float64
6 TMIN 765 non-null float64
7 TOBS 398 non-null float64
8 WESF 11 non-null float64
9 inclement_weather 408 non-null object
dtypes: float64(7), object(3)
memory usage: 59.9+ KB
###Markdown
We can use the `isna()`/`isnull()` method of the series to find nulls:
###Code
contain_nulls = df[
df.SNOW.isna() | \
df.SNWD.isna() | \
df.TOBS.isna() | \
df.WESF.isna() | \
df.inclement_weather.isna()
]
contain_nulls.shape[0]
contain_nulls.head(10)
###Output
_____no_output_____
###Markdown
Note that we can't check if we have `NaN` like this:
###Code
df[df.inclement_weather == 'NaN'].shape[0]
###Output
_____no_output_____
###Markdown
This is because it is actually `np.nan`. However, notice this also doesn't work:
###Code
import numpy as np
df[df.inclement_weather == np.nan].shape[0]
###Output
_____no_output_____
###Markdown
We have to use one of the methods discussed earlier for this to work:
###Code
df[df.inclement_weather.isna()].shape[0]
###Output
_____no_output_____
###Markdown
We can find `-inf`/`inf` by comparing to `-np.inf`/`np.inf`:
###Code
df[df.SNWD.isin([-np.inf, np.inf])].shape[0]
###Output
_____no_output_____
###Markdown
Rather than do this for each column, we can write a function that will use a [dictionary comprehension](https://www.python.org/dev/peps/pep-0274/) to check all the columns for us:
###Code
def get_inf_count(df):
"""Find the number of inf/-inf values per column in the dataframe"""
return {
col: df[df[col].isin([np.inf, -np.inf])].shape[0]
for col in df.columns
}
get_inf_count(df)
###Output
_____no_output_____
###Markdown
Before we can decide how to handle the infinite values of snow depth, we should look at the summary statistics for snowfall, which forms a big part in determining the snow depth:
###Code
pd\
.DataFrame({
'np.inf Snow Depth': df[df.SNWD == np.inf].SNOW.describe(),
'-np.inf Snow Depth': df[df.SNWD == -np.inf].SNOW.describe()
})\
.T
###Output
_____no_output_____
###Markdown
Let's now look into the `date` and `station` columns. We saw the `?` for station earlier, so we know that was the other unique value. However, we see that some dates are present 8 times in the data and we only have 324 days meaning we are also missing days:
###Code
df.describe(include='object')
###Output
_____no_output_____
###Markdown
We can use the `duplicated()` method to find duplicate rows:
###Code
df[df.duplicated()].shape[0]
###Output
_____no_output_____
###Markdown
The default for `keep` is `'first'` meaning it won't show the first row that the duplicated data was seen in; we can pass in `False` to see it though:
###Code
df[df.duplicated(keep=False)].shape[0]
###Output
_____no_output_____
###Markdown
We can also specify the columns to use:
###Code
df[df.duplicated(['date', 'station'])].shape[0]
###Output
_____no_output_____
###Markdown
Let's look at a few duplicates. Just in the few values we see here, we know that the top 4 are actually in the data 6 times because by default we aren't seeing their first occurrence:
###Code
df[df.duplicated()].head()
###Output
_____no_output_____
###Markdown
Mitigating Issues Handling duplicated dataSince we know we have NY weather data and noticed we only had two entries for `station`, we may decide to drop the `station` column because we are only interested in the weather data. However, when dealing with duplicate data, we need to think of the ramifications of removing it. Notice we only have data for the `WESF` column when the station is `?`:
###Code
df[df.WESF.notna()]['station'].unique()
###Output
_____no_output_____
###Markdown
If we determine it won't impact our analysis, we can use `drop_duplicates()` to remove them:
###Code
# 1. make the date a datetime
df.date = pd.to_datetime(df.date)
# 2. save this information for later
station_qm_wesf = df[df.station == '?']\
.drop_duplicates('date')\
.set_index('date')\
.WESF
# 3. sort ? to the bottom
df.sort_values(
'station',
ascending=False,
inplace=True
)
# 4. drop duplicates based on the date column keeping the first occurrence
# which will be the valid station if it has data
df_deduped = df.drop_duplicates('date')
# 5. remove the station column because we are done with it
df_deduped = df_deduped\
.drop(columns='station')\
.set_index('date')\
.sort_index()
# 6. take valid station's WESF and fall back on station ? if it is null
df_deduped = df_deduped.assign(
WESF=lambda x: x.WESF.combine_first(station_qm_wesf)
)
df_deduped.shape
###Output
_____no_output_____
###Markdown
Here we used the `combine_first()` method to coalesce the values to the first non-null entry; this means that if we had data from both stations, we would first take the value provided by the named station and if (and only if) that station was null would we take the value from the station named `?`. The following table contains some examples of how this would play out:| station GHCND:USC00280907 | station ? | result of `combine_first()` || :---: | :---: | :---: || 1 | 17 | 1 || 1 | `NaN` | 1 || `NaN` | 17 | 17 || `NaN` | `NaN` | `NaN` |Check out the 4th row—we have `WESF` in the correct spot thanks to the index:
###Code
df_deduped.head()
###Output
_____no_output_____
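###Markdown
To make the coalescing behavior concrete, here is a minimal sketch with two toy Series whose values mirror the table above (the variable names are made up for illustration):
```python
import numpy as np
import pandas as pd

named_station = pd.Series([1, 1, np.nan, np.nan])
question_station = pd.Series([17, np.nan, 17, np.nan])

# keep the first Series' value when present, otherwise fall back on the second
named_station.combine_first(question_station)  # -> 1, 1, 17, NaN
```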
###Markdown
Dealing with nullsWe could drop nulls, replace them with some arbitrary value, or impute them using the surrounding data. Each of these options may have ramifications, so we must choose wisely.We can use `dropna()` to drop rows where any column has a null value. The default options leave us hardly any data:
###Code
df_deduped.dropna().shape
###Output
_____no_output_____
###Markdown
If we pass `how='all'`, we can choose to only drop rows where everything is null, but this removes nothing:
###Code
df_deduped.dropna(how='all').shape
###Output
_____no_output_____
###Markdown
We can use just a subset of columns to determine what to drop with the `subset` argument:
###Code
df_deduped\
.dropna(
how='all',
subset=['inclement_weather', 'SNOW', 'SNWD']
)\
.shape
###Output
_____no_output_____
###Markdown
This can also be performed along columns, and we can also require a certain number of null values before we drop the data:
###Code
df_deduped\
.dropna(
axis='columns',
thresh=df_deduped.shape[0] * 0.75
)\
.columns
###Output
_____no_output_____
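###Markdown
The `thresh` argument is the minimum number of non-null values a row or column must have in order to be kept. A small sketch on a made-up dataframe (not the weather data) showing how the 75% threshold used above plays out:
```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({
    'mostly_full': [1.0, 2.0, 3.0, np.nan],
    'mostly_null': [1.0, np.nan, np.nan, np.nan],
})
# keep only columns with at least 75% non-null values (3 of the 4 rows here)
toy.dropna(axis='columns', thresh=toy.shape[0] * 0.75).columns  # -> ['mostly_full']
```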
###Markdown
We can choose to fill in the null values instead with `fillna()`:
###Code
df_deduped.loc[:,'WESF'].fillna(0, inplace=True)
df_deduped.head()
###Output
_____no_output_____
###Markdown
At this point we have done everything we can without distorting the data. We know that we are missing dates, but if we reindex, we don't know how to fill in the `NaN` data. With the weather data, we can't assume because it snowed one day that it will snow the next or that the temperature will be the same. For this reason, note that the next few examples are for illustrative purposes only—just because we can do something doesn't mean we should.That being said, let's try to address some of the remaining issues with the temperature data. We know that when `TMAX` is the temperature of the Sun, it must be because there was no measured value, so let's replace it with `NaN`. We will also do so for `TMIN`, which currently uses -40°C for its placeholder when we know that the coldest temperature ever recorded in NYC was -15°F (-26.1°C) on February 9, 1934:
###Code
df_deduped = df_deduped\
.assign(
TMAX=lambda x: x.TMAX.replace(5505, np.nan),
TMIN=lambda x: x.TMIN.replace(-40, np.nan),
)
###Output
_____no_output_____
###Markdown
We will also make an assumption that the temperature won't change drastically day-to-day. Note that this is actually a big assumption, but it will allow us to understand how `fillna()` works when we provide a strategy through the `method` parameter. The `fillna()` method gives us 2 options for the `method` parameter:- `'ffill'` to forward-fill- `'bfill'` to back-fill*Note that `'nearest'` is missing because we are not reindexing.*Here, we will use `'ffill'` to show how this works:
###Code
df_deduped\
.assign(
TMAX=lambda x: x.TMAX.fillna(method='ffill'),
TMIN=lambda x: x.TMIN.fillna(method='ffill')
)\
.head()
###Output
_____no_output_____
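###Markdown
For contrast, a minimal sketch on a toy Series (not the weather data) showing how the two strategies differ:
```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, np.nan, 4.0])
s.fillna(method='ffill')  # 1, 1, 1, 4 -- carries the last valid value forward
s.fillna(method='bfill')  # 1, 4, 4, 4 -- pulls the next valid value backward
```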
###Markdown
We can use `np.nan_to_num()` to turn `np.nan` into 0 and `-np.inf`/`np.inf` into large negative or positive finite numbers:
###Code
df_deduped\
.assign(SNWD=lambda x: np.nan_to_num(x.SNWD))\
.head()
###Output
_____no_output_____
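###Markdown
A quick sketch of what `np.nan_to_num()` does to each kind of problematic value (made-up array):
```python
import numpy as np

np.nan_to_num(np.array([np.nan, np.inf, -np.inf, 2.5]))
# -> [0.0, 1.7976931348623157e+308, -1.7976931348623157e+308, 2.5]
```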
###Markdown
Depending on the data we are working with, we can use the `clip()` method as an alternative to `np.nan_to_num()`. The `clip()` method makes it possible to cap values at a specific minimum and/or maximum threshold. Since `SNWD` can't be negative, let's use `clip()` to enforce a lower bound of zero. To show how the upper bound works, let's use the value of `SNOW`:
###Code
df_deduped\
.assign(SNWD=lambda x: x.SNWD.clip(0, x.SNOW))\
.head()
###Output
_____no_output_____
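###Markdown
A minimal sketch of `clip()` with scalar bounds (made-up numbers); in the cell above the upper bound is the `SNOW` column, so each row is capped at its own snowfall value rather than a single number:
```python
import pandas as pd

pd.Series([-5, 2, 50]).clip(0, 10)  # -> 0, 2, 10: values outside [0, 10] are capped at the bounds
```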
###Markdown
We can couple `fillna()` with other types of calculations. Here we replace missing values of `TMAX` with the median of all `TMAX` values, `TMIN` with the median of all `TMIN` values, and `TOBS` to the average of the `TMAX` and `TMIN` values. Since we place `TOBS` last, we have access to the imputed values for `TMIN` and `TMAX` in the calculation:
###Code
df_deduped\
.assign(
TMAX=lambda x: x.TMAX.fillna(x.TMAX.median()),
TMIN=lambda x: x.TMIN.fillna(x.TMIN.median()),
# average of TMAX and TMIN
TOBS=lambda x: x.TOBS.fillna((x.TMAX + x.TMIN) / 2)
)\
.head()
###Output
_____no_output_____
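###Markdown
The reason `TOBS` can see the imputed `TMAX` and `TMIN` is that, in modern versions of pandas, `assign()` evaluates its keyword arguments in order, so later lambdas receive the columns created or updated by earlier ones. A minimal sketch on a toy dataframe with made-up column names:
```python
import pandas as pd

pd.DataFrame({'a': [1, 2]}).assign(
    b=lambda x: x.a * 10,  # evaluated first
    c=lambda x: x.b + 1    # can already use the freshly created b column
)
```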
###Markdown
We can also use `apply()` to run the same calculation across columns. For example, let's fill all missing values with the rolling 7-day median of their values, setting the number of periods required for the calculation to 0 to ensure we don't introduce extra `NaN` values. Rolling calculations will be covered in chapter 4, so this is a preview:
###Code
# Rolling calculations will be covered in chapter 4,
# this is a rolling 7-day median.
# We set min_periods (# of periods required for calculation)
# to 0 so we always get a result
df_deduped\
.apply(lambda x: x.fillna(x.rolling(7, min_periods=0).median()))\
.head(10)
###Output
_____no_output_____
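###Markdown
As a tiny preview of the rolling logic, here is what a 7-period rolling median with `min_periods=0` looks like on a toy Series (not the weather data):
```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0, np.nan, 5.0])
rolling_median = s.rolling(7, min_periods=0).median()  # median of the values seen so far in each window
s.fillna(rolling_median)                               # only the NaN positions get replaced
```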
###Markdown
The last strategy we could try is interpolation with the `interpolate()` method. We specify the `method` parameter with the interpolation strategy to use. There are many options, but we will stick with the default of `'linear'`, which will treat values as evenly spaced and place missing values in the middle of existing ones. We have some missing data, so we will reindex first. Look at January 9th, which we didn't have before—the values for `TMAX`, `TMIN`, and `TOBS` are the average of values the day prior (January 8th) and the day after (January 10th):
###Code
df_deduped\
.reindex(pd.date_range('2018-01-01', '2018-12-31', freq='D'))\
.apply(lambda x: x.interpolate())\
.head(10)
###Output
_____no_output_____
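###Markdown
A minimal sketch of what linear interpolation does to a gap in a toy Series:
```python
import numpy as np
import pandas as pd

pd.Series([0.0, np.nan, 10.0]).interpolate()  # -> 0.0, 5.0, 10.0: the gap is filled with the midpoint
```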
###Markdown
Handling duplicate, missing, or invalid data About the dataIn this notebook, we will be using daily weather data that was taken from the [National Centers for Environmental Information (NCEI) API](https://www.ncdc.noaa.gov/cdo-web/webservices/v2) and altered to introduce many common problems faced when working with data. *Note: The NCEI is part of the National Oceanic and Atmospheric Administration (NOAA) and, as you can see from the URL for the API, this resource was created when the NCEI was called the NCDC. Should the URL for this resource change in the future, you can search for "NCEI weather API" to find the updated one.* Background on the dataData meanings:- `PRCP`: precipitation in millimeters- `SNOW`: snowfall in millimeters- `SNWD`: snow depth in millimeters- `TMAX`: maximum daily temperature in Celsius- `TMIN`: minimum daily temperature in Celsius- `TOBS`: temperature at time of observation in Celsius- `WESF`: water equivalent of snow in millimetersSome important facts to get our bearings:- According to the National Weather Service, the coldest temperature ever recorded in Central Park was -15°F (-26.1°C) on February 9, 1934: [source](https://www.weather.gov/media/okx/Climate/CentralPark/extremes.pdf) - The temperature of the Sun's photosphere is approximately 5,505°C: [source](https://en.wikipedia.org/wiki/Sun) SetupWe need to import `pandas` and read in the dirty data to get started:
###Code
import pandas as pd
df = pd.read_csv('data/dirty_data.csv')
###Output
_____no_output_____
###Markdown
Finding problematic dataA good first step is to look at some rows:
###Code
df.head()
###Output
_____no_output_____
###Markdown
Looking at summary statistics can reveal strange or missing values:
###Code
df.describe()
###Output
/home/dev/SelfEd/Hands-On-Data-Analysis-with-Pandas-2nd-edition/book_env/lib/python3.8/site-packages/numpy/lib/function_base.py:3968: RuntimeWarning: invalid value encountered in multiply
x2 = take(ap, indices_above, axis=axis) * weights_above
###Markdown
The `info()` method can pinpoint missing values and wrong data types:
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 765 entries, 0 to 764
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 date 765 non-null object
1 station 765 non-null object
2 PRCP 765 non-null float64
3 SNOW 577 non-null float64
4 SNWD 577 non-null float64
5 TMAX 765 non-null float64
6 TMIN 765 non-null float64
7 TOBS 398 non-null float64
8 WESF 11 non-null float64
9 inclement_weather 408 non-null object
dtypes: float64(7), object(3)
memory usage: 59.9+ KB
###Markdown
We can use the `isna()`/`isnull()` method of the series to find nulls:
###Code
contain_nulls = df[
df.SNOW.isna() | df.SNWD.isna() | df.TOBS.isna()
| df.WESF.isna() | df.inclement_weather.isna()
]
contain_nulls.shape[0]
contain_nulls.head(10)
###Output
_____no_output_____
###Markdown
Note that we can't check if we have `NaN` like this:
###Code
df[df.inclement_weather == 'NaN'].shape[0]
###Output
_____no_output_____
###Markdown
This is because it is actually `np.nan`. However, notice this also doesn't work:
###Code
import numpy as np
df[df.inclement_weather == np.nan].shape[0]
###Output
_____no_output_____
###Markdown
We have to use one of the methods discussed earlier for this to work:
###Code
df[df.inclement_weather.isna()].shape[0]
###Output
_____no_output_____
###Markdown
We can find `-inf`/`inf` by comparing to `-np.inf`/`np.inf`:
###Code
df[df.SNWD.isin([-np.inf, np.inf])].shape[0]
###Output
_____no_output_____
###Markdown
Rather than do this for each column, we can write a function that will use a [dictionary comprehension](https://www.python.org/dev/peps/pep-0274/) to check all the columns for us:
###Code
def get_inf_count(df):
"""Find the number of inf/-inf values per column in the dataframe"""
return {
col: df[df[col].isin([np.inf, -np.inf])].shape[0] for col in df.columns
}
get_inf_count(df)
###Output
_____no_output_____
###Markdown
Before we can decide how to handle the infinite values of snow depth, we should look at the summary statistics for snowfall, which forms a big part in determining the snow depth:
###Code
pd.DataFrame({
'np.inf Snow Depth': df[df.SNWD == np.inf].SNOW.describe(),
'-np.inf Snow Depth': df[df.SNWD == -np.inf].SNOW.describe()
}).T
###Output
_____no_output_____
###Markdown
Let's now look into the `date` and `station` columns. We saw the `?` for station earlier, so we know that was the other unique value. However, we see that some dates are present 8 times in the data and we only have 324 days meaning we are also missing days:
###Code
df.describe(include='object')
###Output
_____no_output_____
###Markdown
We can use the `duplicated()` method to find duplicate rows:
###Code
df[df.duplicated()].shape[0]
###Output
_____no_output_____
###Markdown
The default for `keep` is `'first'` meaning it won't show the first row that the duplicated data was seen in; we can pass in `False` to see it though:
###Code
df[df.duplicated(keep=False)].shape[0]
###Output
_____no_output_____
###Markdown
We can also specify the columns to use:
###Code
df[df.duplicated(['date', 'station'])].shape[0]
###Output
_____no_output_____
###Markdown
Let's look at a few duplicates. Just in the few values we see here, we know that the top 4 are actually in the data 6 times because by default we aren't seeing their first occurrence:
###Code
df[df.duplicated()].head()
###Output
_____no_output_____
###Markdown
Mitigating Issues Handling duplicated dataSince we know we have NY weather data and noticed we only had two entries for `station`, we may decide to drop the `station` column because we are only interested in the weather data. However, when dealing with duplicate data, we need to think of the ramifications of removing it. Notice we only have data for the `WESF` column when the station is `?`:
###Code
df[df.WESF.notna()].station.unique()
###Output
_____no_output_____
###Markdown
If we determine it won't impact our analysis, we can use `drop_duplicates()` to remove them:
###Code
# 1. make the date a datetime
df.date = pd.to_datetime(df.date)
# 2. save this information for later
station_qm_wesf = df[df.station == '?'].drop_duplicates('date').set_index('date').WESF
# 3. sort ? to the bottom
df.sort_values('station', ascending=False, inplace=True)
# 4. drop duplicates based on the date column keeping the first occurrence
# which will be the valid station if it has data
df_deduped = df.drop_duplicates('date')
# 5. remove the station column because we are done with it
df_deduped = df_deduped.drop(columns='station').set_index('date').sort_index()
# 6. take valid station's WESF and fall back on station ? if it is null
df_deduped = df_deduped.assign(
WESF=lambda x: x.WESF.combine_first(station_qm_wesf)
)
df_deduped.shape
###Output
_____no_output_____
###Markdown
Here we used the `combine_first()` method to coalesce the values to the first non-null entry; this means that if we had data from both stations, we would first take the value provided by the named station and if (and only if) that station was null would we take the value from the station named `?`. The following table contains some examples of how this would play out:| station GHCND:USC00280907 | station ? | result of `combine_first()` || :---: | :---: | :---: || 1 | 17 | 1 || 1 | `NaN` | 1 || `NaN` | 17 | 17 || `NaN` | `NaN` | `NaN` |Check out the 4th row—we have `WESF` in the correct spot thanks to the index:
###Code
df_deduped.head()
###Output
_____no_output_____
###Markdown
Dealing with nullsWe could drop nulls, replace them with some arbitrary value, or impute them using the surrounding data. Each of these options may have ramifications, so we must choose wisely.We can use `dropna()` to drop rows where any column has a null value. The default options leave us hardly any data:
###Code
df_deduped.dropna().shape
###Output
_____no_output_____
###Markdown
If we pass `how='all'`, we can choose to only drop rows where everything is null, but this removes nothing:
###Code
df_deduped.dropna(how='all').shape
###Output
_____no_output_____
###Markdown
We can use just a subset of columns to determine what to drop with the `subset` argument:
###Code
df_deduped.dropna(
how='all', subset=['inclement_weather', 'SNOW', 'SNWD']
).shape
###Output
_____no_output_____
###Markdown
This can also be performed along columns, and we can also require a certain number of null values before we drop the data:
###Code
df_deduped.dropna(axis='columns', thresh=df_deduped.shape[0] * .75).columns
###Output
_____no_output_____
###Markdown
We can choose to fill in the null values instead with `fillna()`:
###Code
df_deduped.loc[:,'WESF'].fillna(0, inplace=True)
df_deduped.head()
###Output
_____no_output_____
###Markdown
At this point we have done everything we can without distorting the data. We know that we are missing dates, but if we reindex, we don't know how to fill in the `NaN` data. With the weather data, we can't assume because it snowed one day that it will snow the next or that the temperature will be the same. For this reason, note that the next few examples are for illustrative purposes only—just because we can do something doesn't mean we should.That being said, let's try to address some of the remaining issues with the temperature data. We know that when `TMAX` is the temperature of the Sun, it must be because there was no measured value, so let's replace it with `NaN`. We will also do so for `TMIN`, which currently uses -40°C for its placeholder when we know that the coldest temperature ever recorded in NYC was -15°F (-26.1°C) on February 9, 1934:
###Code
df_deduped = df_deduped.assign(
TMAX=lambda x: x.TMAX.replace(5505, np.nan),
TMIN=lambda x: x.TMIN.replace(-40, np.nan),
)
###Output
_____no_output_____
###Markdown
We will also make an assumption that the temperature won't change drastically day-to-day. Note that this is actually a big assumption, but it will allow us to understand how `fillna()` works when we provide a strategy through the `method` parameter. The `fillna()` method gives us 2 options for the `method` parameter:- `'ffill'` to forward-fill- `'bfill'` to back-fill*Note that `'nearest'` is missing because we are not reindexing.*Here, we will use `'ffill'` to show how this works:
###Code
df_deduped.assign(
TMAX=lambda x: x.TMAX.fillna(method='ffill'),
TMIN=lambda x: x.TMIN.fillna(method='ffill')
).head()
###Output
_____no_output_____
###Markdown
We can use `np.nan_to_num()` to turn `np.nan` into 0 and `-np.inf`/`np.inf` into large negative or positive finite numbers:
###Code
df_deduped.assign(
SNWD=lambda x: np.nan_to_num(x.SNWD)
).head()
###Output
_____no_output_____
###Markdown
Depending on the data we are working with, we can use the `clip()` method as an alternative to `np.nan_to_num()`. The `clip()` method makes it possible to cap values at a specific minimum and/or maximum threshold. Since `SNWD` can't be negative, let's use `clip()` to enforce a lower bound of zero. To show how the upper bound works, let's use the value of `SNOW`:
###Code
df_deduped.assign(
SNWD=lambda x: x.SNWD.clip(0, x.SNOW)
).head()
###Output
_____no_output_____
###Markdown
We can couple `fillna()` with other types of calculations. Here we replace missing values of `TMAX` with the median of all `TMAX` values, `TMIN` with the median of all `TMIN` values, and `TOBS` to the average of the `TMAX` and `TMIN` values. Since we place `TOBS` last, we have access to the imputed values for `TMIN` and `TMAX` in the calculation:
###Code
df_deduped.assign(
TMAX=lambda x: x.TMAX.fillna(x.TMAX.median()),
TMIN=lambda x: x.TMIN.fillna(x.TMIN.median()),
# average of TMAX and TMIN
TOBS=lambda x: x.TOBS.fillna((x.TMAX + x.TMIN) / 2)
).head()
###Output
_____no_output_____
###Markdown
We can also use `apply()` to run the same calculation across columns. For example, let's fill all missing values with the rolling 7-day median of their values, setting the number of periods required for the calculation to 0 to ensure we don't introduce extra `NaN` values. Rolling calculations will be covered in chapter 4, so this is a preview:
###Code
df_deduped.apply(
# rolling calculations will be covered in chapter 4, this is a rolling 7-day median
# we set min_periods (# of periods required for calculation) to 0 so we always get a result
lambda x: x.fillna(x.rolling(7, min_periods=0).median())
).head(10)
###Output
_____no_output_____
###Markdown
The last strategy we could try is interpolation with the `interpolate()` method. We specify the `method` parameter with the interpolation strategy to use. There are many options, but we will stick with the default of `'linear'`, which will treat values as evenly spaced and place missing values in the middle of existing ones. We have some missing data, so we will reindex first. Look at January 9th, which we didn't have before—the values for `TMAX`, `TMIN`, and `TOBS` are the average of values the day prior (January 8th) and the day after (January 10th):
###Code
df_deduped\
.reindex(pd.date_range('2018-01-01', '2018-12-31', freq='D'))\
.apply(lambda x: x.interpolate())\
.head(10)
###Output
_____no_output_____
###Markdown
Handling duplicate, missing, or invalid data About the dataIn this notebook, we will be using daily weather data that was taken from the [National Centers for Environmental Information (NCEI) API](https://www.ncdc.noaa.gov/cdo-web/webservices/v2) and altered to introduce many common problems faced when working with data. *Note: The NCEI is part of the National Oceanic and Atmospheric Administration (NOAA) and, as you can see from the URL for the API, this resource was created when the NCEI was called the NCDC. Should the URL for this resource change in the future, you can search for the NCEI weather API to find the updated one.* Background on the dataData meanings:- `PRCP`: precipitation in millimeters- `SNOW`: snowfall in millimeters- `SNWD`: snow depth in millimeters- `TMAX`: maximum daily temperature in Celsius- `TMIN`: minimum daily temperature in Celsius- `TOBS`: temperature at time of observation in Celsius- `WESF`: water equivalent of snow in millimetersSome important facts to get our bearings:- According to the National Weather Service, the coldest temperature ever recorded in Central Park was -15°F (-26.1°C) on February 9, 1934: [source](https://www.weather.gov/media/okx/Climate/CentralPark/extremes.pdf) - The temperature of the Sun's photosphere is approximately 5,505°C: [source](https://en.wikipedia.org/wiki/Sun) SetupWe need to import `pandas` and read in the long-format data to get started:
###Code
import pandas as pd
df = pd.read_csv('data/dirty_data.csv')
###Output
_____no_output_____
###Markdown
Finding problematic dataA good first step is to look at some rows:
###Code
df.head()
###Output
_____no_output_____
###Markdown
Looking at summary statistics can reveal strange or missing values:
###Code
df.describe()
###Output
c:\users\molinstefanie\packt\venv\lib\site-packages\numpy\lib\function_base.py:3942: RuntimeWarning: invalid value encountered in multiply
x2 = take(ap, indices_above, axis=axis) * weights_above
###Markdown
The `info()` method can pinpoint missing values and wrong data types:
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 765 entries, 0 to 764
Data columns (total 10 columns):
date 765 non-null object
station 765 non-null object
PRCP 765 non-null float64
SNOW 577 non-null float64
SNWD 577 non-null float64
TMAX 765 non-null float64
TMIN 765 non-null float64
TOBS 398 non-null float64
WESF 11 non-null float64
inclement_weather 408 non-null object
dtypes: float64(7), object(3)
memory usage: 50.8+ KB
###Markdown
We can use `pd.isnull()`/`pd.isna()` or the `isna()`/`isnull()` method of the series to find nulls:
###Code
contain_nulls = df[
df.SNOW.isnull() | df.SNWD.isna()\
| pd.isnull(df.TOBS) | pd.isna(df.WESF)\
| df.inclement_weather.isna()
]
contain_nulls.shape[0]
contain_nulls.head(10)
###Output
_____no_output_____
###Markdown
Note that we can't check if we have `NaN` like this:
###Code
df[df.inclement_weather == 'NaN'].shape[0]
###Output
_____no_output_____
###Markdown
This is because it is actually `np.nan`. However, notice this also doesn't work:
###Code
import numpy as np
df[df.inclement_weather == np.nan].shape[0]
###Output
_____no_output_____
###Markdown
We have to use one of the methods discussed earlier for this to work:
###Code
df[df.inclement_weather.isna()].shape[0]
###Output
_____no_output_____
###Markdown
We can find `-inf`/`inf` by comparing to `-np.inf`/`np.inf`:
###Code
df[df.SNWD.isin([-np.inf, np.inf])].shape[0]
###Output
_____no_output_____
###Markdown
Rather than do this for each column, we can write a function that will use a [dictionary comprehension](https://www.python.org/dev/peps/pep-0274/) to check all the columns for us:
###Code
import numpy as np
def get_inf_count(df):
"""Find the number of inf/-inf values per column in the dataframe"""
return {
col : df[df[col].isin([np.inf, -np.inf])].shape[0] for col in df.columns
}
get_inf_count(df)
###Output
_____no_output_____
###Markdown
Before we can decide how to handle the infinite values of snow depth, we should look at the summary statistics for snowfall, which forms a big part in determining the snow depth:
###Code
pd.DataFrame({
'np.inf Snow Depth': df[df.SNWD == np.inf].SNOW.describe(),
'-np.inf Snow Depth': df[df.SNWD == -np.inf].SNOW.describe()
}).T
###Output
_____no_output_____
###Markdown
Let's now look into the `date` and `station` columns. We saw the `?` for station earlier, so we know that was the other unique value. However, we see that some dates are present 8 times in the data and we only have 324 days meaning we are also missing days:
###Code
df.describe(include='object')
###Output
_____no_output_____
###Markdown
We can use the `duplicated()` method to find duplicate rows:
###Code
df[df.duplicated()].shape[0]
###Output
_____no_output_____
###Markdown
The default for `keep` is `'first'` meaning it won't show the first row that the duplicated data was seen in; we can pass in `False` to see it though:
###Code
df[df.duplicated(keep=False)].shape[0]
###Output
_____no_output_____
###Markdown
We can also specify the columns to use:
###Code
df[df.duplicated(['date', 'station'])].shape[0]
###Output
_____no_output_____
###Markdown
Let's look at a few duplicates. Just in the few values we see here, we know that the top 4 are actually in the data 6 times because by default we aren't seeing their first occurrence:
###Code
df[df.duplicated()].head()
###Output
_____no_output_____
###Markdown
Mitigating Issues Handling duplicated dataSince we know we have NY weather data and noticed we only had two entries for `station`, we may decide to drop the `station` column because we are only interested in the weather data. However, when dealing with duplicate data, we need to think of the ramifications of removing it. Notice we only have data for the `WESF` column when the station is `?`:
###Code
df[df.WESF.notna()].station.unique()
###Output
_____no_output_____
###Markdown
If we determine it won't impact our analysis, we can use `drop_duplicates()` to remove them:
###Code
# save this information for later
station_qm_wesf = df[df.station == '?'].WESF
# sort ? to the bottom
df.sort_values('station', ascending=False, inplace=True)
# drop duplicates based on the date column keeping the first occurrence
# which will be the valid station if it has data
df_deduped = df.drop_duplicates('date').drop(
# remove the station column because we are done with it
# and WESF because we need to replace it later
columns=['station', 'WESF']
).sort_values('date').assign( # sort by the date
# add back the WESF column which will be properly matched because of the index
WESF=station_qm_wesf
)
df_deduped.shape
###Output
_____no_output_____
###Markdown
Check out the 4th row, we have `WESF` in the correct spot thanks to the index:
###Code
df_deduped.head()
###Output
_____no_output_____
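###Markdown
The alignment here comes from `assign()` matching on the index rather than on position: a minimal sketch with made-up labels showing that a Series assigned to a dataframe lands wherever its index labels match, with `NaN` elsewhere:
```python
import pandas as pd

frame = pd.DataFrame({'value': [10, 20, 30]}, index=['a', 'b', 'c'])
extra = pd.Series([99], index=['b'])   # only has a value for label 'b'
frame.assign(extra_col=extra)          # 'b' gets 99; 'a' and 'c' get NaN
```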
###Markdown
Dealing with nullsWe could drop nulls, replace them with some arbitrary value, or impute them using the surrounding data. Each of these options may have ramifications, so we must choose wisely.We can use `dropna()` to drop rows where any column has a null value. The default options leave us without data:
###Code
df_deduped.dropna().shape
###Output
_____no_output_____
###Markdown
If we pass `how='all'`, we can choose to only drop rows where everything is null, but this removes nothing:
###Code
df_deduped.dropna(how='all').shape
###Output
_____no_output_____
###Markdown
We can use just a subset of columns to determine what to drop with the `subset` argument:
###Code
df_deduped.dropna(
how='all', subset=['inclement_weather', 'SNOW', 'SNWD']
).shape
###Output
_____no_output_____
###Markdown
This can also be performed along columns, and we can also require a certain number of null values before we drop the data:
###Code
df_deduped.dropna(axis='columns', thresh=df_deduped.shape[0]*.75).columns
###Output
_____no_output_____
###Markdown
We can choose to fill in the null values instead with `fillna()`:
###Code
df_deduped.loc[:,'WESF'].fillna(0, inplace=True)
df_deduped.head()
###Output
_____no_output_____
###Markdown
At this point we have done everything we can without distorting the data. We know that we are missing dates, but if we reindex, we don't know how to fill in the NaN data. With the weather data, we can't assume because it snowed one day that it will snow the next or that the temperature will be the same. For this reason, note that the next few examples are for illustrative purposes only—just because we can do something doesn't mean we should.That being said, let's try to address some of the remaining issues with the temperature data. We know that when `TMAX` is the temperature of the Sun, it must be because there was no measured value, so let's replace it with `NaN` and then we will make an assumption that the temperature won't change drastically day-to-day. Note that this is actually a big assumption, but it will allow us to understand how `fillna()` works when we provide a strategy through the `method` parameter. We will also do this for `TMIN`, which currently uses -40°C for its placeholder when we know that the coldest temperature ever recorded in NYC was -15°F (-26.1°C) on February 9, 1934.The `fillna()` method gives us 2 options for the `method` parameter:- 'ffill' to forward-fill- 'bfill' to back-fill*Note that `'nearest'` is missing because we are not reindexing.*Here, we will use `'ffill'` to show how this works:
###Code
df_deduped.assign(
TMAX=lambda x: x.TMAX.replace(5505, np.nan).fillna(method='ffill'),
TMIN=lambda x: x.TMIN.replace(-40, np.nan).fillna(method='ffill')
).head()
###Output
_____no_output_____
###Markdown
We can use `np.nan_to_num()` to turn `np.nan` into 0 and `-np.inf`/`np.inf` into large negative or positive finite numbers:
###Code
df_deduped.assign(
SNWD=lambda x: np.nan_to_num(x.SNWD)
).head()
###Output
_____no_output_____
###Markdown
We can couple `fillna()` with other types of calculations for interpolation. Here we replace missing values of `TMAX` with the median of all `TMAX` values, `TMIN` with the median of all `TMIN` values, and `TOBS` to the average of the `TMAX` and `TMIN` values. Since we place `TOBS` last, we have access to the imputed values for `TMIN` and `TMAX` in the calculation:
###Code
df_deduped.assign(
    TMAX=lambda x: x.TMAX.replace(5505, np.nan).fillna(x.TMAX.median()),
TMIN=lambda x: x.TMIN.replace(-40, np.nan).fillna(x.TMIN.median()),
# average of TMAX and TMIN
TOBS=lambda x: x.TOBS.fillna((x.TMAX + x.TMIN) / 2)
).head()
###Output
_____no_output_____
###Markdown
We can also use `apply()` to run the same calculation across columns. For example, let's fill all missing values with the rolling 7-day median of their values, setting the number of periods required for the calculation to 0 to ensure we don't introduce extra `NaN` values. (Rolling calculations will be covered in [chapter 4](https://github.com/stefmolin/Hands-On-Data-Analysis-with-Pandas/tree/master/ch_04).) We need to set the `date` column as the index so `apply()` doesn't try to take the rolling 7-day median of the date:
###Code
df_deduped.assign(
# make TMAX and TMIN NaN where appropriate
TMAX=lambda x: x.TMAX.replace(5505, np.nan),
TMIN=lambda x: x.TMIN.replace(-40, np.nan)
).set_index('date').apply(
# rolling calculations will be covered in chapter 4, this is a rolling 7 day median
# we set min_periods (# of periods required for calculation) to 0 so we always get a result
lambda x: x.fillna(x.rolling(7, min_periods=0).median())
).head(10)
###Output
_____no_output_____
###Markdown
The last strategy we could try is interpolation with the `interpolate()` method. We specify the `method` parameter with the interpolation strategy to use. There are many options, but we will stick with the default of `'linear'`, which will treat values as evenly spaced and place missing values in the middle of existing ones. We have some missing data, so we will reindex first. Look at January 9th, which we didn't have before—the values for `TMAX`, `TMIN`, and `TOBS` are the average of values the day prior (January 8th) and the day after (January 10th):
###Code
df_deduped.assign(
# make TMAX and TMIN NaN where appropriate
TMAX=lambda x: x.TMAX.replace(5505, np.nan),
TMIN=lambda x: x.TMIN.replace(-40, np.nan),
date=lambda x: pd.to_datetime(x.date)
).set_index('date').reindex(
pd.date_range('2018-01-01', '2018-12-31', freq='D')
).apply(
lambda x: x.interpolate()
).head(10)
###Output
_____no_output_____
###Markdown
Handling duplicate, missing, or invalid data About the dataIn this notebook, we will be using daily weather data that was taken from the [National Centers for Environmental Information (NCEI) API](https://www.ncdc.noaa.gov/cdo-web/webservices/v2) and altered to introduce many common problems faced when working with data. *Note: The NCEI is part of the National Oceanic and Atmospheric Administration (NOAA) and, as you can see from the URL for the API, this resource was created when the NCEI was called the NCDC. Should the URL for this resource change in the future, you can search for "NCEI weather API" to find the updated one.* Background on the dataData meanings:- `PRCP`: precipitation in millimeters- `SNOW`: snowfall in millimeters- `SNWD`: snow depth in millimeters- `TMAX`: maximum daily temperature in Celsius- `TMIN`: minimum daily temperature in Celsius- `TOBS`: temperature at time of observation in Celsius- `WESF`: water equivalent of snow in millimetersSome important facts to get our bearings:- According to the National Weather Service, the coldest temperature ever recorded in Central Park was -15°F (-26.1°C) on February 9, 1934: [source](https://www.weather.gov/media/okx/Climate/CentralPark/extremes.pdf) - The temperature of the Sun's photosphere is approximately 5,505°C: [source](https://en.wikipedia.org/wiki/Sun) SetupWe need to import `pandas` and read in the dirty data to get started:
###Code
import pandas as pd
df = pd.read_csv('data/dirty_data.csv')
###Output
_____no_output_____
###Markdown
Finding problematic dataA good first step is to look at some rows:
###Code
df.head()
###Output
_____no_output_____
###Markdown
Looking at summary statistics can reveal strange or missing values:
###Code
df.describe()
###Output
/home/stefaniemolin/book_env/lib/python3.7/site-packages/numpy/lib/function_base.py:3968: RuntimeWarning: invalid value encountered in multiply
x2 = take(ap, indices_above, axis=axis) * weights_above
###Markdown
The `info()` method can pinpoint missing values and wrong data types:
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 765 entries, 0 to 764
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 date 765 non-null object
1 station 765 non-null object
2 PRCP 765 non-null float64
3 SNOW 577 non-null float64
4 SNWD 577 non-null float64
5 TMAX 765 non-null float64
6 TMIN 765 non-null float64
7 TOBS 398 non-null float64
8 WESF 11 non-null float64
9 inclement_weather 408 non-null object
dtypes: float64(7), object(3)
memory usage: 59.9+ KB
###Markdown
We can use the `isna()`/`isnull()` method of the series to find nulls:
###Code
contain_nulls = df[
df.SNOW.isna() | df.SNWD.isna() | df.TOBS.isna()
| df.WESF.isna() | df.inclement_weather.isna()
]
contain_nulls.shape[0]
contain_nulls.head(10)
###Output
_____no_output_____
###Markdown
Note that we can't check if we have `NaN` like this:
###Code
df[df.inclement_weather == 'NaN'].shape[0]
###Output
_____no_output_____
###Markdown
This is because it is actually `np.nan`. However, notice this also doesn't work:
###Code
import numpy as np
df[df.inclement_weather == np.nan].shape[0]
###Output
_____no_output_____
###Markdown
We have to use one of the methods discussed earlier for this to work:
###Code
df[df.inclement_weather.isna()].shape[0]
###Output
_____no_output_____
###Markdown
We can find `-inf`/`inf` by comparing to `-np.inf`/`np.inf`:
###Code
df[df.SNWD.isin([-np.inf, np.inf])].shape[0]
###Output
_____no_output_____
###Markdown
Rather than do this for each column, we can write a function that will use a [dictionary comprehension](https://www.python.org/dev/peps/pep-0274/) to check all the columns for us:
###Code
def get_inf_count(df):
"""Find the number of inf/-inf values per column in the dataframe"""
return {
col: df[df[col].isin([np.inf, -np.inf])].shape[0] for col in df.columns
}
get_inf_count(df)
###Output
_____no_output_____
###Markdown
Before we can decide how to handle the infinite values of snow depth, we should look at the summary statistics for snowfall, which forms a big part in determining the snow depth:
###Code
pd.DataFrame({
'np.inf Snow Depth': df[df.SNWD == np.inf].SNOW.describe(),
'-np.inf Snow Depth': df[df.SNWD == -np.inf].SNOW.describe()
}).T
###Output
_____no_output_____
###Markdown
Let's now look into the `date` and `station` columns. We saw the `?` for station earlier, so we know that was the other unique value. However, we see that some dates are present 8 times in the data and we only have 324 days meaning we are also missing days:
###Code
df.describe(include='object')
###Output
_____no_output_____
###Markdown
We can use the `duplicated()` method to find duplicate rows:
###Code
df[df.duplicated()].shape[0]
###Output
_____no_output_____
###Markdown
The default for `keep` is `'first'` meaning it won't show the first row that the duplicated data was seen in; we can pass in `False` to see it though:
###Code
df[df.duplicated(keep=False)].shape[0]
###Output
_____no_output_____
###Markdown
We can also specify the columns to use:
###Code
df[df.duplicated(['date', 'station'])].shape[0]
###Output
_____no_output_____
###Markdown
Let's look at a few duplicates. Just in the few values we see here, we know that the top 4 are actually in the data 6 times because by default we aren't seeing their first occurrence:
###Code
df[df.duplicated()].head()
###Output
_____no_output_____
###Markdown
Mitigating Issues Handling duplicated dataSince we know we have NY weather data and noticed we only had two entries for `station`, we may decide to drop the `station` column because we are only interested in the weather data. However, when dealing with duplicate data, we need to think of the ramifications of removing it. Notice we only have data for the `WESF` column when the station is `?`:
###Code
df[df.WESF.notna()].station.unique()
###Output
_____no_output_____
###Markdown
If we determine it won't impact our analysis, we can use `drop_duplicates()` to remove them:
###Code
# 1. make the date a datetime
df.date = pd.to_datetime(df.date)
# 2. save this information for later
station_qm_wesf = df[df.station == '?'].drop_duplicates('date').set_index('date').WESF
# the WESF values from the rows where station is '?'
# 3. sort ? to the bottom
df.sort_values('station', ascending=False, inplace=True)
# 4. drop duplicates based on the date column keeping the first occurrence
# which will be the valid station if it has data
df_deduped = df.drop_duplicates('date')
# 5. remove the station column because we are done with it
df_deduped = df_deduped.drop(columns='station').set_index('date').sort_index()
# the df with the station column removed
# 6. take valid station's WESF and fall back on station ? if it is null
df_deduped = df_deduped.assign(
WESF=lambda x: x.WESF.combine_first(station_qm_wesf)
)
# where df_deduped's WESF is missing, fill it with the WESF value from the station '?' rows
df_deduped.shape
###Output
_____no_output_____
###Markdown
Here we used the `combine_first()` method to coalesce the values to the first non-null entry; this means that if we had data from both stations, we would first take the value provided by the named station and if (and only if) that station was null would we take the value from the station named `?`. The following table contains some examples of how this would play out:| station GHCND:USC00280907 | station ? | result of `combine_first()` || :---: | :---: | :---: || 1 | 17 | 1 || 1 | `NaN` | 1 || `NaN` | 17 | 17 || `NaN` | `NaN` | `NaN` |Check out the 4th row—we have `WESF` in the correct spot thanks to the index:
###Code
df_deduped.head()
###Output
_____no_output_____
###Markdown
Dealing with nullsWe could drop nulls, replace them with some arbitrary value, or impute them using the surrounding data. Each of these options may have ramifications, so we must choose wisely.We can use `dropna()` to drop rows where any column has a null value. The default options leave us hardly any data:
###Code
df_deduped.dropna().shape
###Output
_____no_output_____
###Markdown
If we pass `how='all'`, we can choose to only drop rows where everything is null, but this removes nothing:
###Code
df_deduped.dropna(how='all').shape
# how='all' drops only rows where every value is NA
# the default is how='any'
###Output
_____no_output_____
###Markdown
We can use just a subset of columns to determine what to drop with the `subset` argument:
###Code
df_deduped.dropna(
how='all', subset=['inclement_weather', 'SNOW', 'SNWD']
).shape
# NAs in columns outside the subset do not cause rows to be dropped
###Output
_____no_output_____
###Markdown
This can also be performed along columns, and we can also require a certain number of null values before we drop the data:
###Code
df_deduped.dropna(axis='columns', thresh=df_deduped.shape[0] * .75).columns
# same as axis=1
# below are df_deduped's columns after dropping any column with fewer than 75% non-null values
# only the WESF column is removed
###Output
_____no_output_____
###Markdown
We can choose to fill in the null values instead with `fillna()`:
###Code
df_deduped.loc[:,'WESF'].fillna(0, inplace=True)
df_deduped.head()
###Output
_____no_output_____
###Markdown
At this point we have done everything we can without distorting the data. We know that we are missing dates, but if we reindex, we don't know how to fill in the `NaN` data. With the weather data, we can't assume because it snowed one day that it will snow the next or that the temperature will be the same. For this reason, note that the next few examples are for illustrative purposes only—just because we can do something doesn't mean we should.That being said, let's try to address some of the remaining issues with the temperature data. We know that when `TMAX` is the temperature of the Sun, it must be because there was no measured value, so let's replace it with `NaN`. We will also do so for `TMIN`, which currently uses -40°C for its placeholder when we know that the coldest temperature ever recorded in NYC was -15°F (-26.1°C) on February 9, 1934:
###Code
df_deduped = df_deduped.assign(
TMAX=lambda x: x.TMAX.replace(5505, np.nan),
TMIN=lambda x: x.TMIN.replace(-40, np.nan),
)
###Output
_____no_output_____
###Markdown
We will also make an assumption that the temperature won't change drastically day-to-day. Note that this is actually a big assumption, but it will allow us to understand how `fillna()` works when we provide a strategy through the `method` parameter. The `fillna()` method gives us 2 options for the `method` parameter:- `'ffill'` to forward-fill- `'bfill'` to back-fill*Note that `'nearest'` is missing because we are not reindexing.*Here, we will use `'ffill'` to show how this works:
###Code
df_deduped.assign(
TMAX=lambda x: x.TMAX.fillna(method='ffill'),
TMIN=lambda x: x.TMIN.fillna(method='ffill')
).head()
###Output
_____no_output_____
###Markdown
We can use `np.nan_to_num()` to turn `np.nan` into 0 and `-np.inf`/`np.inf` into large negative or positive finite numbers:
###Code
df_deduped.assign(
SNWD=lambda x: np.nan_to_num(x.SNWD)
).head()
# converts them to the largest/smallest finite values that can be handled
###Output
_____no_output_____
###Markdown
Depending on the data we are working with, we can use the `clip()` method as an alternative to `np.nan_to_num()`. The `clip()` method makes it possible to cap values at a specific minimum and/or maximum threshold. Since `SNWD` can't be negative, let's use `clip()` to enforce a lower bound of zero. To show how the upper bound works, let's use the value of `SNOW`:
###Code
df_deduped.assign(
SNWD=lambda x: x.SNWD.clip(0, x.SNOW)
).head()
###Output
_____no_output_____
###Markdown
We can couple `fillna()` with other types of calculations. Here we replace missing values of `TMAX` with the median of all `TMAX` values, `TMIN` with the median of all `TMIN` values, and `TOBS` to the average of the `TMAX` and `TMIN` values. Since we place `TOBS` last, we have access to the imputed values for `TMIN` and `TMAX` in the calculation:
###Code
df_deduped.assign(
TMAX=lambda x: x.TMAX.fillna(x.TMAX.median()),
TMIN=lambda x: x.TMIN.fillna(x.TMIN.median()),
# average of TMAX and TMIN
TOBS=lambda x: x.TOBS.fillna((x.TMAX + x.TMIN) / 2)
).head()
###Output
_____no_output_____
###Markdown
We can also use `apply()` to run the same calculation across columns. For example, let's fill all missing values with the rolling 7-day median of their values, setting the number of periods required for the calculation to 0 to ensure we don't introduce extra `NaN` values. Rolling calculations will be covered in chapter 4, so this is a preview:
###Code
df_deduped.apply(
# rolling calculations will be covered in chapter 4, this is a rolling 7-day median
# we set min_periods (# of periods required for calculation) to 0 so we always get a result
lambda x: x.fillna(x.rolling(7, min_periods=0).median())
).head(10)
###Output
_____no_output_____
###Markdown
The last strategy we could try is interpolation with the `interpolate()` method. We specify the `method` parameter with the interpolation strategy to use. There are many options, but we will stick with the default of `'linear'`, which will treat values as evenly spaced and place missing values in the middle of existing ones. We have some missing data, so we will reindex first. Look at January 9th, which we didn't have before—the values for `TMAX`, `TMIN`, and `TOBS` are the average of values the day prior (January 8th) and the day after (January 10th):
###Code
df_deduped\
.reindex(pd.date_range('2018-01-01', '2018-12-31', freq='D'))\
.apply(lambda x: x.interpolate())\
.head(10)
###Output
_____no_output_____
###Markdown
Handling duplicate, missing, or invalid data About the dataIn this notebook, we will be using daily weather data that was taken from the [National Centers for Environmental Information (NCEI) API](https://www.ncdc.noaa.gov/cdo-web/webservices/v2) and altered to introduce many common problems faced when working with data. *Note: The NCEI is part of the National Oceanic and Atmospheric Administration (NOAA) and, as you can see from the URL for the API, this resource was created when the NCEI was called the NCDC. Should the URL for this resource change in the future, you can search for "NCEI weather API" to find the updated one.* Background on the dataData meanings:- `PRCP`: precipitation in millimeters- `SNOW`: snowfall in millimeters- `SNWD`: snow depth in millimeters- `TMAX`: maximum daily temperature in Celsius- `TMIN`: minimum daily temperature in Celsius- `TOBS`: temperature at time of observation in Celsius- `WESF`: water equivalent of snow in millimetersSome important facts to get our bearings:- According to the National Weather Service, the coldest temperature ever recorded in Central Park was -15°F (-26.1°C) on February 9, 1934: [source](https://www.weather.gov/media/okx/Climate/CentralPark/extremes.pdf) - The temperature of the Sun's photosphere is approximately 5,505°C: [source](https://en.wikipedia.org/wiki/Sun) SetupWe need to import `pandas` and read in the dirty data to get started:
###Code
import pandas as pd
df = pd.read_csv('data/dirty_data.csv')
###Output
_____no_output_____
###Markdown
Finding problematic dataA good first step is to look at some rows:
###Code
df.head()
###Output
_____no_output_____
###Markdown
Looking at summary statistics can reveal strange or missing values:
###Code
df.describe()
###Output
/home/stefaniemolin/book_env/lib/python3.7/site-packages/numpy/lib/function_base.py:3968: RuntimeWarning: invalid value encountered in multiply
x2 = take(ap, indices_above, axis=axis) * weights_above
###Markdown
The `info()` method can pinpoint missing values and wrong data types:
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 765 entries, 0 to 764
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 date 765 non-null object
1 station 765 non-null object
2 PRCP 765 non-null float64
3 SNOW 577 non-null float64
4 SNWD 577 non-null float64
5 TMAX 765 non-null float64
6 TMIN 765 non-null float64
7 TOBS 398 non-null float64
8 WESF 11 non-null float64
9 inclement_weather 408 non-null object
dtypes: float64(7), object(3)
memory usage: 59.9+ KB
###Markdown
We can use the `isna()`/`isnull()` method of the series to find nulls:
###Code
contain_nulls = df[
df.SNOW.isna() | df.SNWD.isna() | df.TOBS.isna()
| df.WESF.isna() | df.inclement_weather.isna()
]
contain_nulls.shape[0]
contain_nulls.head(10)
###Output
_____no_output_____
###Markdown
Note that we can't check if we have `NaN` like this:
###Code
df[df.inclement_weather == 'NaN'].shape[0]
###Output
_____no_output_____
###Markdown
This is because it is actually `np.nan`. However, notice this also doesn't work:
###Code
import numpy as np
df[df.inclement_weather == np.nan].shape[0]
###Output
_____no_output_____
###Markdown
We have to use one of the methods discussed earlier for this to work:
###Code
df[df.inclement_weather.isna()].shape[0]
###Output
_____no_output_____
###Markdown
We can find `-inf`/`inf` by comparing to `-np.inf`/`np.inf`:
###Code
df[df.SNWD.isin([-np.inf, np.inf])].shape[0]
###Output
_____no_output_____
###Markdown
Rather than do this for each column, we can write a function that will use a [dictionary comprehension](https://www.python.org/dev/peps/pep-0274/) to check all the columns for us:
###Code
def get_inf_count(df):
"""Find the number of inf/-inf values per column in the dataframe"""
return {
col: df[df[col].isin([np.inf, -np.inf])].shape[0] for col in df.columns
}
get_inf_count(df)
###Output
_____no_output_____
###Markdown
Before we can decide how to handle the infinite values of snow depth, we should look at the summary statistics for snowfall, which forms a big part in determining the snow depth:
###Code
pd.DataFrame({
'np.inf Snow Depth': df[df.SNWD == np.inf].SNOW.describe(),
'-np.inf Snow Depth': df[df.SNWD == -np.inf].SNOW.describe()
}).T
###Output
_____no_output_____
###Markdown
Let's now look into the `date` and `station` columns. We saw the `?` for station earlier, so we know that was the other unique value. However, we see that some dates are present 8 times in the data and we only have 324 days meaning we are also missing days:
###Code
df.describe(include='object')
###Output
_____no_output_____
###Markdown
We can use the `duplicated()` method to find duplicate rows:
###Code
df[df.duplicated()].shape[0]
###Output
_____no_output_____
###Markdown
The default for `keep` is `'first'` meaning it won't show the first row that the duplicated data was seen in; we can pass in `False` to see it though:
###Code
df[df.duplicated(keep=False)].shape[0]
###Output
_____no_output_____
###Markdown
We can also specify the columns to use:
###Code
df[df.duplicated(['date', 'station'])].shape[0]
###Output
_____no_output_____
###Markdown
Let's look at a few duplicates. Just in the few values we see here, we know that the top 4 are actually in the data 6 times because by default we aren't seeing their first occurrence:
###Code
df[df.duplicated()].head()
###Output
_____no_output_____
###Markdown
Mitigating Issues Handling duplicated dataSince we know we have NY weather data and noticed we only had two entries for `station`, we may decide to drop the `station` column because we are only interested in the weather data. However, when dealing with duplicate data, we need to think of the ramifications of removing it. Notice we only have data for the `WESF` column when the station is `?`:
###Code
df[df.WESF.notna()].station.unique()
###Output
_____no_output_____
###Markdown
If we determine it won't impact our analysis, we can use `drop_duplicates()` to remove them:
###Code
# 1. make the date a datetime
df.date = pd.to_datetime(df.date)
# 2. save this information for later
station_qm_wesf = df[df.station == '?'].drop_duplicates('date').set_index('date').WESF
# 3. sort ? to the bottom
df.sort_values('station', ascending=False, inplace=True)
# 4. drop duplicates based on the date column keeping the first occurrence
# which will be the valid station if it has data
df_deduped = df.drop_duplicates('date')
# 5. remove the station column because we are done with it
df_deduped = df_deduped.drop(columns='station').set_index('date').sort_index()
# 6. take valid station's WESF and fall back on station ? if it is null
df_deduped = df_deduped.assign(
WESF=lambda x: x.WESF.combine_first(station_qm_wesf)
)
df_deduped.shape
###Output
_____no_output_____
###Markdown
Here we used the `combine_first()` method to coalesce the values to the first non-null entry; this means that if we had data from both stations, we would first take the value provided by the named station and if (and only if) that station was null would we take the value from the station named `?`. The following table contains some examples of how this would play out:| station GHCND:USC00280907 | station ? | result of `combine_first()` || :---: | :---: | :---: || 1 | 17 | 1 || 1 | `NaN` | 1 || `NaN` | 17 | 17 || `NaN` | `NaN` | `NaN` |Check out the 4th row—we have `WESF` in the correct spot thanks to the index:
###Code
df_deduped.head()
###Output
_____no_output_____
###Markdown
Dealing with nullsWe could drop nulls, replace them with some arbitrary value, or impute them using the surrounding data. Each of these options may have ramifications, so we must choose wisely.We can use `dropna()` to drop rows where any column has a null value. The default options leave us hardly any data:
###Code
df_deduped.dropna().shape
###Output
_____no_output_____
###Markdown
If we pass `how='all'`, we can choose to only drop rows where everything is null, but this removes nothing:
###Code
df_deduped.dropna(how='all').shape
###Output
_____no_output_____
###Markdown
We can use just a subset of columns to determine what to drop with the `subset` argument:
###Code
df_deduped.dropna(
how='all', subset=['inclement_weather', 'SNOW', 'SNWD']
).shape
###Output
_____no_output_____
###Markdown
This can also be performed along columns, and we can also require a certain number of null values before we drop the data:
###Code
df_deduped.dropna(axis='columns', thresh=df_deduped.shape[0] * .75).columns
###Output
_____no_output_____
###Markdown
We can choose to fill in the null values instead with `fillna()`:
###Code
df_deduped.loc[:,'WESF'].fillna(0, inplace=True)
df_deduped.head()
###Output
_____no_output_____
###Markdown
At this point we have done everything we can without distorting the data. We know that we are missing dates, but if we reindex, we don't know how to fill in the `NaN` data. With the weather data, we can't assume because it snowed one day that it will snow the next or that the temperature will be the same. For this reason, note that the next few examples are for illustrative purposes only—just because we can do something doesn't mean we should.That being said, let's try to address some of the remaining issues with the temperature data. We know that when `TMAX` is the temperature of the Sun, it must be because there was no measured value, so let's replace it with `NaN`. We will also do so for `TMIN`, which currently uses -40°C for its placeholder when we know that the coldest temperature ever recorded in NYC was -15°F (-26.1°C) on February 9, 1934:
###Code
df_deduped = df_deduped.assign(
TMAX=lambda x: x.TMAX.replace(5505, np.nan),
TMIN=lambda x: x.TMIN.replace(-40, np.nan),
)
###Output
_____no_output_____
###Markdown
We will also make an assumption that the temperature won't change drastically day-to-day. Note that this is actually a big assumption, but it will allow us to understand how `fillna()` works when we provide a strategy through the `method` parameter. The `fillna()` method gives us 2 options for the `method` parameter:- `'ffill'` to forward-fill- `'bfill'` to back-fill*Note that `'nearest'` is missing because we are not reindexing.*Here, we will use `'ffill'` to show how this works:
###Code
df_deduped.assign(
TMAX=lambda x: x.TMAX.fillna(method='ffill'),
TMIN=lambda x: x.TMIN.fillna(method='ffill')
).head()
###Output
_____no_output_____
###Markdown
We can use `np.nan_to_num()` to turn `np.nan` into 0 and `-np.inf`/`np.inf` into large negative or positive finite numbers:
###Code
df_deduped.assign(
SNWD=lambda x: np.nan_to_num(x.SNWD)
).head()
###Output
_____no_output_____
###Markdown
Depending on the data we are working with, we can use the `clip()` method as an alternative to `np.nan_to_num()`. The `clip()` method makes it possible to cap values at a specific minimum and/or maximum threshold. Since `SNWD` can't be negative, let's use `clip()` to enforce a lower bound of zero. To show how the upper bound works, let's use the value of `SNOW`:
###Code
df_deduped.assign(
SNWD=lambda x: x.SNWD.clip(0, x.SNOW)
).head()
###Output
_____no_output_____
###Markdown
We can couple `fillna()` with other types of calculations. Here we replace missing values of `TMAX` with the median of all `TMAX` values, `TMIN` with the median of all `TMIN` values, and `TOBS` with the average of the `TMAX` and `TMIN` values. Since we place `TOBS` last, we have access to the imputed values for `TMIN` and `TMAX` in the calculation:
###Code
df_deduped.assign(
TMAX=lambda x: x.TMAX.fillna(x.TMAX.median()),
TMIN=lambda x: x.TMIN.fillna(x.TMIN.median()),
# average of TMAX and TMIN
TOBS=lambda x: x.TOBS.fillna((x.TMAX + x.TMIN) / 2)
).head()
###Output
_____no_output_____
###Markdown
We can also use `apply()` to run the same calculation across columns. For example, let's fill each column's missing values with that column's rolling 7-day median, setting the number of periods required for the calculation to 0 so that we don't introduce additional `NaN` values. Rolling calculations will be covered in chapter 4, so this is a preview:
###Code
df_deduped.apply(
# rolling calculations will be covered in chapter 4, this is a rolling 7-day median
# we set min_periods (# of periods required for calculation) to 0 so we always get a result
lambda x: x.fillna(x.rolling(7, min_periods=0).median())
).head(10)
###Output
_____no_output_____
###Markdown
The last strategy we could try is interpolation with the `interpolate()` method. We specify the `method` parameter with the interpolation strategy to use. There are many options, but we will stick with the default of `'linear'`, which will treat values as evenly spaced and place missing values in the middle of existing ones. We have some missing data, so we will reindex first. Look at January 9th, which we didn't have before—the values for `TMAX`, `TMIN`, and `TOBS` are the average of values the day prior (January 8th) and the day after (January 10th):
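(As a quick aside before applying it to the real data, here is a minimal sketch on a throwaway `Series` of what the default `'linear'` strategy does: a single missing value between two observations becomes their midpoint.)

```python
import numpy as np
import pandas as pd

pd.Series([1.0, np.nan, 3.0]).interpolate()  # -> 1.0, 2.0, 3.0
```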
###Code
df_deduped\
.reindex(pd.date_range('2018-01-01', '2018-12-31', freq='D'))\
.apply(lambda x: x.interpolate())\
.head(10)
###Output
_____no_output_____
|
examples/Cross.ipynb | ###Markdown
Imports
###Code
import sys
sys.path.append('../ClusterPlot')
sys.path.append('./utils')
import pandas as pd
import numpy as np
import seaborn as sns
# %matplotlib notebook  # uncomment to enable zooming into the 3D plot for the paper figure
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
from sklearn.metrics import pairwise_distances
from sklearn.cluster import Birch
from DataSetFactory import DataSetFactory
from ClusterPlot import ClusterPlot
RANDOM_STATE = 42
ds = DataSetFactory.get_dataset('cross7', random_state=RANDOM_STATE, sample=None, is_subset=False)
###Output
_____no_output_____
###Markdown
3D Plot
###Code
# %matplotlib notebook
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111, projection='3d')
for l in ds.df[ds.label_col].unique():
xs = ds.df[ds.df[ds.label_col] == l]['X']
ys = ds.df[ds.df[ds.label_col] == l]['Y']
zs = ds.df[ds.df[ds.label_col] == l]['Z']
ax.scatter(xs, ys, zs, s=50, alpha=0.6, edgecolors='w')
# remove ticks
ax = plt.gca()
ax.xaxis.set_ticklabels([])
ax.yaxis.set_ticklabels([])
ax.zaxis.set_ticklabels([])
for line in ax.xaxis.get_ticklines():
line.set_visible(False)
for line in ax.yaxis.get_ticklines():
line.set_visible(False)
for line in ax.zaxis.get_ticklines():
line.set_visible(False)
ax.view_init(elev=50, azim=40)
plt.show()
X = ds.df[ds.feature_cols].values
y = ds.df[ds.label_col].values
###Output
_____no_output_____
###Markdown
Cluster Plots
###Code
blobs_cp = ClusterPlot(class_to_label=ds.class_to_label,
show_fig=True,
save_fig=False,
random_state=RANDOM_STATE,
birch_threshold=0.42,
alpha=0.2,
douglas_peucker_tolerance=0.5,
smooth_iter=3,
learning_rate=0.1,
show_label_level_plots=False,
show_loss_plot=False,
n_iter=1,
batch_size=0)
print(blobs_cp)
low_dim_blobs_cp = blobs_cp.fit_transform(X, y)
###Output
2020-05-16 13:45:25,075 - ClusterPlot-4763 - INFO - finding intra class anchors using birch
2020-05-16 13:45:25,075 - ClusterPlot-4763 - INFO - finding intra class anchors using birch
2020-05-16 13:45:28,258 - ClusterPlot-4763 - INFO - UnSupervised Dim Reduction
2020-05-16 13:45:28,258 - ClusterPlot-4763 - INFO - UnSupervised Dim Reduction
2020-05-16 13:45:28,262 - ClusterPlot-4763 - INFO - Dim Reduction only anchors
2020-05-16 13:45:28,262 - ClusterPlot-4763 - INFO - Dim Reduction only anchors
2020-05-16 13:45:29,219 - ClusterPlot-4763 - INFO - Dim Reduction only anchors - generate random points in low dim per anchor
2020-05-16 13:45:29,219 - ClusterPlot-4763 - INFO - Dim Reduction only anchors - generate random points in low dim per anchor
100%|████████████████████████████████████████████████████████████████████████████| 14384/14384 [04:01<00:00, 59.58it/s]
2020-05-16 13:49:34,216 - ClusterPlot-4763 - INFO - Starting iteration 1 loss = 0.8421052631578947
2020-05-16 13:49:34,216 - ClusterPlot-4763 - INFO - Starting iteration 1 loss = 0.8421052631578947
|
seq2seq/NewsHeadlines_SummerSchool.ipynb | ###Markdown
Generate News Headlines using RNNWe will use the Kaggle Indian news headlines dataset (https://www.kaggle.com/therohk/india-headlines-news-dataset). A cleaned dataset of 100,000 headlines is produced from this; we want to generate new headlines.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
if use_cuda:
print('Yes! GPU!')
###Output
Yes! GPU!
###Markdown
Read Data
###Code
with open('news-headlines-trimmed.txt') as f:
data = f.read()
data = data.split('\n')[:1000] # fast training
print(len(data))
###Output
1000
###Markdown
Start of Sentence (SOS) is added to the beginning of every headline. End of Sentence (EOS) indicates when to stop generating characters.
###Code
SOS = 0
EOS = 127
###Output
_____no_output_____
###Markdown
Encode sentence as sequence of one-hot vectors
###Code
# One hot encoding
def one_hotter(c):
vec = torch.zeros(128)
vec[ord(c)] = 1.0
return vec
def encode_sentence(s):
v = torch.zeros(1, len(s)+1, 128)
# append SOS
vec = torch.zeros(128)
vec[SOS] = 1.0
v[0, 0, :] = vec
for i in range(len(s)):
v[0, i+1, :] = one_hotter(s[i])
# append EOS
# vec = torch.zeros(128)
# vec[EOS] = 1.0
# v[0, len(s)+1, :] = vec
return v.to(device)
e = encode_sentence('ab')
###Output
_____no_output_____
###Markdown
Model
###Code
class RnnNet(nn.Module):
def __init__(self):
super(RnnNet, self).__init__()
self.input_dim = 128 # one-hot encoding of ascii
# self.seq_len = 28
self.hidden_dim = 100
self.batch_size = 1 # sorry! variable length sentences.
# We can pad and make batches though. But let's stick to simplicity
self.num_class = self.input_dim
self.rnn = nn.GRU(self.input_dim, self.hidden_dim, batch_first=True)
self.fc = nn.Linear(self.hidden_dim, self.num_class)
def forward(self, x, h0):
# h0 = torch.randn(1, self.batch_size, self.hidden_dim).to(device)
# run the GRU along the input sequence
x, h = self.rnn(x, h0) # dim: batch_size x seq_len x hidden_dim
# make the Variable contiguous in memory (a PyTorch artefact)
x = x.contiguous()
# reshape the Variable so that each row contains one token
x = x.view(-1, x.shape[2]) # dim: batch_size*seq_len x hidden_dim (note batch_size=1)
# apply the fully connected layer and obtain the output (before softmax) for each token
x = self.fc(x) # dim: batch_size*seq_len x num_class
# apply log softmax on each token's output (this is recommended over applying softmax
# since it is numerically more stable)
return F.log_softmax(x, dim=1), h # dim: batch_size*seq_len x num_class & dim(h): 1 x 1(batch) x hidden_dim
def genh(self):
return torch.randn(1, self.batch_size, self.hidden_dim).to(device)
model = RnnNet().to(device)
optimizer = optim.Adam(model.parameters(), lr=0.001)
len(data)
###Output
_____no_output_____
###Markdown
Train
###Code
from tqdm import trange
import logging
# logging.basicConfig(format='%(asctime)s [%(levelname)-8s] %(message)s')
# logger = logging.getLogger()
# logger.setLevel(logging.INFO)
###Output
_____no_output_____
###Markdown
Generate Heading
###Code
def gen_headlines(num=5):
model.eval()
for i in range(num):
gen= ''
h = model.genh()
i = 0
prev = torch.zeros(1, 1, 128).to(device)
prev[0,0,0] = 1.0
while(True):
output, h = model(prev, h)
s = torch.argmax(output, dim=1)
# Stop once EOS is generated
if s == EOS:
break
# update generated sentence
gen += chr(s)
prev = torch.zeros(1, 1, 128).to(device)
prev[0,0,s] = 1.0
i += 1
if i > 200:
break
print(gen)
###Output
_____no_output_____
###Markdown
Start Training
###Code
epochs = 10
for epoch in range(epochs):
model.train()
# Use tqdm for progress bar
t = trange(len(data))
print('\nepoch {}/{}'.format(epoch+1, epochs))
for i in t:
# Get the representation of sentence
d = data[i]
d = d.strip()
if len(d) == 0: # skip empty sentences
continue
enc_sen = encode_sentence(d)
h0 = model.genh()
output, _ = model(enc_sen, h0) # dim: seq_len x num_class
target = [ord(c) for c in d] + [EOS]
target = torch.LongTensor(target).to(device)
# zero param grads
optimizer.zero_grad()
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
if i%100 == 0:
t.set_postfix(loss='{:05.3f}'.format(loss.item()))
# print samples from the language model
gen_headlines()
###Output
0%| | 37/10000 [00:00<00:27, 367.59it/s, loss=4.855]
###Markdown
Todo1. While generating, sample instead of argmax for next character2. Use multiple layers
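As a hedged sketch of the first item (not part of the original notebook): the model already returns log-probabilities from `F.log_softmax`, so instead of `torch.argmax` we can exponentiate them and draw the next character with `torch.multinomial`, optionally with a temperature. A helper along these lines could replace the argmax line inside `gen_headlines`; the name `sample_next_char` and the `temperature` parameter are our own additions:

```python
import torch

def sample_next_char(log_probs, temperature=1.0):
    """Sample a character index from the model's log-softmax output of shape (1, 128)."""
    # temperature < 1 sharpens the distribution, > 1 flattens it
    probs = torch.exp(log_probs / temperature)
    probs = probs / probs.sum()
    return torch.multinomial(probs, num_samples=1).item()
```

For the second item, `nn.GRU(..., num_layers=2)` is the usual knob; `genh` would then need to return a hidden state of shape `(num_layers, batch_size, hidden_dim)`.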
###Code
gen_headlines()
###Output
Step by Step
Hawkings' day out
Dill sects police reality of thrents puckes
'sertt to a plast to a with polority
Hawkings' day out
|
tf2/colabs/imagenet_results.ipynb | ###Markdown
Note: The evals here have been run on GPU so they may not exactly match the results reported in the paper which were run on TPUs, however the difference in accuracy should not be more than 0.1%. Setup
###Code
import tensorflow as tf
import tensorflow_datasets as tfds
CROP_PROPORTION = 0.875 # Standard for ImageNet.
HEIGHT = 224
WIDTH = 224
def _compute_crop_shape(
image_height, image_width, aspect_ratio, crop_proportion):
"""Compute aspect ratio-preserving shape for central crop.
The resulting shape retains `crop_proportion` along one side and a proportion
less than or equal to `crop_proportion` along the other side.
Args:
image_height: Height of image to be cropped.
image_width: Width of image to be cropped.
aspect_ratio: Desired aspect ratio (width / height) of output.
crop_proportion: Proportion of image to retain along the less-cropped side.
Returns:
crop_height: Height of image after cropping.
crop_width: Width of image after cropping.
"""
image_width_float = tf.cast(image_width, tf.float32)
image_height_float = tf.cast(image_height, tf.float32)
def _requested_aspect_ratio_wider_than_image():
crop_height = tf.cast(tf.math.rint(
crop_proportion / aspect_ratio * image_width_float), tf.int32)
crop_width = tf.cast(tf.math.rint(
crop_proportion * image_width_float), tf.int32)
return crop_height, crop_width
def _image_wider_than_requested_aspect_ratio():
crop_height = tf.cast(
tf.math.rint(crop_proportion * image_height_float), tf.int32)
crop_width = tf.cast(tf.math.rint(
crop_proportion * aspect_ratio *
image_height_float), tf.int32)
return crop_height, crop_width
return tf.cond(
aspect_ratio > image_width_float / image_height_float,
_requested_aspect_ratio_wider_than_image,
_image_wider_than_requested_aspect_ratio)
def center_crop(image, height, width, crop_proportion):
"""Crops to center of image and rescales to desired size.
Args:
image: Image Tensor to crop.
height: Height of image to be cropped.
width: Width of image to be cropped.
crop_proportion: Proportion of image to retain along the less-cropped side.
Returns:
A `height` x `width` x channels Tensor holding a central crop of `image`.
"""
shape = tf.shape(image)
image_height = shape[0]
image_width = shape[1]
crop_height, crop_width = _compute_crop_shape(
image_height, image_width, height / width, crop_proportion)
offset_height = ((image_height - crop_height) + 1) // 2
offset_width = ((image_width - crop_width) + 1) // 2
image = tf.image.crop_to_bounding_box(
image, offset_height, offset_width, crop_height, crop_width)
image = tf.image.resize(image, [height, width],
method=tf.image.ResizeMethod.BICUBIC)
return image
def preprocess_for_eval(image, height, width):
"""Preprocesses the given image for evaluation.
Args:
image: `Tensor` representing an image of arbitrary size.
height: Height of output image.
width: Width of output image.
Returns:
A preprocessed image `Tensor`.
"""
image = center_crop(image, height, width, crop_proportion=CROP_PROPORTION)
image = tf.reshape(image, [height, width, 3])
image = tf.clip_by_value(image, 0., 1.)
return image
def preprocess_image(features):
"""Preprocesses the given image.
Args:
image: `Tensor` representing an image of arbitrary size.
Returns:
A preprocessed image `Tensor` of range [0, 1].
"""
image = features["image"]
image = tf.image.convert_image_dtype(image, dtype=tf.float32)
image = preprocess_for_eval(image, HEIGHT, WIDTH)
features["image"] = image
return features
###Output
_____no_output_____
###Markdown
Load dataset
###Code
BATCH_SIZE = 50
ds = tfds.load(name='imagenet2012', split='validation').map(preprocess_image).batch(BATCH_SIZE).prefetch(1)
def eval(model_path, log=False):
"""Evaluate the SavedModel at `model_path` on the ImageNet validation set and return its top-1 accuracy."""
if log:
print("Loading model from %s" % model_path)
model = tf.saved_model.load(model_path)
if log:
print("Loaded model!")
top_1_accuracy = tf.keras.metrics.Accuracy('top_1_accuracy')
for i, features in enumerate(ds):
logits = model(features["image"], trainable=False)["logits_sup"]
top_1_accuracy.update_state(features["label"], tf.argmax(logits, axis=-1))
if log and (i + 1) % 50 == 0:
print("Finished %d examples" % ((i + 1) * BATCH_SIZE))
return top_1_accuracy.result().numpy().astype(float)
###Output
_____no_output_____
###Markdown
SimCLR v2 Finetuned models
###Code
path_pat = "gs://simclr-checkpoints-tf2/simclrv2/finetuned_{pct}pct/r{depth}_{width_multiplier}x_sk{sk}/saved_model/"
results = {}
for resnet_depth in (50, 101, 152):
for width_multiplier in (1, 2):
for sk in (0, 1):
for pct in (1, 10, 100):
path = path_pat.format(pct=pct, depth=resnet_depth, width_multiplier=width_multiplier, sk=sk)
results[path] = eval(path)
print(path)
print("Top-1: %.1f" % (results[path] * 100))
resnet_depth = 152
width_multiplier = 3
sk = 1
for pct in (1, 10, 100):
path = path_pat.format(pct=pct, depth=resnet_depth, width_multiplier=width_multiplier, sk=sk)
results[path] = eval(path)
print(path)
print("Top-1: %.1f" % (results[path] * 100))
###Output
gs://simclr-checkpoints-tf2/simclrv2/finetuned_1pct/r50_1x_sk0/saved_model/
Top-1: 58.0
gs://simclr-checkpoints-tf2/simclrv2/finetuned_10pct/r50_1x_sk0/saved_model/
Top-1: 68.4
gs://simclr-checkpoints-tf2/simclrv2/finetuned_100pct/r50_1x_sk0/saved_model/
Top-1: 76.3
gs://simclr-checkpoints-tf2/simclrv2/finetuned_1pct/r50_1x_sk1/saved_model/
Top-1: 64.5
gs://simclr-checkpoints-tf2/simclrv2/finetuned_10pct/r50_1x_sk1/saved_model/
Top-1: 72.0
gs://simclr-checkpoints-tf2/simclrv2/finetuned_100pct/r50_1x_sk1/saved_model/
Top-1: 78.6
gs://simclr-checkpoints-tf2/simclrv2/finetuned_1pct/r50_2x_sk0/saved_model/
Top-1: 66.2
gs://simclr-checkpoints-tf2/simclrv2/finetuned_10pct/r50_2x_sk0/saved_model/
Top-1: 73.9
gs://simclr-checkpoints-tf2/simclrv2/finetuned_100pct/r50_2x_sk0/saved_model/
Top-1: 79.1
gs://simclr-checkpoints-tf2/simclrv2/finetuned_1pct/r50_2x_sk1/saved_model/
Top-1: 70.7
gs://simclr-checkpoints-tf2/simclrv2/finetuned_10pct/r50_2x_sk1/saved_model/
Top-1: 77.0
gs://simclr-checkpoints-tf2/simclrv2/finetuned_100pct/r50_2x_sk1/saved_model/
Top-1: 81.3
gs://simclr-checkpoints-tf2/simclrv2/finetuned_1pct/r101_1x_sk0/saved_model/
Top-1: 62.1
gs://simclr-checkpoints-tf2/simclrv2/finetuned_10pct/r101_1x_sk0/saved_model/
Top-1: 71.5
gs://simclr-checkpoints-tf2/simclrv2/finetuned_100pct/r101_1x_sk0/saved_model/
Top-1: 78.2
gs://simclr-checkpoints-tf2/simclrv2/finetuned_1pct/r101_1x_sk1/saved_model/
Top-1: 68.3
gs://simclr-checkpoints-tf2/simclrv2/finetuned_10pct/r101_1x_sk1/saved_model/
Top-1: 75.1
gs://simclr-checkpoints-tf2/simclrv2/finetuned_100pct/r101_1x_sk1/saved_model/
Top-1: 80.7
gs://simclr-checkpoints-tf2/simclrv2/finetuned_1pct/r101_2x_sk0/saved_model/
Top-1: 69.1
gs://simclr-checkpoints-tf2/simclrv2/finetuned_10pct/r101_2x_sk0/saved_model/
Top-1: 75.9
gs://simclr-checkpoints-tf2/simclrv2/finetuned_100pct/r101_2x_sk0/saved_model/
Top-1: 80.8
gs://simclr-checkpoints-tf2/simclrv2/finetuned_1pct/r101_2x_sk1/saved_model/
Top-1: 73.3
gs://simclr-checkpoints-tf2/simclrv2/finetuned_10pct/r101_2x_sk1/saved_model/
Top-1: 78.8
gs://simclr-checkpoints-tf2/simclrv2/finetuned_100pct/r101_2x_sk1/saved_model/
Top-1: 82.4
gs://simclr-checkpoints-tf2/simclrv2/finetuned_1pct/r152_1x_sk0/saved_model/
Top-1: 64.1
gs://simclr-checkpoints-tf2/simclrv2/finetuned_10pct/r152_1x_sk0/saved_model/
Top-1: 73.0
gs://simclr-checkpoints-tf2/simclrv2/finetuned_100pct/r152_1x_sk0/saved_model/
Top-1: 79.3
gs://simclr-checkpoints-tf2/simclrv2/finetuned_1pct/r152_1x_sk1/saved_model/
Top-1: 70.0
gs://simclr-checkpoints-tf2/simclrv2/finetuned_10pct/r152_1x_sk1/saved_model/
Top-1: 76.4
gs://simclr-checkpoints-tf2/simclrv2/finetuned_100pct/r152_1x_sk1/saved_model/
Top-1: 81.3
gs://simclr-checkpoints-tf2/simclrv2/finetuned_1pct/r152_2x_sk0/saved_model/
Top-1: 70.1
gs://simclr-checkpoints-tf2/simclrv2/finetuned_10pct/r152_2x_sk0/saved_model/
Top-1: 76.6
gs://simclr-checkpoints-tf2/simclrv2/finetuned_100pct/r152_2x_sk0/saved_model/
Top-1: 81.1
gs://simclr-checkpoints-tf2/simclrv2/finetuned_1pct/r152_2x_sk1/saved_model/
Top-1: 74.2
gs://simclr-checkpoints-tf2/simclrv2/finetuned_10pct/r152_2x_sk1/saved_model/
Top-1: 79.4
gs://simclr-checkpoints-tf2/simclrv2/finetuned_100pct/r152_2x_sk1/saved_model/
Top-1: 82.9
gs://simclr-checkpoints-tf2/simclrv2/finetuned_1pct/r152_3x_sk1/saved_model/
Top-1: 74.9
gs://simclr-checkpoints-tf2/simclrv2/finetuned_10pct/r152_3x_sk1/saved_model/
Top-1: 80.1
gs://simclr-checkpoints-tf2/simclrv2/finetuned_100pct/r152_3x_sk1/saved_model/
Top-1: 83.1
###Markdown
Supervised
###Code
path_pat = "gs://simclr-checkpoints-tf2/simclrv2/supervised/r{depth}_{width_multiplier}x_sk{sk}/saved_model/"
results = {}
for resnet_depth in (50, 101, 152):
for width_multiplier in (1, 2):
for sk in (0, 1):
path = path_pat.format(depth=resnet_depth, width_multiplier=width_multiplier, sk=sk)
results[path] = eval(path)
print(path)
print("Top-1: %.1f" % (results[path] * 100))
resnet_depth = 152
width_multiplier = 3
sk = 1
path = path_pat.format(depth=resnet_depth, width_multiplier=width_multiplier, sk=sk)
results[path] = eval(path)
print(path)
print("Top-1: %.1f" % (results[path] * 100))
###Output
gs://simclr-checkpoints-tf2/simclrv2/supervised/r50_1x_sk0/saved_model/
Top-1: 76.6
gs://simclr-checkpoints-tf2/simclrv2/supervised/r50_1x_sk1/saved_model/
Top-1: 78.5
gs://simclr-checkpoints-tf2/simclrv2/supervised/r50_2x_sk0/saved_model/
Top-1: 77.8
gs://simclr-checkpoints-tf2/simclrv2/supervised/r50_2x_sk1/saved_model/
Top-1: 79.3
gs://simclr-checkpoints-tf2/simclrv2/supervised/r101_1x_sk0/saved_model/
Top-1: 78.0
gs://simclr-checkpoints-tf2/simclrv2/supervised/r101_1x_sk1/saved_model/
Top-1: 79.6
gs://simclr-checkpoints-tf2/simclrv2/supervised/r101_2x_sk0/saved_model/
Top-1: 78.8
gs://simclr-checkpoints-tf2/simclrv2/supervised/r101_2x_sk1/saved_model/
Top-1: 80.1
gs://simclr-checkpoints-tf2/simclrv2/supervised/r152_1x_sk0/saved_model/
Top-1: 78.2
gs://simclr-checkpoints-tf2/simclrv2/supervised/r152_1x_sk1/saved_model/
Top-1: 80.0
gs://simclr-checkpoints-tf2/simclrv2/supervised/r152_2x_sk0/saved_model/
Top-1: 79.1
gs://simclr-checkpoints-tf2/simclrv2/supervised/r152_2x_sk1/saved_model/
Top-1: 80.4
gs://simclr-checkpoints-tf2/simclrv2/supervised/r152_3x_sk1/saved_model/
Top-1: 80.5
###Markdown
Pretrained with linear eval
###Code
path_pat = "gs://simclr-checkpoints-tf2/simclrv2/pretrained/r{depth}_{width_multiplier}x_sk{sk}/saved_model/"
results = {}
for resnet_depth in (50, 101, 152):
for width_multiplier in (1, 2):
for sk in (0, 1):
path = path_pat.format(depth=resnet_depth, width_multiplier=width_multiplier, sk=sk)
results[path] = eval(path)
print(path)
print("Top-1: %.1f" % (results[path] * 100))
resnet_depth = 152
width_multiplier = 3
sk = 1
path = path_pat.format(depth=resnet_depth, width_multiplier=width_multiplier, sk=sk)
results[path] = eval(path)
print(path)
print("Top-1: %.1f" % (results[path] * 100))
###Output
gs://simclr-checkpoints-tf2/simclrv2/pretrained/r50_1x_sk0/saved_model/
Top-1: 71.7
gs://simclr-checkpoints-tf2/simclrv2/pretrained/r50_1x_sk1/saved_model/
Top-1: 74.6
gs://simclr-checkpoints-tf2/simclrv2/pretrained/r50_2x_sk0/saved_model/
Top-1: 75.4
gs://simclr-checkpoints-tf2/simclrv2/pretrained/r50_2x_sk1/saved_model/
Top-1: 77.8
gs://simclr-checkpoints-tf2/simclrv2/pretrained/r101_1x_sk0/saved_model/
Top-1: 73.7
gs://simclr-checkpoints-tf2/simclrv2/pretrained/r101_1x_sk1/saved_model/
Top-1: 76.3
gs://simclr-checkpoints-tf2/simclrv2/pretrained/r101_2x_sk0/saved_model/
Top-1: 77.0
gs://simclr-checkpoints-tf2/simclrv2/pretrained/r101_2x_sk1/saved_model/
Top-1: 79.1
gs://simclr-checkpoints-tf2/simclrv2/pretrained/r152_1x_sk0/saved_model/
Top-1: 74.6
gs://simclr-checkpoints-tf2/simclrv2/pretrained/r152_1x_sk1/saved_model/
Top-1: 77.3
gs://simclr-checkpoints-tf2/simclrv2/pretrained/r152_2x_sk0/saved_model/
Top-1: 77.4
gs://simclr-checkpoints-tf2/simclrv2/pretrained/r152_2x_sk1/saved_model/
Top-1: 79.3
gs://simclr-checkpoints-tf2/simclrv2/pretrained/r152_3x_sk1/saved_model/
Top-1: 79.9
###Markdown
SimCLR v1 Finetuned
###Code
path_pat = "gs://simclr-checkpoints-tf2/simclrv1/finetune_{pct}pct/{width_multiplier}x/saved_model/"
results = {}
resnet_depth = 50
for pct in (10, 100):
for width_multiplier in (1, 2, 4):
path = path_pat.format(pct=pct, width_multiplier=width_multiplier)
results[path] = eval(path)
print(path)
print("Top-1: %.1f" % (results[path] * 100))
###Output
gs://simclr-checkpoints-tf2/simclrv1/finetune_10pct/1x/saved_model/
Top-1: 65.8
gs://simclr-checkpoints-tf2/simclrv1/finetune_10pct/2x/saved_model/
Top-1: 71.6
gs://simclr-checkpoints-tf2/simclrv1/finetune_10pct/4x/saved_model/
Top-1: 74.5
gs://simclr-checkpoints-tf2/simclrv1/finetune_100pct/1x/saved_model/
Top-1: 75.6
gs://simclr-checkpoints-tf2/simclrv1/finetune_100pct/2x/saved_model/
Top-1: 79.2
gs://simclr-checkpoints-tf2/simclrv1/finetune_100pct/4x/saved_model/
Top-1: 80.8
###Markdown
Pretrained with linear eval
###Code
path_pat = "gs://simclr-checkpoints-tf2/simclrv1/pretrain/{width_multiplier}x/saved_model/"
results = {}
resnet_depth = 50
for width_multiplier in (1, 2, 4):
path = path_pat.format(width_multiplier=width_multiplier)
results[path] = eval(path)
print(path)
print("Top-1: %.1f" % (results[path] * 100))
###Output
gs://simclr-checkpoints-tf2/simclrv1/pretrain/1x/saved_model/
Top-1: 69.0
gs://simclr-checkpoints-tf2/simclrv1/pretrain/2x/saved_model/
Top-1: 74.2
gs://simclr-checkpoints-tf2/simclrv1/pretrain/4x/saved_model/
Top-1: 76.6
|
notebooks/MAST/Kepler/Kepler_DVT/kepler_dvt.ipynb | ###Markdown
Read and Plot A Kepler Data Validation Timeseries FileThis notebook tutorial demonstrates how to load and plot the contents of a Kepler data validation timeseries (dvt) file. We will plot the flux timeseries contained within the file.
###Code
%matplotlib inline
from astropy.io import fits
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
IntroductionKepler does a search of the postage-stamp data, taken at either short (60-second) or long (30-minute) cadence. For every signal it identifies with its Transit Planet Search (TPS) module, it creates something called a Threshold Crossing Event (TCE). TCEs are periodic signals that exceed a nominal signal-to-noise requirement. Some are consistent with transiting planets, others are eclipsing binaries, and others are more consistent with variable stars or noise in the data. The Data Validation (DV) module of the pipeline produces a set of products that can help validate the quality of a TCE. The DV products include a time series file of the flattened light curve that was searched, where the light curves from different Quarters are stitched together to mitigate offsets between them, and relevant statistics for each signal (dvt.fits). DV reports that consist of a few diagnostic plots and relevant statistics (dvs.pdf for individual signals, dvr.pdf for all signals found in the TIC object), and an xml file (dvr.xml) that contains the results of the planet transit fit, are also created and provided. We will be exploring a dvt.fits file in this tutorial.This tutorial will refer to a couple Kepler-related terms that we define here.* Quarter = Kepler rotated 90 degrees once every 3 months. Each rotation resulted in a separate collection sequence known as a "Quarter". While the field-of-view did not change, other details and parameters did, including where some targets fell on the detectors and even whether they got data at all depending on where they were located. Most Quarters contain a single long cadence file, and a few short cadence sequences, but there are some exceptions.* KIC ID = "Kepler Input Catalog" identifier, used to refer to different targets in the Kepler field-of-view. With some exceptions, each objects in the Kepler field-of-view has a single, unique KIC ID. Files are often named based on the KIC ID the data are for.* HDU = Header Data Unit. A FITS file is made up of HDUs that contain data and metadata relating to the file. The first HDU is called the primary HDU, and anything that follows is considered an "extension", e.g., "the first FITS extension", "the second FITS extension", etc.* BJD = Barycentric Julian Date, the Julian Date that has been corrected for differences in the Earth's position with respect to the Solar System center of mass.* BKJD = Kepler Barycentric Julian Date, the timestamp measured in BJD, but offset by 2454833.0. I.e., BKJD = BJD - 2454833.0* Cadence = The interval between flux measurements, nominally ~1 minute for short cadence and ~30 minutes for long cadence.* TCE = Threshold Crossing Event, periodic signals found by the Kepler pipeline that exceed a nominal signal-to-noise ratio. Obtaining The DVT Series FileWe will read the data validation file of KIC 11446443 (also known as TrES-2, which has at least one planet, TrES-2 b) using the MAST URL location. So that we can get started with understanding the file contents without reviewing how to automatically search for and retrieve Kepler files, we won't show how to search and retrieve Kepler DVT files in this tutorial.
###Code
# For the purposes of this tutorial, we just know the MAST URL location of the file we want to examine.
dvt_file = "https://archive.stsci.edu/missions/kepler/dv_files/0114/011446443/kplr011446443-20160128150956_dvt.fits"
###Output
_____no_output_____
###Markdown
Understanding The DVT File StructureThe DVT FITS file consists of a primary HDU with metadata stored in the header, and one FITS extension HDU per TCE found in the light curve of the specified KIC ID. These extensions contain the detrended flux time series phased to the orbital period of the signal, stored as a binary FITS table. The last extension HDU always contains some additional statistics about the search, also stored in a binary FITS table. Let's examine the structure of the FITS file using the astropy.fits `info` function, which shows the FITS file format in more detail.
###Code
fits.info(dvt_file)
###Output
_____no_output_____
###Markdown
In this case, KIC 011446443 has a single TCE identified, and the "statistics" extension in the last HDU, as expected. Let's examine the TCE extension in more detail using the astropy.fits `getdata` function and see what columns are available.
###Code
fits.getdata(dvt_file, ext=1).columns
###Output
_____no_output_____
###Markdown
In addition to the timestamps in BKJD format, there is a column containing the times phased to the orbital period of the signal, and there are several columns of fluxes. LC_INIT is the "unwhitened" fluxes, LC_WHITE are the "whitened" fluxes. The MODEL_INIT and MODEL_WHITE fluxes are the corresponding model fluxes based on the best fit to the signal. Plotting The Timeseries Fluxes.Let's open the FITS file and extract some metadata from the headers, and also store some of the columns from the TCE signal for use later when we plot the results.
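As an aside (our interpretation, with placeholder names, not text from the archive documentation): the `PHASE` column is essentially the timestamps folded on the fitted orbital period around the transit epoch. A rough do-it-yourself fold, where `times_bkjd`, `period`, and `epoch` stand in for the values read below, might look like:

```python
import numpy as np

def fold_times(times_bkjd, period, epoch):
    """Fold timestamps on an orbital period, returning offsets in roughly [-period/2, period/2)."""
    phases = (np.asarray(times_bkjd) - epoch) % period
    phases[phases >= period / 2] -= period
    return phases
```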
###Code
with fits.open(dvt_file, mode="readonly") as hdulist:
# Extract stellar parameters from the primary header. We'll get the effective temperature, surface gravity,
# and Kepler magnitude.
star_teff = hdulist[0].header['TEFF']
star_logg = hdulist[0].header['LOGG']
star_tmag = hdulist[0].header['KEPMAG']
# Extract some of the fit parameters for the first TCE. These are stored in the FITS header of the first
# extension.
period = hdulist[1].header['TPERIOD']
duration = hdulist[1].header['TDUR']
epoch = hdulist[1].header['TEPOCH']
depth = hdulist[1].header['TDEPTH']
# Extract some of the columns of interest for the TCE signal. These are stored in the binary FITS table
# in the first extension. We'll extract the timestamps in BKJD, phase, initial fluxes, and corresponding
# model fluxes.
times = hdulist[1].data['TIME']
phases = hdulist[1].data['PHASE']
fluxes_init = hdulist[1].data['LC_INIT']
model_fluxes_init = hdulist[1].data['MODEL_INIT']
###Output
_____no_output_____
###Markdown
Let's make a plot of the detrended fluxes and model fluxes vs. orbital phase.
###Code
# First sort the phase and flux arrays by phase so we can draw the connecting lines between points.
sort_indexes = np.argsort(phases)
# Start figure and axis.
fig, ax = plt.subplots(figsize=(12,4))
# Plot the detrended fluxes as black circles. We will plot them in sorted order.
ax.plot(phases[sort_indexes], fluxes_init[sort_indexes], 'ko',
markersize=2)
# Plot the model fluxes as a red line. We will plot them in sorted order so the line connects between points cleanly.
ax.plot(phases[sort_indexes], model_fluxes_init[sort_indexes], '-r')
# Let's label the axes and define a title for the figure.
fig.suptitle('KIC 11446443 - Folded Light Curve And Transit Model.')
ax.set_ylabel("Flux (relative)")
ax.set_xlabel("Orbital Phase")
# Let's add some text in the top-right containing some of the fit parameters.
plt.text(0.3, 0.005, "Period = {0:10.6f} days".format(period))
plt.text(0.3, 0.003, "Duration = {0:10.6f} hours".format(duration))
plt.text(0.3, 0.001, "Depth = {0:10.6f} ppm".format(depth))
plt.text(0.95, 0.005, "Star Teff = {0:10.6f} K".format(star_teff))
plt.text(0.95, 0.003, "Star log(g) = {0:10.6f}".format(star_logg))
plt.show()
###Output
_____no_output_____
###Markdown
Examining The Statistics ExtensionThe statistics extension HDU contains the Single Event Statistics ("SES") correlation time series and the SES normalization time series for each of the pipeline's search durations. For this target, since there is only one TCE, the statistics extension is in HDU extension 2. For more information, see [Tenebaum et al. 2012, ApJS, 199, 24](http://adsabs.harvard.edu/abs/2012ApJS..199...24T) and [Twicken et al. 2018, PASP, 130, 6](http://adsabs.harvard.edu/abs/2018PASP..130f4502T) for a description of the DV statistics. These statistics are used to calculate the Combined Differential Photometric Precision ("CDPP") time series ([Gilliland et al. 2011, ApJS, 197, 6](http://adsabs.harvard.edu/abs/2011ApJS..197....6G)).
###Code
fits.getdata(dvt_file, ext=2).columns
###Output
_____no_output_____ |
Notebooks/04 K-Means Clustering.ipynb | ###Markdown
Importing Libraries
###Code
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, split, explode,substring, length, udf
from pyspark.sql.types import *
from pyspark.sql import Row
from itertools import cycle
from pyspark.ml.regression import LinearRegression
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from pyspark.sql import functions as F
from pyspark.sql import types as T
###Output
_____no_output_____
###Markdown
Creating Spark session
###Code
spark = SparkSession \
.builder \
.appName("Python Spark SQL basic example") \
.config("spark.some.config.option", "some-value") \
.getOrCreate()
###Output
_____no_output_____
###Markdown
Loading data from the previous module
###Code
SQLQUERY = """
SELECT CATEGORY, Slope
, Intercept
, R2
, Cast(Prediction as Decimal(12)) as Prediction
FROM summaryDF
"""
# ORDER BY YEAR DESC, TOTAL DESC
# Category,R2, Cast(Prediction as Decimal(12)) as Prediction
#ORDER BY Pediction Desc
regressionDF = spark.sql(SQLQUERY)
regressionDF.show(5,truncate = False)
###Output
+----------------------------------+--------------------+----------------------+------------------+-----------+
|CATEGORY |Slope |Intercept |R2 |Prediction |
+----------------------------------+--------------------+----------------------+------------------+-----------+
|Personal Finance |3.095318445488712E8 |-6.208465795394869E11 |0.5257188832329653|4407746449 |
|Reviews and Recommendations |9.384350510535592E7 |-1.8807660100807297E11|0.6748252913534014|1487279305 |
|Health Care |1.0898642181609554E9|-2.1792508864081628E12|0.8747611056595744|22274834277|
|Application Performance Monitoring|1.4638639037091857E8|-2.9369013932035425E11|0.5668998218851391|2010369229 |
|Credit Cards |6.334192286089984E7 |-1.2700968121996451E11|0.5027780036421108|941002959 |
+----------------------------------+--------------------+----------------------+------------------+-----------+
only showing top 5 rows
###Markdown
Vectorization of the features
###Code
from pyspark.ml.feature import VectorAssembler
from pyspark.mllib.util import MLUtils
from pyspark.ml.feature import Normalizer
vectorAssembler = VectorAssembler(inputCols = [ 'R2' ], outputCol = 'features')
featureDF = vectorAssembler.transform(regressionDF).select('CATEGORY','Slope','R2', col('Prediction').alias('Projection'), 'features')
featureDF.show(5)
###Output
+--------------------+--------------------+------------------+-----------+--------------------+
| CATEGORY| Slope| R2| Projection| features|
+--------------------+--------------------+------------------+-----------+--------------------+
| Personal Finance| 3.095318445488712E8|0.5257188832329653| 4407746449|[0.5257188832329653]|
|Reviews and Recom...| 9.384350510535592E7|0.6748252913534014| 1487279305|[0.6748252913534014]|
| Health Care|1.0898642181609554E9|0.8747611056595744|22274834277|[0.8747611056595744]|
|Application Perfo...|1.4638639037091857E8|0.5668998218851391| 2010369229|[0.5668998218851391]|
| Credit Cards| 6.334192286089984E7|0.5027780036421108| 941002959|[0.5027780036421108]|
+--------------------+--------------------+------------------+-----------+--------------------+
only showing top 5 rows
###Markdown
KMeans Model training
###Code
from pyspark.ml.clustering import KMeans
from pyspark.ml.evaluation import ClusteringEvaluator
# Trains a k-means model.
kmeans = KMeans().setK(3).setSeed(1)
model = kmeans.fit(featureDF)
# Make predictions
predictions = model.transform(featureDF)
###Output
_____no_output_____
###Markdown
Clustering efficiency using Silhouette Score
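The evaluator used in the next cell can also help choose the number of clusters; a small sketch (reusing `featureDF` built above) that compares a few values of k before settling on k = 3:

```python
from pyspark.ml.clustering import KMeans
from pyspark.ml.evaluation import ClusteringEvaluator

# Hypothetical sweep over k; higher silhouette generally means better-separated clusters
for k in range(2, 6):
    candidate = KMeans().setK(k).setSeed(1).fit(featureDF)
    preds = candidate.transform(featureDF)
    print(k, ClusteringEvaluator().evaluate(preds))
```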
###Code
# Evaluate clustering by computing Silhouette score
evaluator = ClusteringEvaluator()
silhouette = evaluator.evaluate(predictions)
print("Silhouette with squared euclidean distance = " + str(silhouette))
###Output
Silhouette with squared euclidean distance = 0.7790661080316511
###Markdown
Centroids of the new stereotypes
###Code
centers = model.clusterCenters()
for center in centers:
print(str(float(center[0])))
#print(str(float(center[1])))
###Output
0.5456033326212057
0.6467024333720977
0.7785377496273099
###Markdown
Dataset with clustering information
###Code
predictions.show(5)
###Output
+--------------------+--------------------+------------------+-----------+--------------------+----------+
| CATEGORY| Slope| R2| Projection| features|prediction|
+--------------------+--------------------+------------------+-----------+--------------------+----------+
| Personal Finance| 3.095318445488712E8|0.5257188832329653| 4407746449|[0.5257188832329653]| 0|
|Reviews and Recom...| 9.384350510535592E7|0.6748252913534014| 1487279305|[0.6748252913534014]| 1|
| Health Care|1.0898642181609554E9|0.8747611056595744|22274834277|[0.8747611056595744]| 2|
|Application Perfo...|1.4638639037091857E8|0.5668998218851391| 2010369229|[0.5668998218851391]| 0|
| Credit Cards| 6.334192286089984E7|0.5027780036421108| 941002959|[0.5027780036421108]| 0|
+--------------------+--------------------+------------------+-----------+--------------------+----------+
only showing top 5 rows
###Markdown
Conversion to arrays for matplotlib
###Code
slope = [row.Slope for row in predictions.collect()]
projected = [row.R2 for row in predictions.collect()]
color = [row.prediction for row in predictions.collect()]
###Output
_____no_output_____
###Markdown
Clustered Data Representation
###Code
plt.figure(figsize=(8,6))
plt.scatter(slope, projected, c=color)
plt.xlabel('Investment -->')
plt.ylabel('R2 score -->')
plt.savefig('KMeansCluster.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Model Details
###Code
display(model,predictions)
###Output
_____no_output_____
###Markdown
Grouping the cluster data to find the distribution
###Code
clusterData = predictions.groupBy('prediction').sum('projection').select('prediction',col("sum(projection)").alias("total"))
clusterData.show()
###Output
+----------+------------+
|prediction| total|
+----------+------------+
| 1|257535294020|
| 2| 91792679842|
| 0|160527453653|
+----------+------------+
###Markdown
Assigning the cluster names based on the data analysis
###Code
clusters = ['Fluctuating','Moderate risk', 'Stable']
clusterNames = [clusters[row.prediction] for row in clusterData.collect()]
clusterShare = [row.total for row in clusterData.collect()]
clusterShare
###Output
_____no_output_____
###Markdown
Distribution chart of the new stereotypes
###Code
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.axis('equal')
# Label each wedge with its cluster name; wedge sizes are the projected totals
ax.pie(clusterShare, labels=clusterNames, autopct='%1.2f%%')
plt.savefig('ClusterShare.png', dpi=300)
plt.show()
###Output
_____no_output_____ |
sec10.ipynb | ###Markdown
Table of Contents1 Prepare1.1 Import Library1.2 Fixed Point1.3 Newton's Method1.4 Secant Method1.5 Quasi-Newton Method1.6 Steepest Descent2 Run 2.1 1-Dim2.1.1 Fixed Point2.1.2 Newton's method2.1.3 Secant Method2.2 Multi-Dim2.2.1 Fixed Point2.2.2 Newton's method2.2.3 Quasi-Newton method2.2.4 Steepest Descent You can directly go to **Chapter 2 (Run)** to look experiment results. Prepare Import Library
###Code
import numpy as np
from numpy import linalg
from abc import abstractmethod
import pandas as pd
import math
pd.options.display.float_format = '{:,.8f}'.format
np.set_printoptions(suppress=True, precision=8)
TOR = pow(10.0, -9)
MAX_ITR = 45
###Output
_____no_output_____
###Markdown
Fixed Point
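For reference, the classes below implement the basic fixed-point iteration: starting from $x_0$, repeat

$$x_{k+1} = g(x_k)$$

until $\|x_{k+1} - x_k\| < \text{TOR}$ (the 1-D drivers also cap the number of steps at `MAX_ITR`).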
###Code
class FixedPointMethod(object):
def __init__(self):
return
@abstractmethod
def f(self, x):
return NotImplementedError('Implement f()!')
@abstractmethod
def run(self, x):
return NotImplementedError('Implement run()!')
class FixedPoint1D(FixedPointMethod):
def __init__(self):
super(FixedPointMethod, self).__init__()
def f(self, x):
return math.cos(x)
def g1(self, x):
return np.array(x - pow(x, 3) - 4 * pow(x, 2) + 10)
def g2(self, x):
return np.array(math.sqrt(10 / x - 4 * x))
def g3(self, x):
return np.array(math.sqrt(10 - pow(x, 3)) / 2)
def g4(self, x):
return np.array(math.sqrt(10 / (4 + x)))
def g5(self, x):
return np.array(x - (pow(x, 3) + 4 * pow(x, 2) - 10) / (3 * pow(x, 2) + 8 * x))
def run2(self, x0):
df = pd.DataFrame(columns=['(FP) f(x)'])
row = len(df)
x = x0
df.loc[row] = [x]
for k in range(MAX_ITR):
try:
y = self.f(x)
except ValueError:
break
residual = math.fabs(x - y)
x = y
row = len(df)
df.loc[row] = [y]
if residual < TOR or x > 1e9:
break
return df
def run(self, x0):
"""
        run each fixed-point form g1, ..., g5 starting from the scalar x0.
        :param x0: scalar starting point
        :return: DataFrame with the iterates produced by each g_i
"""
g = [self.g1, self.g2, self.g3, self.g4, self.g5]
total_df = None
for j in range(len(g)):
df = pd.DataFrame(columns=['g' + str(j + 1) + '(x)'])
row = len(df)
x = x0
df.loc[row] = [x]
for k in range(MAX_ITR):
try:
y = np.array(g[j](x))
except ValueError:
break
residual = math.fabs(x - y)
x = y
row = len(df)
df.loc[row] = [y]
if residual < TOR or x > 1e9:
break
total_df = df if total_df is None else pd.concat([total_df, df], axis=1)
return total_df
class FixedPoint(FixedPointMethod):
def __init__(self):
super(FixedPointMethod, self).__init__()
def f(self, x):
sol = np.zeros(len(x))
sol[0] = math.cos(x[1] * x[2]) / 3.0 + 1.0 / 6.0
sol[1] = math.sqrt(x[0] * x[0] + math.sin(x[2]) + 1.06) / 9.0 - 0.1
sol[2] = -math.exp(-x[0] * x[1]) / 20.0 - (10.0 * math.pi - 3.0) / 60.0
return sol
def run(self, x):
"""
given x_0 in R^3 as a starting point.
:param x: x_0 as described
        :return: DataFrame recording the iterates and residuals
"""
df = pd.DataFrame(columns=['x' + str(i + 1) for i in range(len(x))] + ['residual', 'actual-residual'])
row = len(df)
df.loc[row] = [xe for xe in x] + [np.nan, np.nan]
while True:
y = self.f(x)
residual = linalg.norm(x - y, np.inf)
x = y
row = len(df)
df.loc[row] = [ye for ye in y] + [residual, np.nan]
if residual < TOR:
break
for i in range(len(df)):
xk = np.array([df.loc[i][j] for j in range(len(x))])
            # use .loc[row, col] so the value is written into the DataFrame itself
            df.loc[i, 'actual-residual'] = linalg.norm(x - xk, np.inf)
return df
class FixedPointAcceleration(FixedPointMethod):
def __init__(self):
super(FixedPointMethod, self).__init__()
def f(self, x):
sol = np.zeros(len(x))
sol[0] = math.cos(x[1] * x[2]) / 3.0 + 1.0 / 6.0
sol[1] = math.sqrt(sol[0] * sol[0] + math.sin(x[2]) + 1.06) / 9.0 - 0.1
sol[2] = -math.exp(-sol[0] * sol[1]) / 20.0 - (10.0 * math.pi - 3.0) / 60.0
return sol
def run(self, x):
"""
given x_0 in R^3 as a starting point.
:param x: x_0 as described
        :return: DataFrame recording the iterates and residuals
"""
df = pd.DataFrame(columns=['x' + str(i + 1) for i in range(len(x))] + ['residual', 'actual-residual'])
row = len(df)
df.loc[row] = [xe for xe in x] + [np.nan, np.nan]
while True:
y = self.f(x)
residual = linalg.norm(x - y, np.inf)
x = y
row = len(df)
df.loc[row] = [ye for ye in y] + [residual, np.nan]
if residual < TOR:
break
for i in range(len(df)):
xk = np.array([df.loc[i][j] for j in range(len(x))])
            df.loc[i, 'actual-residual'] = linalg.norm(x - xk, np.inf)
return df
###Output
_____no_output_____
###Markdown
Newton's Method
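For reference, the 1-D class below applies the classical update $x_{k+1} = x_k - f(x_k)/f'(x_k)$, while the multi-dimensional class solves a linear system with the Jacobian at every step:

$$J(x_k)\,y_k = -F(x_k), \qquad x_{k+1} = x_k + y_k.$$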
###Code
class NewtonMethod(object):
def __init__(self):
return
@abstractmethod
def f(self, x):
return NotImplementedError('Implement f()!')
@abstractmethod
def jacobian(self, x):
return NotImplementedError('Implement jacobian()!')
@abstractmethod
def run(self, x):
return NotImplementedError('Implement run()!')
class Newton1D(NewtonMethod):
def __init__(self):
super(NewtonMethod, self).__init__()
def f(self, x):
return math.cos(x) - x
def jacobian(self, x):
return -math.sin(x) - 1
def run(self, x0):
df = pd.DataFrame(columns=['(NT) f(x)'])
row = len(df)
x = x0
df.loc[row] = [x]
for k in range(MAX_ITR):
try:
y = x - self.f(x) / self.jacobian(x)
except ValueError:
break
residual = math.fabs(x - y)
x = y
row = len(df)
df.loc[row] = [y]
if residual < TOR or x > 1e9:
break
return df
class Newton(NewtonMethod):
def __init__(self):
super(NewtonMethod, self).__init__()
def f(self, x):
sol = np.zeros(len(x))
sol[0] = 3 * x[0] - math.cos(x[1] * x[2]) - 1.0 / 2.0
sol[1] = pow(x[0], 2) - 81 * pow(x[1] + 0.1, 2) + math.sin(x[2]) + 1.06
sol[2] = math.exp(-x[0] * x[1]) + 20 * x[2] + (10 * math.pi - 3.0) / 3.0
return sol
def jacobian(self, x):
jac = np.zeros(shape=(3, 3))
jac[0][0] = 3.0
jac[0][1] = x[2] * math.sin(x[1] * x[2])
jac[0][2] = x[1] * math.sin(x[1] * x[2])
jac[1][0] = 2 * x[0]
jac[1][1] = -162 * (x[1] + 0.1)
jac[1][2] = math.cos(x[2])
jac[2][0] = -x[1] * math.exp(-x[0] * x[1])
jac[2][1] = -x[0] * math.exp(-x[0] * x[1])
jac[2][2] = 20
return jac
def run(self, x):
"""
given x_0 in R^3 as a starting point.
:param x: x_0 as described
        :return: DataFrame recording the iterates and residuals
"""
df = pd.DataFrame(columns=['x' + str(i + 1) for i in range(len(x))] + ['residual', 'actual-residual'])
row = len(df)
df.loc[row] = [xe for xe in x] + [np.nan, np.nan]
while True:
jac = self.jacobian(x)
f = -self.f(x)
y = linalg.solve(jac, f)
nx = x + y
residual = linalg.norm(x - nx, np.inf)
x = nx
row = len(df)
df.loc[row] = [nxe for nxe in nx] + [residual, np.nan]
if residual < TOR:
break
for i in range(len(df)):
xk = np.array([df.loc[i][j] for j in range(len(x))])
            df.loc[i, 'actual-residual'] = linalg.norm(x - xk, np.inf)
return df
###Output
_____no_output_____
###Markdown
Secant Method
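For reference, the class below replaces the derivative in Newton's update with a difference quotient built from the two most recent iterates:

$$p_k = p_{k-1} - \frac{f(p_{k-1})\,(p_{k-1} - p_{k-2})}{f(p_{k-1}) - f(p_{k-2})}.$$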
###Code
class Secant1D(NewtonMethod):
def __init__(self):
super(NewtonMethod, self).__init__()
def f(self, x):
return math.cos(x) - x
def jacobian(self, x):
return
def run(self, x0):
df = pd.DataFrame(columns=['(ST) f(x)'])
row = len(df)
p0 = x0[0]
p1 = x0[1]
q0 = self.f(p0)
q1 = self.f(p1)
x = p0
df.loc[row] = [x]
for k in range(2, MAX_ITR):
try:
y = p1 - q1 * (p1 - p0) / (q1 - q0)
except ValueError:
break
residual = math.fabs(y - p1)
p0 = p1
q0 = q1
p1 = y
q1 = self.f(y)
row = len(df)
df.loc[row] = [y]
if residual < TOR or x > 1e9:
break
return df
###Output
_____no_output_____
###Markdown
Quasi-Newton Method
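For reference, the Broyden class below keeps an approximation $A_k \approx J(x_k)^{-1}$ and refreshes it with the Sherman–Morrison form of Broyden's update,

$$A_k = A_{k-1} + \frac{\left(s_k - A_{k-1} y_k\right) s_k^{T} A_{k-1}}{s_k^{T} A_{k-1} y_k}, \qquad s_k = x_k - x_{k-1}, \quad y_k = F(x_k) - F(x_{k-1}),$$

so each step is $x_{k+1} = x_k - A_k F(x_k)$ without recomputing or inverting the Jacobian.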
###Code
class Broyden(NewtonMethod):
def __init__(self):
super(NewtonMethod, self).__init__()
def f(self, x):
sol = np.zeros(len(x))
sol[0] = 3 * x[0] - math.cos(x[1] * x[2]) - 1.0 / 2.0
sol[1] = pow(x[0], 2) - 81 * pow(x[1] + 0.1, 2) + math.sin(x[2]) + 1.06
sol[2] = math.exp(-x[0] * x[1]) + 20 * x[2] + (10 * math.pi - 3.0) / 3.0
return sol
def jacobian(self, x):
jac = np.zeros(shape=(3, 3))
jac[0][0] = 3.0
jac[0][1] = x[2] * math.sin(x[1] * x[2])
jac[0][2] = x[1] * math.sin(x[1] * x[2])
jac[1][0] = 2 * x[0]
jac[1][1] = -162 * (x[1] + 0.1)
jac[1][2] = math.cos(x[2])
jac[2][0] = -x[1] * math.exp(-x[0] * x[1])
jac[2][1] = -x[0] * math.exp(-x[0] * x[1])
jac[2][2] = 20
return jac
def run(self, x):
"""
given x_0 in R^3 as a starting point.
:param x: x_0 as described
        :return: DataFrame recording the iterates and residuals
"""
df = pd.DataFrame(columns=['x' + str(i + 1) for i in range(len(x))] + ['residual', 'actual-residual'])
row = len(df)
df.loc[row] = [xe for xe in x] + [np.nan, np.nan]
A0 = self.jacobian(x)
v = self.f(x)
A = linalg.inv(A0)
s = -A.dot(v)
nx = x + s
row = len(df)
x = nx
residual = linalg.norm(s, 2)
df.loc[row] = [nxe for nxe in nx] + [residual, np.nan]
for k in range(2, MAX_ITR):
w = v
v = self.f(x)
y = v - w
z = -A.dot(y)
p = -s.transpose().dot(z)
u = s.transpose().dot(A)
A = A + 1 / p * np.outer((s + z), u)
s = -A.dot(v)
nx = x + s
residual = linalg.norm(s, 2)
x = nx
row = len(df)
df.loc[row] = [nxe for nxe in nx] + [residual, np.nan]
if residual < TOR:
break
for i in range(len(df)):
xk = np.array([df.loc[i][j] for j in range(len(x))])
            df.loc[i, 'actual-residual'] = linalg.norm(x - xk, 2)
return df
###Output
_____no_output_____
###Markdown
Steepest Descent
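For reference, the class below minimizes $g(\mathbf{x}) = \sum_i f_i(\mathbf{x})^2$, whose gradient is $\nabla g(\mathbf{x}) = 2\,J(\mathbf{x})^{T} F(\mathbf{x})$, by stepping against the normalized gradient,

$$\mathbf{x}_{k+1} = \mathbf{x}_k - \alpha_k\,\frac{\nabla g(\mathbf{x}_k)}{\|\nabla g(\mathbf{x}_k)\|_2},$$

with the step length $\alpha_k$ chosen by interpolating a quadratic through three trial values of $\alpha$.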
###Code
class SteepestDescentMethod(object):
def __init__(self):
return
@abstractmethod
def f(self, x):
return NotImplementedError('Implement f()!')
@abstractmethod
def g(self, x):
return NotImplementedError('Implement g()!')
@abstractmethod
def grad_g(self, x):
return NotImplementedError('Implement grad_g()!')
@abstractmethod
def jacobian(self, x):
return NotImplementedError('Implement jacobian()!')
@abstractmethod
def run(self, x):
return NotImplementedError('Implement run()!')
class SteepestDescent(SteepestDescentMethod):
def __init__(self):
super(SteepestDescentMethod, self).__init__()
def f(self, x):
sol = np.zeros(len(x))
sol[0] = 3 * x[0] - math.cos(x[1] * x[2]) - 1.0 / 2.0
sol[1] = pow(x[0], 2) - 81 * pow(x[1] + 0.1, 2) + math.sin(x[2]) + 1.06
sol[2] = math.exp(-x[0] * x[1]) + 20 * x[2] + (10 * math.pi - 3.0) / 3.0
return sol
def g(self, x):
sol = self.f(x)
return sum([e * e for e in sol])
def grad_g(self, x):
return 2 * self.jacobian(x).transpose().dot(self.f(x))
def jacobian(self, x):
jac = np.zeros(shape=(3, 3))
jac[0][0] = 3.0
jac[0][1] = x[2] * math.sin(x[1] * x[2])
jac[0][2] = x[1] * math.sin(x[1] * x[2])
jac[1][0] = 2 * x[0]
jac[1][1] = -162 * (x[1] + 0.1)
jac[1][2] = math.cos(x[2])
jac[2][0] = -x[1] * math.exp(-x[0] * x[1])
jac[2][1] = -x[0] * math.exp(-x[0] * x[1])
jac[2][2] = 20
return jac
def run(self, x):
"""
given x_0 in R^3 as a starting point.
:param x: x_0 as described
        :return: DataFrame recording the iterates (or the current x if the line search stalls)
"""
df = pd.DataFrame(columns=['x' + str(i + 1) for i in range(len(x))] + ['g', 'residual', 'actual-residual'])
row = len(df)
df.loc[row] = [xe for xe in x] + [self.g(x), np.nan, np.nan]
while True:
prev_x = x
g1 = self.g(x)
z = self.grad_g(x)
z0 = linalg.norm(z, 2)
if z0 == 0.0:
print('Zero gradient')
return x
z /= z0
alpha3 = 1
g3 = self.g(x - alpha3 * z)
while g3 >= g1:
alpha3 /= 2.0
g3 = self.g(x - alpha3 * z)
if alpha3 < TOR / 2.0:
print('No likely improvement')
return x
alpha2 = alpha3 / 2.0
g2 = self.g(x - alpha2 * z)
h1 = (g2 - g1) / alpha2
h2 = (g3 - g2) / (alpha3 - alpha2)
h3 = (h2 - h1) / alpha3
alpha0 = (alpha2 - h1 / h3) / 2.0
g0 = self.g(x - alpha0 * z)
alpha = alpha0
g = g0
if g3 < g:
alpha = alpha3
g = g3
x = x - alpha * z
residual = linalg.norm(x - prev_x, np.inf)
row = len(df)
df.loc[row] = [nxe for nxe in x] + [g, residual, np.nan]
if math.fabs(g - g1) < TOR:
break
for i in range(len(df)):
xk = np.array([df.loc[i][j] for j in range(len(x))])
            df.loc[i, 'actual-residual'] = linalg.norm(xk - x, np.inf)
return df
###Output
_____no_output_____
###Markdown
Run 1-Dim Fixed Point
###Code
pd.options.display.float_format = '{:,.9f}'.format
x0 = np.array(1.5)
FixedPoint1D().run(x0).astype(np.float64)
###Output
_____no_output_____
###Markdown
Newton's method
###Code
pd.options.display.float_format = '{:,.10f}'.format
x0 = math.pi / 4.0
pd.concat([FixedPoint1D().run2(x0), Newton1D().run(x0)], axis=1)
###Output
_____no_output_____
###Markdown
Secant Method
###Code
pd.options.display.float_format = '{:,.10f}'.format
x0 = 0.5
x1 = math.pi / 4.0
pd.concat([Newton1D().run(x1), Secant1D().run([x0, x1])], axis=1)
###Output
_____no_output_____
###Markdown
Multi-Dim Fixed Point
###Code
pd.options.display.float_format = '{:,.8f}'.format
x0 = np.array([0.1, 0.1, -0.1])
FixedPoint().run(x0)
x0 = np.array([0.1, 0.1, -0.1])
FixedPointAcceleration().run(x0)
###Output
_____no_output_____
###Markdown
Newton's method
###Code
pd.options.display.float_format = '{:,.10f}'.format
x0 = np.array([0.1, 0.1, -0.1])
Newton().run(x0)
###Output
_____no_output_____
###Markdown
Quasi-Newton method
###Code
pd.options.display.float_format = '{:,.8f}'.format
x0 = np.array([0.1, 0.1, -0.1])
Broyden().run(x0)
###Output
_____no_output_____
###Markdown
Steepest Descent
###Code
pd.options.display.float_format = '{:,.6f}'.format
x0 = np.array([0, 0, 0])
SteepestDescent().run(x0)
###Output
_____no_output_____ |
media/f16-scientific-python/PythonScientificWS-Week1.ipynb | ###Markdown
Goal of Week 1This week, we will:+ Spend a little bit of time getting familiarized with the awesomeness of Jupyter Notebook (and also learn a few oddities that come with it)+ Try a few simple programs. + Learn the difference between Print() and Return. + And if we have time, we can load data from an excel/csv file. Note+ Remember, to run each individual cell you can hit `Ctrl+Enter`+ If you want to add a cell above or below, you can hit `Esc + A` or `Esc + B`. Or you can go to the toolbar at the top -> Insert -> Insert Cell Above/Insert Cell Below+ If you want to run all the cells altogether, go to the toolbar -> Cell -> Run All.+ In the case something catastrophic happens and everything explodes, keep calm, go to the toolbar -> Kernel -> Restart Let's get started with our first program!
###Code
print("Hello World")
###Output
Hello World
###Markdown
Awesome. Our "program" prints out exactly what we want to see. Now let's try to do some simple arithmetic.
###Code
6 + 6
###Output
_____no_output_____
###Markdown
As you can see, the beauty of using a notebook is that it's very interactive. Unlike when you use IDLE or an IDE like PyCharm, each of the cells in this notebook can be independently run on its own OR be used all together. It also shows you the result right away. Let's try some more simple operations.
###Code
6 - 4
6*6
6**6
6%4
6//4
###Output
_____no_output_____
###Markdown
So that's all pretty neat. Let's see now what will happen if we want to define some variables.
###Code
a = 6
b = 10
my_sum = a+b
###Output
_____no_output_____
###Markdown
If you just run the cell just like that, nothing will show up. This is because all you did was tell these variables to be something. You haven't called them out yet. Let's do that then.
###Code
a
b
my_sum
###Output
_____no_output_____
###Markdown
That's neat. But let's say I want to print my result out a little bit more explicitly with some annotation of what the program is showing.
###Code
print("My result is" + my_sum)
###Output
_____no_output_____
###Markdown
That's weird. It didn't work. Let's google this [TypeError: Can't convert 'int' object to str implicitly] error to see what it means. I found this: http://stackoverflow.com/questions/13654168/typeerror-cant-convert-int-object-to-str-implicitlyAccording to the top answer, "You cannot concatenate a string with an int. You would need to convert your int to string using str function, or use formatting to format your output."So let's try the top answerer's solution now. We will put str() around my_sum to see if it works.
###Code
print("My result is " + str(my_sum))
###Output
My result is 16
###Markdown
Awesome. So now we know my_sum was apparently recognized and interpreted as an integer in Python so when we try to add it to a word, Python couldn't figure out what we were trying to do. Now it can understand that yes, temporarily, we want to treat my_sum as a word so we can print it out. Since I want to remember why I put my_sum in the str() thingy, I'm going to add a comment about it so I can remember later. Comments in Python are preceded with `#`. Every time Python sees this `#`, it's going to ignore the rest of the line
###Code
print("My commented result is " + str(my_sum)) # cast my_sum as a string so I can print the whole statement
###Output
My commented result is 16
###Markdown
Great! Now let's go back to when we did our first program, "Hello World". What we just did was **printing** the words "Hello World". Let's see what happens when we try to do **return** instead of print.
###Code
return ('Hello World')
###Output
_____no_output_____
###Markdown
You can see that if you just return ("Hello World") instead of print, Python will give you a [SyntaxError: 'return' outside function error]. Let's search what the error is.Not sure what everything means, but this might look like a problem:http://stackoverflow.com/questions/26824659/python-return-outside-function-syntax-errorThe return command is part of the function definition, so must be indented under the function introduction (which is def donuts()). If it weren't, there would be no way for the interpreter to know that the return statement wasn't just a part of your broader code.So it seems like return only works if it's called within a thing called "function". Let's try to do that.
###Code
def returnHelloWorld():
return ("Hello World")
returnHelloWorld()
# if you run this cell by itself before the cell above, you get NameError: name 'returnHelloWorld' is not defined
###Output
_____no_output_____
###Markdown
Note that if you haven't run the last cell, Python is going to be very confused and give a `NameError: name 'returnHelloWorld' is not defined error`This is a very odd quirk of using Jupyter Notebook. The reason is that each of these cells acts independently from the others. So if you just wrote the cell and don't run it (either using Ctrl+Enter or by navigating to Cell -> Run cells), the function will never be recognized. So let's run the def returnHelloWorld cell and see what happens when we call it again.
###Code
returnHelloWorld()
###Output
_____no_output_____
###Markdown
Awesome! Now it works! Now before we go into something a little bit more "advanced", this will be a really good opportunity for me to show some differences between **print** and **return**. Hopefully you will never have to go through the agony of figuring out what went wrong with your functions. I'm going to write a new function that will print, not return, Hello World. In Python, a function starts with **def**. Not sure why they did it that way, but we are going to go with it.```pythondef nameOfFunction(stuffYouWantToPassInIfThereIsAny): stuff you want to do note how everything inside is indented```
###Code
def printHelloWorld():
print('Hello World')
###Output
_____no_output_____
###Markdown
Let's call printHelloWorld now
###Code
printHelloWorld()
###Output
Hello World
###Markdown
OK Quynh, this seems exactly the same as what returnHelloWorld did. What's the difference?There's a difference! I will show you. Let's say now I want to have a new variable named "my_result", and I want to assign whatever result I get from the HelloWorld function to the variable "my_result". Then, I want to take whatever is contained in my_result and add ", I'm here" to the end of it. Let's see how it works out.
###Code
my_result1 = returnHelloWorld()
my_result1 + ", I'm here"
my_result2 = printHelloWorld()
my_result2 + ", I'm here"
###Output
_____no_output_____ |
demo/HumpbackWhaleSegmentation.ipynb | ###Markdown
Prep and Setup Install Detectron2
###Code
!pip install -q pyyaml==5.1
# This is the current pytorch version on Colab. Uncomment this if Colab changes its pytorch version
# !pip install torch==1.9.0+cu102 torchvision==0.10.0+cu102 -f https://download.pytorch.org/whl/torch_stable.html
# Install detectron2 that matches the above pytorch version
# See https://detectron2.readthedocs.io/tutorials/install.html for instructions
!pip install -q detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.9/index.html
# exit(0) # After installation, you need to "restart runtime" in Colab. This line can also restart runtime
###Output
_____no_output_____
###Markdown
Download code from repository
###Code
! git clone https://github.com/seqryan/HumpbackWhaleSegmentation.git
###Output
_____no_output_____
###Markdown
Download dataset
###Code
% cd /content/HumpbackWhaleSegmentation
# Read instructions to generate kaggle.json file: https://www.kaggle.com/general/74235
! pip install -q --upgrade --force-reinstall --no-deps kaggle
from google.colab import files
print("Upload kaggle.json")
files.upload() # Upload the kaggle.json to cwd
# create ~/.kaggle folder and move the file kaggle.json to this folder
! mkdir -p ~/.kaggle
! mv kaggle.json ~/.kaggle/
! chmod 600 ~/.kaggle/kaggle.json
# download the dataset
# Ensure that you agree to the terms of the competition before downloading the dataset to avoid 403 - Forbidden error.
# Competition URL: https://www.kaggle.com/c/whale-categorization-playground
! kaggle competitions download -c whale-categorization-playground
# unzip and move the data to dataset directory
! mkdir dataset
! unzip -q whale-categorization-playground.zip -d dataset
! rm whale-categorization-playground.zip # cleanup to save disk space
###Output
_____no_output_____
###Markdown
Download annotations
###Code
% cd /content/HumpbackWhaleSegmentation
!wget https://github.com/seqryan/HumpbackWhaleSegmentation/releases/download/v0.1/detectron2-whale-segmentation-annotations.zip
!unzip -q detectron2-whale-segmentation-annotations.zip -d .
###Output
_____no_output_____
###Markdown
Set all relevant paths
###Code
% cd /content/HumpbackWhaleSegmentation
ANNOTATIONS_FILE_NAME = 'Whale_Segmentation.json'
DATASET_DIR = 'dataset'
SAVE_WEIGHTS_DIR = 'model_weights'
LOAD_WEIGHTS_DIR = 'model_weights' # change to a different path if you are using pretrained weights
OUTPUT_DIR = 'segmented_dataset'
###Output
/content/HumpbackWhaleSegmentation
###Markdown
Train Segmentation Model
###Code
% cd /content/HumpbackWhaleSegmentation
! python run.py train -s $SAVE_WEIGHTS_DIR -a $ANNOTATIONS_FILE_NAME -d $DATASET_DIR
###Output
_____no_output_____
###Markdown
Load pretrained weights and generate segmented images
###Code
% cd /content/HumpbackWhaleSegmentation
!wget https://github.com/seqryan/HumpbackWhaleSegmentation/releases/download/v0.1/detectron2-whale-segmentation-weights.zip
!unzip -q detectron2-whale-segmentation-weights.zip -d .
LOAD_WEIGHTS_DIR = 'pretrained_weights' # change to a different path if you are using your own trained weights
% cd /content/HumpbackWhaleSegmentation
! python run.py save -l $LOAD_WEIGHTS_DIR -a $ANNOTATIONS_FILE_NAME -d $DATASET_DIR -o $OUTPUT_DIR
! zip -q -r segmented_dataset.zip $OUTPUT_DIR
###Output
_____no_output_____
###Markdown
Preview segmented results
###Code
from matplotlib import pyplot as plt
from matplotlib.pyplot import figure
import random
import os
import cv2
figure(num=None,figsize=(10,10),dpi=80,facecolor='w',edgecolor='k')
images_names = os.listdir(os.path.join(OUTPUT_DIR, 'train'))
plt.axis('off')
samples = 5
index = 0
for d in random.sample(images_names, samples):
index += 1
ax = plt.subplot(samples, 2, index)
ax.imshow(cv2.imread(os.path.join(DATASET_DIR, 'train', d)))
ax.axis('off')
index += 1
ax = plt.subplot(samples, 2, index)
ax.imshow(cv2.imread(os.path.join(OUTPUT_DIR, 'train', d)))
ax.axis('off')
plt.show()
###Output
_____no_output_____ |
app/mstar_sar_classification_demo.ipynb | ###Markdown
MSTAR SAR Image Classification: OpenVINO version check:You are currently using the latest development version of Intel® Distribution of OpenVINO™ Toolkit. Alternatively, you can open a version of this notebook for the Intel® Distribution of OpenVINO™ Toolkit LTS version by [clicking this link](../../../../openvino-lts/developer-samples/python/mstar-sar-classification-python/mstar_sar_classification_demo.ipynb). This sample showcases the use of the **Intel® Distribution of OpenVINO™** toolkit to optimize and deploy an internally developed ResNET18 model that classifies Synthetic Aperture Radar (SAR) images associated with 10 separate military vehicle classes, such as tanks and armored vehicles. The deployed model processes 3,606 Synthetic Aperture Radar (SAR) images across 10 target classes in order to benchmark the model’s Frames Per Second and the Seconds Per Frame across Intel’s hardware portfolio available on Dev Cloud. Overview of how it worksAt start-up the sample application reads the command line arguments and loads a network and SAR input image to the Inference Engine (IE) plugin. A job is submitted to an edge compute node with a hardware accelerator such as Intel® HD Graphics GPU and Intel® Movidius™ Neural Compute Stick 2.After the inference is completed on all 3600+ images, the number of correct/incorrect images is stored in the /results directory. Demonstration objectives * Image as input is supported using **OpenCV** * Inference performed on edge hardware (rather than on the development node hosting this Jupyter notebook) * Accurate classification of Synthetic Aperture Radar images PrerequisitesThis sample requires the following:- All files are present and in the following directory structure: - **mstar_sar_classification_demo.ipynb** - This Jupyter* Notebook - **mstar_sar_classification_run_all.py** - Python* code for SAR image classification application - **/data/reference-sample-data/mstar-sar-classification-python/model/mstar_sar.pb** - TensorFlow frozen graph of pretrained ResNet18 model (provided) - **/data/reference-sample-data/mstar-sar-classification-python/TEST** - Directory containing test images for SAR model - **images/HB15087.jpg** - Test image for Jupyter* Notebook demonstration It is recommended that you have already read the following from [Get Started on the Intel® DevCloud for the Edge](https://devcloud.intel.com/edge/home/):- [Overview of the Intel® DevCloud for the Edge](https://devcloud.intel.com/edge/get_started/devcloud/)- [Overview of the Intel® Distribution of OpenVINO™ toolkit](https://devcloud.intel.com/edge/get_started/openvino/)Note: It is assumed that the server this sample is being run on is on the Intel® DevCloud for the Edge which has Jupyter Notebook customizations and all the required libraries already installed. If you download or copy to a new server, this sample may not run. Set Up Import dependenciesRun the below cell to import Python dependencies needed for displaying the results in this notebook. Tip: Select a cell and then use **Ctrl+Enter** to run that cell.
###Code
from __future__ import print_function
import os
import sys
import cv2
import numpy as np
import logging as log
import os.path as ops
import matplotlib.pyplot as plt
from qarpo.demoutils import *
from IPython.display import HTML
from argparse import ArgumentParser
from openvino.inference_engine import IECore
from time import time
import warnings
warnings.filterwarnings('ignore',category=FutureWarning)
###Output
_____no_output_____
###Markdown
Using Intel® Distribution of OpenVINO™ toolkitFirst, let's try running inference on a single image to see how the Intel® Distribution of OpenVINO™ toolkit works.We will be using Intel® Distribution of OpenVINO™ toolkit Inference Engine (IE) to classify military vehicles as seen in SAR imagery.There are five steps involved in this task:1. Create an Intermediate Representation (IR) Model using the Model Optimizer by Intel2. Choose a device and create IEPlugin for the device3. Read the Model's IR using IENetwork4. Load the IENetwork into the Plugin5. Run inference. Create Intermediate Representation of the Model using Model Optimizer[Model Optimizer](http://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) creates the Intermediate Representation of the model which is the device-agnostic, generic optimization of the model. Caffe*, TensorFlow*, MXNet*, ONNX*, and Kaldi* models are supported by Model Optimizer. Tip: the '!' is a special Jupyter Notebook command that allows you to run shell commands as if you are in a command line. So the above command will work in a terminal (with '!' removed). **Run the cell below to generate an FP32 precision IR of the ResNet18 model**
###Code
!mo_tf.py \
--input_model /data/reference-sample-data/mstar-sar-classification-python/model/mstar_sar.pb \
--data_type FP32 \
-o models/FP32 \
--input_shape "[1,128,128,1]"
###Output
_____no_output_____
###Markdown
Note: the previous code cell is a single command line input, which spans 5 lines due to the backslash '\\', which is a line continuation character in Bash. Here, the arguments are:* --input_model : the original model* --data_type : Data type to use. One of {FP32, FP16, half, float}* --input_shape: shape of data input to the model [N, H, W, C]* -o : output directoryThis script also supports `-h` that will you can get the full list of arguments that the script accepts. With the `-o` option set as above, this command will write the output to the directory `model/FP32`There are two files produced:```models/FP32/mstar_sar.xmlmodels/FP32/mstar_sar.bin```These will be used later in the exercise.**We will also build an FP16 precision IR for the model using the same process.**This will produce two files as well:```models/FP16/mstar_sar.xmlmodels/FP16/mstar_sar.bin```
###Code
!mo_tf.py \
--input_model /data/reference-sample-data/mstar-sar-classification-python/model/mstar_sar.pb \
--data_type FP16 \
-o models/FP16 \
--input_shape "[1,128,128,1]"
###Output
_____no_output_____
###Markdown
Run Inference ExampleInitially we will build an example of running inference on the an image of a **ZSU-23-4 tank**. The image on the **left** shows the tank from the side with a standard RGB camera, while the SAR image looks significantly different as shown on the **right** due to being taken from overhead and being captured using SAR image sensors. Define variables
###Code
model_xml='models/FP32/mstar_sar.xml'
device_arg='CPU'
input_arg=['images/HB15087.jpg']
class_labels=['ZSU_23_4']
iterations=10
perf_counts=False
labels = np.array(['2S1', 'BMP2', 'BRDM_2', 'BTR70', 'BTR_60', 'D7', 'T62', 'T72', 'ZIL131', 'ZSU_23_4'])
log.basicConfig(format="[ %(levelname)s ] %(message)s", level=log.INFO, stream=sys.stdout)
###Output
_____no_output_____
###Markdown
Inference Engine initialization and load extensions libraryWe initialize the Inference Engine by calling the class **`IECore()`**. For more details on **`IECore`** see the IECore Documentation.
###Code
ie = IECore()
###Output
_____no_output_____
###Markdown
Read IRWe can use the **`IECore`** function **`read_network`** to import the optimized network IR.
###Code
model_bin = os.path.splitext(model_xml)[0] + ".bin"
net = ie.read_network(model=model_xml, weights=model_bin)
###Output
_____no_output_____
###Markdown
Preparing input blobs
###Code
input_blob = next(iter(net.input_info))
out_blob = next(iter(net.outputs))
net.batch_size = len(input_arg)
###Output
_____no_output_____
###Markdown
Read and pre-process input imagesFirst let's load the image using OpenCV.We will also have to do some shape manipulation to convert the image to a format that is compatible with our network.
###Code
n, c, h, w = net.input_info[input_blob].input_data.shape
images = np.ndarray(shape=(n, c, h, w))
for i in range(n):
image = cv2.imread(input_arg[i], 0) # Read image as greyscale
    if image.shape != (h, w):  # image was read as grayscale, so its shape is (H, W)
log.warning("Image {} is resized from {} to {}".format(input_arg[i], image.shape, (h, w)))
image = cv2.resize(image, (w, h))
# Normalize to keep data between 0 - 1
image = (np.array(image) - 0) / 255.0
# Change data layout from HWC to CHW
image = image.reshape((1, 1, h, w))
images[i] = image
log.info("Batch size is {}".format(n))
###Output
_____no_output_____
###Markdown
Loading model to the pluginOnce we have the Inference Engine and the network, we can load the network into the Inference Engine using **`ie.load_network`**.
###Code
exec_net = ie.load_network(network=net, device_name=device_arg)
###Output
_____no_output_____
###Markdown
Run InferenceWe can now run inference on the object **`exec_net`** using the **`infer`** function call.
###Code
infer_time = []
for i in range(iterations):
t0 = time()
result = exec_net.infer(inputs={input_blob: images})
infer_time.append((time()-t0)*1000)
result = result[out_blob]
log.info("Average running time of one iteration: {} ms".format(np.average(np.asarray(infer_time))))
###Output
_____no_output_____
###Markdown
Processing output blobThe network outputs a tensor of dimension 10 (number of classes), these values represent the probability that the image is a particular class. To make the final classification we find the class with the highest probability.
###Code
# Access the results and get the index of the highest confidence score
# Predicted class index.
for index, res in enumerate(result):
class_num = np.argmax(res)
if labels[class_num] == class_labels[index]:
print ("Result correctly classified as: {}".format(labels[class_num]))
else:
print ("Result incorrectly classified as: {}".format(labels[class_num]))
###Output
_____no_output_____
###Markdown
Job SubmissionThe inference code is already implemented in mstar_sar_classification_run_all.py.The Python code takes in command line arguments for images, model etc.**Command line arguments options and how they are interpreted in the application source code**```python3 mstar_sar_classification_run_all.py -m ${MODELPATH} \ -i ${INPUT_ADDR} \ -o ${OUTPUT_DIR} \ -d ${DEVICE} \ -pc```**The description of the arguments used in the argument parser is the command line executable equivalent.*** -m Location of the model's IR file (.xml + .bin) which has been converted using the **model optimizer**. There is automated support built in this argument to support both FP32 and FP16 models targeting different hardware* -i Path of the input images * -o Location where the output file with inference needs to be stored. (results/)* -d Type of Hardware Acceleration (CPU, GPU, MYRIAD)* -l Absolute path to extension library file to load to a plugin. (Optional)* -pc Report individual model layer performance counts will be reported to log file in the results/ directory (or as specified with -o argument) Creating the job fileWe will run inference on several different edge compute nodes present in the Intel® DevCloud for the Edge. We will send work to the edge compute nodes by submitting the corresponding non-interactive jobs into a queue. For each job, we will specify the type of the edge compute server that must be allocated for the job.The job file is a [Bash](https://www.gnu.org/software/bash/) script that serves as a wrapper around the Python* executable of our application that will be executed directly on the edge compute node. One purpose of the job file is to simplify running an application on different compute nodes by accepting a few arguments and then performing accordingly any necessary steps before and after running the application executable. For this sample, the job file we will be using is already written for you and appears in the next cell. The job file will be submitted as if it were run from the command line using the following format:```bashmstar_sar_job.sh ```Where the job file input arguments are:- - Output directory to use to store output files- - Hardware device to use (e.g. CPU, GPU, etc.)- - Which floating point precision inference model to use (FP32 or FP16)- - Path to input image file(s)Run the following cell to create the `mstar_sar_job.sh` job file. The [`%%writefile`](https://ipython.readthedocs.io/en/stable/interactive/magics.htmlcellmagic-writefile) line at the top will write the cell contents to the specified job file `mstar_sar_job.sh`.
###Code
%%writefile mstar_sar_job.sh
# MSTAR SAR job script writes output to a file inside a directory. We make sure that this directory exists.
# The output directory is the first argument of the bash script
OUTPUT_DIR=$1
DEVICE=$2
FP_MODEL=$3
INPUT_ADDR=$4
# The default path for the job is your home directory, so we change directory to where the files are.
cd $PBS_O_WORKDIR
mkdir -p $1
SAMPLEPATH=$PBS_O_WORKDIR
python3 mstar_sar_classification_run_all.py -m models/$3/mstar_sar.xml \
-i $4 \
-o $1 \
-d $2 \
-pc
###Output
_____no_output_____
###Markdown
Understand how jobs are submitted into the queueNow that we have the job script, we can submit the jobs to edge compute nodes. In the IoT DevCloud, you can do this using the `qsub` command.We can submit `mstar_sar_job.sh` to several different types of edge compute nodes simultaneously or just one node at a time.There are three options of `qsub` command that we use for this:- `-l` : this option lets us select the number and the type of nodes using `nodes={node_count}:{property}`. - `-F` : this option lets us send arguments to the bash script. - `-N` : this option lets us name the job so that it is easier to distinguish between them.The `-F` flag is used to pass in arguments to the job script.The [mstar_sar_job.sh](mstar_sar_job.sh) takes in 4 arguments:1. the path to the directory for the performance stats2. targeted device (e.g. CPU, GPU, MYRIAD)3. the floating precision to use for inference4. the path to the input imagesThe job scheduler will use the contents of `-F` flag as the argument to the job script.If you are curious to see the available types of nodes on the IoT DevCloud, run the following optional cell.
###Code
!pbsnodes | grep compnode | awk '{print $3}' | sort | uniq -c
###Output
_____no_output_____
###Markdown
Here, the properties describe the node, and number on the left is the number of available nodes of that architecture. Job queue submissionEach of the 5 cells below will submit a job to different edge compute nodes.The output of the cell is the `JobID` of your job, which you can use to track progress of a job.Note: You may submit all jobs at once or one at a time. After submission, they will go into a queue and run as soon as the requested compute resources become available. Tip: **Shift+Enter** will run the cell and automatically move you to the next cell. This allows you to use **Shift+Enter** multiple times to quickly run through multiple cells, including markdown cells.
###Code
os.environ["IMAGES"] = "/data/reference-sample-data/mstar-sar-classification-python/TEST"
###Output
_____no_output_____
###Markdown
Submitting to an edge compute node with an Intel® CPUIn the cell below, we submit a job to an <a href="https://software.intel.com/en-us/iot/hardware/iei-tank-dev-kit-core">IEI Tank* 870-Q170 edge node with an <a href="https://ark.intel.com/content/www/us/en/ark/products/97121/intel-core-i5-7500t-processor-6m-cache-up-to-3-30-ghz.html">Intel® Core™ i5-7500T processor. The inference workload will run the CPU.
###Code
#Submit job to the queue
job_id_core_kaby = !qsub mstar_sar_job.sh -l nodes=1:idc006kbl -F "results/ CPU FP32 $IMAGES" -N mstar_sar_core_kaby
print(job_id_core_kaby[0])
jobid_core_kaby = job_id_core_kaby[0].split('.')[0]
#Progress indicators
if job_id_core_kaby:
progressIndicator('results/', 'i_progress_'+job_id_core_kaby[0]+'.txt', "Inference", 0, 100)
else:
print("Error in job submission.")
###Output
_____no_output_____
###Markdown
Submit to an edge compute node with Intel® Xeon® Gold 6258R CPUIn the cell below, we submit a job to an edge node with an [Intel® Xeon® Gold 6258R Processor](https://ark.intel.com/content/www/us/en/ark/products/199350/intel-xeon-gold-6258r-processor-38-5m-cache-2-70-ghz.html). The inference workload will run on the CPU.
###Code
#Submit job to the queue
job_id_xeon_cascade_lake = !qsub mstar_sar_job.sh -l nodes=1:idc018 -F "results/ CPU FP32 $IMAGES" -N mstar_sar_xeon_cascade_lake
print(job_id_xeon_cascade_lake[0])
jobid_xeon_cascade_lake = job_id_xeon_cascade_lake[0].split('.')[0]
#Progress indicators
if job_id_xeon_cascade_lake:
progressIndicator('results/', 'i_progress_'+job_id_xeon_cascade_lake[0]+'.txt', "Inference", 0, 100)
else:
print("Error in job submission.")
###Output
_____no_output_____
###Markdown
Submit to an edge compute node with Intel® Xeon® E3-1268L v5 CPUIn the cell below, we submit a job to an <a href="https://software.intel.com/en-us/iot/hardware/iei-tank-dev-kit-core">IEI Tank* 870-Q170 edge node with an <a href="https://ark.intel.com/products/88178/Intel-Xeon-Processor-E3-1268L-v5-8M-Cache-2-40-GHz-">Intel® Xeon® Processor E3-1268L v5. The inference workload will run on the CPU.
###Code
#Submit job to the queue
job_id_xeon_skylake = !qsub mstar_sar_job.sh -l nodes=1:idc007xv5 -F "results/ CPU FP32 $IMAGES" -N mstar_sar_xeon_skylake
print(job_id_xeon_skylake[0])
jobid_xeon_skylake = job_id_xeon_skylake[0].split('.')[0]
#Progress indicators
if job_id_xeon_skylake:
progressIndicator('results/', 'i_progress_'+job_id_xeon_skylake[0]+'.txt', "Inference", 0, 100)
else:
print("Error in job submission.")
###Output
_____no_output_____
###Markdown
Submitting to an edge compute node with Intel® Core i7 CPU and using the onboard Intel® GPU (UHD-620)In the cell below, we submit a job to an <a href="https://www.aaeon.com/en/p/iot-gateway-node-systems-upx-edge">UPX-Edge edge node with an Intel® Core i7-8665UE. The inference workload will run on the Intel® UHD Graphics 620 card integrated with the CPU.
###Code
#Submit job to the queue
job_id_corei7_gpu = !qsub mstar_sar_job.sh -l nodes=1:idc014upxa10fx1 -F "results/ GPU FP32 $IMAGES" -N mstar_sar_corei7_gpu
print(job_id_corei7_gpu[0])
jobid_corei7_gpu = job_id_corei7_gpu[0].split('.')[0]
#Progress indicators
if job_id_corei7_gpu:
progressIndicator('results/', 'i_progress_'+job_id_corei7_gpu[0]+'.txt', "Inference", 0, 100)
else:
print("Error in job submission.")
###Output
_____no_output_____
###Markdown
Submitting to an edge compute node with Intel® Core i5 CPU and using the onboard Intel® GPU (HD-630)In the cell below, we submit a job to an <a href="https://software.intel.com/en-us/iot/hardware/iei-tank-dev-kit-core">IEI Tank* 870-Q170 edge node with an Intel® Core i5-7500T. The inference workload will run on the Intel® HD Graphics 630 card integrated with the CPU.
###Code
#Submit job to the queue
job_id_corei5gpu = !qsub mstar_sar_job.sh -l nodes=1:idc006kbl -F "results/ GPU FP32 $IMAGES" -N mstar_sar_corei5gpu
print(job_id_corei5gpu[0])
jobid_corei5gpu = job_id_corei5gpu[0].split('.')[0]
#Progress indicators
if job_id_corei5gpu:
progressIndicator('results/', 'i_progress_'+job_id_corei5gpu[0]+'.txt', "Inference", 0, 100)
else:
print("Error in job submission.")
###Output
_____no_output_____
###Markdown
Check if the jobs are doneTo check on the jobs that were submitted, use the `qstat` command.We have created a custom Jupyter widget to get live qstat update.Run the following cell to bring it up.
###Code
liveQstat()
###Output
_____no_output_____
###Markdown
You should see the jobs you have submitted (referenced by `Job ID` that gets displayed right after you submit the job(s)).There should also be an extra job in the queue named `jupyterhub-singleuser`: this job is your current Jupyter* Notebook session which is always runningThe 'S' column shows the current status. - If it is in Q state, it is in the queue waiting for available resources. - If it is in R state, it is running. - If the job is no longer listed, it means it is completed.Note: The amount of time spent in the queue depends on the number of users accessing the requested compute nodes. Once the jobs for this sample application begin to run, they should take from 1 to 5 minutes each to complete. Wait!: Please wait for the inference jobs to complete before proceeding to the next step to view results. View ResultsOnce the jobs are completed, the queue system outputs the stdout and stderr streams of each job into files with names of the form`mstar_sar_{type}.o{JobID}``mstar_sar_{type}.e{JobID}`(here, mstar_sar_{type} corresponds to the `-N` option of qsub). Assess PerformanceThe running time of each inference task is recorded in `results/stats_job_id_{ARCH}.txt`, where the subdirectory name corresponds to the architecture of the target edge compute node. Run the cell below to plot the results of all jobs side-by-side. Lower values mean better performance. Keep in mind that some architectures are optimized for the highest performance, others for low power or other metrics.
###Code
arch_list = [('core_kaby', 'Intel Core\ni5-7500T\nCPU'),
('xeon_cascade_lake', 'Intel Xeon\nGold\n 6258R\nCPU'),
('xeon_skylake', 'Intel Xeon\nE3-1268L v5\nCPU'),
('corei7_gpu', ' Intel Core\ni7-8665UE\nGPU'),
('corei5gpu', 'Intel\nCore\ni5-7500T\nGPU')]
stats_list = []
for arch, a_name in arch_list:
if 'job_id_'+arch in vars():
stats_list.append(('results/stats_'+vars()['job_id_'+arch][0]+'.txt', a_name))
else:
stats_list.append(('placeholder'+arch, a_name))
summaryPlot(stats_list, 'Architecture', 'Time per Image, seconds', 'Inference Engine Processing Time per Image', 'time' )
summaryPlot(stats_list, 'Architecture', 'Frames per second', 'Inference Engine FPS', 'fps' )
###Output
_____no_output_____
###Markdown
Telemetry DashboardOnce your submitted jobs are completed, run the cells below to generate links to view telemetry dashboards containing performance metrics for your model and target architecture.
###Code
link_t = "<a target='_blank' href='{href}'> Click here to view telemetry dashboard of the last job ran on Intel® Core™ i5-7500T</a>"
result_file = "https://devcloud.intel.com/edge/metrics/d/" + jobid_core_kaby
html = HTML(link_t.format(href=result_file))
display(html)
link_t = "<a target='_blank' href='{href}'> Click here to view metering dashboard of the last job ran on Intel® Xeon® Gold 6258R CPU</a>"
result_file = "https://devcloud.intel.com/edge/metrics/d/" + jobid_xeon_cascade_lake
html = HTML(link_t.format(href=result_file))
display(html)
link_t = "<a target='_blank' href='{href}'> Click here to view metering dashboard of the last job ran on Intel® Xeon® E3-1268L CPU</a>"
result_file = "https://devcloud.intel.com/edge/metrics/d/" + jobid_xeon_skylake
html = HTML(link_t.format(href=result_file))
display(html)
link_t = "<a target='_blank' href='{href}'> Click here to view metering dashboard of the last job ran on Intel® Core i7-8665UE CPU and using the onboard Intel® GPU</a>"
result_file = "https://devcloud.intel.com/edge/metrics/d/" + jobid_corei7_gpu
html = HTML(link_t.format(href=result_file))
display(html)
link_t = "<a target='_blank' href='{href}'> Click here to view metering dashboard of the last job ran on Intel® Core i5-7500T CPU and using the onboard Intel® GPU</a>"
result_file = "https://devcloud.intel.com/edge/metrics/d/" + jobid_corei5gpu
html = HTML(link_t.format(href=result_file))
display(html)
###Output
_____no_output_____ |
05. OOPS Part-1/2.Instance Attributes.ipynb | ###Markdown
Instance Attributes
###Code
class Student:
    pass # lets Python know we didn't forget the class body
s1 = Student()
s2 = Student()
# currently my object doesn't have any data (attributes), so let's add it
s1.name = 'Ansh'
# don't worry, we will see later how we can add data to our class so we don't
# have to repeat this every time
s2.rollno = 19162151009
s2.last_name = 'Patel'
print(s1.name)
print(s2.rollno)
print(s2.last_name)
s2.name # s2 doesn't have a name attribute, so this gives us an error
s1.__dict__ # this way you can see all the attributes in s1
# but why __dict__ function is returning dictionary because we are assigning
# data (attributes) to our object in <object.key = value> form
s2.__dict__
# another function is
hasattr(s1,'name')
hasattr(s1,'rollno')
# another function is getattr()
getattr(s1,'name')
# now to delete an attribute we will use delattr()
delattr(s1,'name')
s1.__dict__
###Output
_____no_output_____ |
yelp.ipynb | ###Markdown
Scraping Yelp Yelp is a web application created to connect people with great local businesses.It is useful for finding restaurants, bars and various types of services. The site features reviews of an active and well-informed local community.Site: www.yelp.comIn this document, I describe a scraping script that extracts information about the restaurants in San Francisco from Yelp.Also, I present a data cleaning process and a simple data analysis of restaurants by location. Scraping data from yelp.comIn this scraping script, I will take the following information for each San Francisco restaurant:* name* neighborhood* address* phone* reviews
###Code
# Import packages
import requests
from bs4 import BeautifulSoup
# Request header
headers = {"User-Agent": "Chrome/68.0.3440.106"}
# Request parameters
parameters = {
'find_desc': 'Restaurants',
'find_loc':'San Francisco CA',
'start':'0'}
# Yelp search url
url='https://www.yelp.com/search'
# Get the number of pages returned by the request (Ex: 'Page 1 of 34')
# Each page shows 30 items
response = requests.get(url, params=parameters, headers=headers)
# Create BeautifulSoup Object
soup = BeautifulSoup(response.text, 'html.parser')
# Extract the number of pages
string_pages = soup.findAll('div', attrs={'class':'page-of-pages arrange_unit arrange_unit--fill'})[0].text
number_of_pages = int(string_pages.split('of ')[1])
# Variable that stores the data
data = []
# Start page
pagination = 0
# For each pagination extract the items
for i in range(number_of_pages):
# Produces the request and saves the response
response=requests.get(url, params=parameters)
# Create a Soup object
soup = BeautifulSoup(response.text, 'html.parser')
# count 30 pages
#count = 1
print('Extract data ... page {} from {}'.format(int(pagination/30)+1, number_of_pages))
# Extract the info
for a in soup.findAll('div', attrs={'class':'media-story'}):
# Validate if the info exists
try:
address = a.find('address').text.strip()
except:
address = ''
try:
name = a.find('a', attrs={'class':'biz-name js-analytics-click'}).text
except:
name = ''
try:
reviews = a.find('span', attrs={'class':'review-count rating-qualifier'}).text.strip().split()[0]
except:
reviews = ''
try:
phone = a.find('span', attrs={'class':'biz-phone'}).text.strip()
except:
phone = ''
try:
neighborhood = a.find('span', attrs={'class':'neighborhood-str-list'}).text.strip()
except:
neighborhood = ''
# Saves info in a dictionary
info = {
'name' : name,
'reviews' : reviews,
'phone' : phone,
'address' : address,
'neighborhood' : neighborhood
}
# saves dictionary in the data variable
data.append(info)
# Each page shows 30 items, but the Soup Object find more (garbage)
# To avoid some erros in processing, This code guarantees that only 30 items will be scraped
#count += 1
#if count == 30: break
# Increment the start item ID (pagination)
pagination+=30
parameters['start'] = pagination
###Output
Extract data ... page 1 from 34
Extract data ... page 2 from 34
Extract data ... page 3 from 34
Extract data ... page 4 from 34
Extract data ... page 5 from 34
Extract data ... page 6 from 34
Extract data ... page 7 from 34
Extract data ... page 8 from 34
Extract data ... page 9 from 34
Extract data ... page 10 from 34
Extract data ... page 11 from 34
Extract data ... page 12 from 34
Extract data ... page 13 from 34
Extract data ... page 14 from 34
Extract data ... page 15 from 34
Extract data ... page 16 from 34
Extract data ... page 17 from 34
Extract data ... page 18 from 34
Extract data ... page 19 from 34
Extract data ... page 20 from 34
Extract data ... page 21 from 34
Extract data ... page 22 from 34
Extract data ... page 23 from 34
Extract data ... page 24 from 34
Extract data ... page 25 from 34
Extract data ... page 26 from 34
Extract data ... page 27 from 34
Extract data ... page 28 from 34
Extract data ... page 29 from 34
Extract data ... page 30 from 34
Extract data ... page 31 from 34
Extract data ... page 32 from 34
Extract data ... page 33 from 34
Extract data ... page 34 from 34
###Markdown
Here is a sample of the extracted data:
###Code
data[:5]
###Output
_____no_output_____
###Markdown
Now let's convert the data to a DataFrame object:
###Code
from pandas import DataFrame
restaurants = DataFrame(data)
###Output
_____no_output_____
###Markdown
The result is:
###Code
# Show the ten firsts items
restaurants.head(10)
###Output
_____no_output_____
###Markdown
Data Cleaning Before saving the data into a file, let's check whether there are any null, empty or duplicate values:
###Code
# Check if there is any null value in DataFrame
restaurants.isnull().any().any()
# Check if there is any empty value in DataFrame
no_address = len(restaurants[restaurants['address'] == ''])
no_name = len(restaurants[restaurants['name'] == ''])
no_neighborhood = len(restaurants[restaurants['neighborhood'] == ''])
no_phone = len(restaurants[restaurants['phone'] == ''])
no_reviews = len(restaurants[restaurants['reviews'] == ''])
print('Column \t\t empty values')
print('address \t ', no_address)
print('name \t\t ', no_name)
print('neighborhood \t ', no_neighborhood)
print('phone \t\t ', no_phone)
print('reviews \t ', no_reviews)
###Output
Column empty values
address 122
name 99
neighborhood 106
phone 119
reviews 99
###Markdown
As presented above, there are a lot of empty values in the data. This happens because the Yelp site does not have all the information for some restaurants. Let's remove these items:
###Code
restaurants = restaurants[restaurants['address'] != '']
restaurants = restaurants[restaurants['name'] != '']
restaurants = restaurants[restaurants['neighborhood'] != '']
restaurants = restaurants[restaurants['phone'] != '']
restaurants = restaurants[restaurants['reviews'] != '']
# Check if there is any duplicate value in DataFrame
restaurants[restaurants.duplicated()]
###Output
_____no_output_____
###Markdown
So there are several duplicated items. Most of them refer to the *Little Baobab* restaurant. This happens because Yelp promotes some restaurants (advertisements), so these promoted restaurants are shown on every page and the scraping script captures them several times. Let's remove the duplicate items:
###Code
restaurants.drop_duplicates(inplace=True)
###Output
_____no_output_____
###Markdown
For a better presentation, let's change the column order in our dataset.
###Code
restaurants = restaurants[['name', 'neighborhood', 'address', 'phone', 'reviews']]
restaurants.head(5)
###Output
_____no_output_____
###Markdown
In the end, we have 949 restaurants in our dataset. The last step is to convert the columns to the correct types.
###Code
restaurants.dtypes
###Output
_____no_output_____
###Markdown
We can see that the ***reviews*** column is treated as a string type (object), so let's convert it to a numeric type (int):
###Code
restaurants['reviews'] = restaurants['reviews'].astype(int)
restaurants.dtypes
###Output
_____no_output_____
###Markdown
Finally, let's save the data into a CSV file:
###Code
restaurants.to_csv('datasets/yelp_restaurants_sanFranscisco.csv', index=False)
###Output
_____no_output_____
###Markdown
Data Analysis Now I present a simple analysis of the restaurants data. Top 10 reviewed restaurants Let's see the ten restaurants with the most reviews:
###Code
import matplotlib.pyplot as plt
import numpy as np
% matplotlib inline
# Get top 10 restaurants
top10 = restaurants[['name', 'reviews']].sort_values('reviews',ascending=False).head(10).sort_values('reviews')
# Plot
top10.plot.barh(legend=False)
plt.yticks(np.arange(10), top10.name.values)
plt.ylabel('RESTAURANT')
plt.xlabel('REVIEWS')
plt.show()
###Output
_____no_output_____
###Markdown
*Brenda's French Soul Food* is the restaurant with the most reviews. Neighborhoods with the most restaurants Let's take the 10 neighborhoods with the most restaurants in San Francisco.
###Code
top_neighbor = restaurants['neighborhood'].value_counts().head(10)
top_neighbor.plot.barh(figsize=(7,5))
plt.xlabel('NUMBER OF RESTAURANTS')
plt.ylabel('NEIGHBORHOOD')
plt.show()
###Output
_____no_output_____
###Markdown
The *Mission* is the neighborhood with the most restaurants. But if we consider all restaurants, the proportion (%) is:
###Code
(restaurants['neighborhood'].value_counts(normalize=True).head(10)*100).plot.barh(figsize=(7,5))
plt.xlabel('PROPORTION (%)')
plt.ylabel('NEIGHBORHOOD')
plt.show()
###Output
_____no_output_____
###Markdown
Sentiment Analysis on Open Yelp Dataset Reviews
###Code
import os
os.chdir('./colab')
!pwd
###Output
/Users/kode/Desktop/yelp-review-dataset-nlp/colab
###Markdown
Artificial Intelligence Project 2020/2021 This is the notebook made by Mario Sessa for the Artificial Intelligence 2020/2021 project on Yelp sentiment analysis with binary classification. The official documentation is in the same project directory and explains, from a theoretical point of view, the decisions taken during development. The entire project is based on the model taken from the reference papers; the machine learning framework used is TensorFlow 2.6.0. Package dependencies are listed in requirements.txt, and its use is explained in the README.md in case of dependency problems. Load Dataset The project dataset is the Academic Yelp Dataset, available at https://www.yelp.com/dataset. This dataset has a huge reviews table, so we load the data in chunks, with data types taken from the Yelp dataset documentation. In this notebook we will: 1. Load the dataset 2. Data analysis 3. Pre-processing 4. Multinomial Naive Bayes model 5. Deep learning with biLSTM and LSTM models 6. BERT model 7. Store models in memory 8. Results and conclusion During every phase, we describe the procedures we follow to keep a clear view of what we did in the project.
###Code
import numpy as np
import pandas as pd
import warnings
warnings.filterwarnings('ignore')
rtypes = { "stars": np.float16,
"useful": np.int32,
"funny": np.int32,
"cool": np.int32,
}
chunk_size = 10000
path = './data/yelp_academic_dataset_review.json'
def read_chunk(path, rtypes, chunk_size):
review = pd.read_json(path, lines=True,
orient="records",
dtype=rtypes,
chunksize=chunk_size)
chunk_list = []
for chunk_review in review:
chunk_list.append(chunk_review)
df = pd.concat(chunk_list, ignore_index=True, axis=0)
return df
df = read_chunk(path, rtypes, chunk_size)
df.head()
###Output
_____no_output_____
###Markdown
Data Analysis In this section we briefly discuss properties of the dataset that can be useful for one or more models. First, we need to analyze the domain of each column to obtain the main statistics and define how to proceed in the next steps.
###Code
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.lines as lines
%matplotlib inline
df['stars'] = df['stars'].astype(int)
df['texts_length'] = df['text'].str.len()
np.asarray(df['stars'])
###Output
_____no_output_____
###Markdown
Stars The stars analysis looks at the distribution of values and types and at possible correlations with other features. According to the official documentation, the stars domain is a value between 1 (worst) and 5 (best). People generally use different evaluation standards, which may make our problem harder to classify. The aim of the analysis is to define the next processing steps according to the results obtained.
###Code
bin_df = pd.DataFrame()
bin_df['stars'] = [0 if star <= 3 else 1 for star in df['stars']]
def stars_distribution_plot(stars, title, plot_type):
plt.figure(figsize=(4,4))
if plot_type == 1:
sns.distplot(stars)
else:
stars.value_counts().plot.bar(color='blue')
plt.title(title)
def stars_length_plot(dataframe):
graph = sns.FacetGrid(data = dataframe, col = 'stars')
graph.map(plt.hist, 'texts_length', bins=50, color='blue')
def stars_correlation_heatmap(df):
sns.heatmap(df.corr(), annot=True, cmap="YlGnBu")
stars_distribution_plot(df.stars, 'Stars values distribution',1)
###Output
_____no_output_____
###Markdown
As we can see from the plot below, the star values are not equally distributed. This can be a problem during model training, so we need to flatten the distribution to get a correctly balanced analysis of the features.
###Code
stars_distribution_plot(bin_df.stars, 'Stars types distribution', 0)
###Output
_____no_output_____
###Markdown
Since the star values are unbalanced, the same imbalance propagates to the binary review types as well.
###Code
stars_length_plot(df)
###Output
_____no_output_____
###Markdown
The text length distribution across reviews with different star values is almost the same, so there is no direct connection between text length and star value. Based on the previous data, we can display a correlation heatmap to establish possible correlations with the other features.
###Code
stars_correlation_heatmap(df)
###Output
_____no_output_____
###Markdown
There are no notable correlations between text length or stars and the secondary features. In principle, when customers review a service or a place, they can also rate how cool the place is, whether the experience was funny, or whether the review is useful. However, in the Yelp reviews these parameters were largely ignored by the reviewers, so we will not consider them in model training. Texts
###Code
from wordcloud import WordCloud
from nltk.probability import FreqDist
from gensim.parsing.preprocessing import remove_stopwords
from gensim.parsing.preprocessing import STOPWORDS
from nltk.tokenize import word_tokenize
###Output
_____no_output_____
###Markdown
In the text analysis, our aim is to identify properties that can be useful for text manipulation before model training, and to adapt the texts to the modelling approach used. For instance, if we want to use a deep neural network, we need to encode the words or sentences and process them in deep and slow layers (especially for LSTM or biLSTM, the sequential models usually used for natural language processing).
###Code
def high_words_frequency_plot(df):
input_texts = c_low(df)
word_tokens = word_tokenize(input_texts)
tokens = list()
for word in word_tokens:
if word.isalpha() and word not in STOPWORDS:
tokens.append(word)
token_dist = FreqDist(tokens)
dist = pd.DataFrame(token_dist.most_common(10),columns=['Word', 'Frequency'])
fig = plt.figure(figsize=(7,4))
ax = fig.add_axes([0,0,1,1])
x = dist['Word']
y = dist['Frequency']
ax.bar(x,y)
plt.title('Terms Frequency')
plt.show()
return dist
def wordcloud_plot(df):
subset = df[:int(len(df)/100)]
input_text = c_low(subset['text'])
wordCloud = WordCloud(background_color='white', stopwords=STOPWORDS).generate(input_text)
plt.imshow(wordCloud, interpolation='bilinear')
plt.axis('off')
plt.show()
def c_low(texts):
return ' '.join(texts).lower()
def count_words(texts):
map_list = list()
map_terms = dict()
for text in texts:
word_tokens = word_tokenize(text)
for word in word_tokens:
if word not in map_terms:
map_terms[word] = 1
map_list.append(len(map_terms))
map_terms = dict()
return map_list
def stopwords_counts(texts):
map_sw = dict()
map_list = list()
for text in texts:
word_tokens = word_tokenize(text)
for word in word_tokens:
if word in STOPWORDS:
map_sw[word] = 1
map_list.append(len(map_sw))
map_sw = dict()
return map_list
%%time
limit = 100000
df_plot = pd.DataFrame(columns=['terms_counts', 'stopwords_counts'])
df_plot['terms_counts'] = count_words(df['text'][:limit])
df_plot['stopwords_counts'] = stopwords_counts(df['text'][:limit])
df_occurrences = pd.DataFrame(columns=['Terms Occurrences', 'Stopwords Occurrences'])
df_occurrences['Terms Occurrences'] = df_plot['terms_counts']
df_occurrences['Stopwords Occurrences'] = df_plot['stopwords_counts']
df_occurrences.plot(kind="kde")
plt.title('Terms and stopwords occurrences distribution')
print(f"The mean words cardinality in yelp reviews is: {int(df_plot['terms_counts'].mean())}.")
print(f"The mean stopwords cardinality in yelp reviews is: {int(df_plot['stopwords_counts'].mean())}")
###Output
The mean words cardinality in yelp reviews is: 78.
The mean stopwords cardinality in yelp reviews is: 28
###Markdown
For simplicity of visualization we only took the first 100,000 values. The mean word count is lower than 100, so we need to consider at least 100 words of text for each review. The number of words per review will be reduced for some computationally expensive models thanks to stopword removal in the pre-processing phase.
###Code
stopwords = df_plot['stopwords_counts']
terms = df_plot['terms_counts']
df_plot['stopwords_ratio'] = stopwords / terms
df_plot['stopwords_ratio'].plot(kind='kde')
plt.title('Stopwords ratio distribution')
print(f"The stopwords ratio for each yelp reviews is: {int(df_plot['stopwords_ratio'].mean()*100)}%")
###Output
The stopwords ratio for each yelp reviews is: 36%
###Markdown
The stopword ratio is centered around 36%, so removing these terms noticeably shrinks the texts and therefore the training time of complex or slow models such as deep neural networks. However, precision may drop in models that rely on dependency structure, because of the reduced term variety.
###Code
dist = high_words_frequency_plot(df['text'][:limit])
###Output
_____no_output_____
###Markdown
As we can see in the word cloud and in the term frequency plot above, the most used words refer to evaluative concepts that can support a correct binary classification. As we saw before, the most common terms tend to be positive, given the large difference between the numbers of negative and positive terms.
###Code
wordcloud_plot(df)
###Output
_____no_output_____
###Markdown
A word cloud is a visual arrangement of symbols and words; in our case it shows how the common terms are distributed across the review texts. Pre-processing This phase consists of: 1. Cleaning the dataset of unused features 2. Transforming the stars domain into a binary one 3. Balancing the dataset and returning a balanced subset 4. Lowercasing the texts 5. PoS tagging and lemmatizing the texts 6. Removing stopwords and punctuation marks. Other ad-hoc pre-processing operations are done just before the individual model trainings. Data cleaning
###Code
df_balanced = df.drop(['review_id', 'user_id', 'business_id', 'useful', 'funny', 'cool','texts_length', 'date'], axis=1)
df_balanced['text'].dropna(inplace=True)
print(f'{df_balanced.info()}')
print('----------------------')
print(f'{df_balanced.head()}')
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 8635403 entries, 0 to 8635402
Data columns (total 2 columns):
# Column Dtype
--- ------ -----
0 stars int64
1 text object
dtypes: int64(1), object(1)
memory usage: 131.8+ MB
None
----------------------
stars text
0 4 Apparently Prides Osteria had a rough summer a...
1 4 This store is pretty good. Not as great as Wal...
2 5 I called WVM on the recommendation of a couple...
3 2 I've stayed at many Marriott and Renaissance M...
4 4 The food is always great here. The service fro...
###Markdown
Stars polarization and dataset balancing
###Code
from collections import Counter
def polarize_stars(stars):
return [0 if star <= 3 else 1 for star in stars]
def balance_dataframe(limit):
balancedTexts = []
balancedLabels = []
negPosCounts = [0, 0]
for i in range(0,len(texts)):
polarity = stars[i]
if negPosCounts[polarity] < limit:
balancedTexts.append(texts[i])
balancedLabels.append(stars[i])
negPosCounts[polarity] += 1
df_balanced = pd.DataFrame()
df_balanced['text'] = balancedTexts
df_balanced['labels'] = balancedLabels
return df_balanced
texts = df['text']
stars = polarize_stars(df['stars'])
df_balanced = balance_dataframe(200000)
print(f'Length of the dataframe : {len(df_balanced)}')
print(f'Mean star values : {df_balanced.labels.mean()}')
counter = Counter(df_balanced['labels'])
print(f'Positive reviews : {counter[1]}')
print(f'Negative reviews : {counter[0]}')
###Output
Length of the dataframe : 400000
Mean star values : 0.5
Positive reviews : 200000
Negative reviews : 200000
###Markdown
Reduce texts in lowercase
###Code
def lower_case(df_text):
return [review_text.lower() for review_text in df_text]
df_balanced['text'] = lower_case(df_balanced['text'])
###Output
_____no_output_____
###Markdown
The dataset now has 400,000 rows with no unused features; the label encodes the star polarity, which is 1 if the star value is greater than 3 and 0 otherwise, and the text is lowercase. These binary values represent the positive and negative labels for the binary classification carried out in the sentiment analysis phase. The next steps of the current pre-processing procedure are stopword removal and text lemmatization. Not every model needs stopwords removed, because stopwords can be useful for context analysis when a model considers semantic features, so we save the state of the balanced dataset before removing them.
###Code
_df_balanced = df_balanced
###Output
_____no_output_____
###Markdown
PoS Tagging and Lemmatization
###Code
import nltk
from nltk.corpus import wordnet
from nltk.stem import WordNetLemmatizer
###Output
_____no_output_____
###Markdown
This step analyzes the text, assigns PoS tags and generates lemmas. Lemmatization is an expensive procedure on the balanced dataset: we associate a grammatical tag with each word to identify which kind of term is being analyzed and, according to the class it belongs to, perform an ad-hoc lemmatization to obtain the correct normalized form.
###Code
%%time
lemmatizer = WordNetLemmatizer()
def word_tagger(nltk_tag):
if nltk_tag.startswith('J'):
return wordnet.ADJ
elif nltk_tag.startswith('V'):
return wordnet.VERB
elif nltk_tag.startswith('N'):
return wordnet.NOUN
elif nltk_tag.startswith('R'):
return wordnet.ADV
else:
return None
def lemmatize(texts):
df_texts = []
for text in texts:
word_tagged = nltk.pos_tag(nltk.word_tokenize(text))
map_word_tag = list(map(lambda x: (x[0], word_tagger(x[1])), word_tagged))
lemmatized_text = []
for word, tag in map_word_tag:
if tag is None:
lemmatized_text.append(word)
else:
lemmatized_text.append(lemmatizer.lemmatize(word, tag))
lemmatized_text = " ".join(lemmatized_text)
df_texts.append(lemmatized_text)
return df_texts
df_balanced['text'] = lemmatize( df_balanced['text'])
print(f"It is the not lemmatized text: {_df_balanced['text'][0]}")
print(f"It is the lemmatized text : {df_balanced['text'][0]}")
__df_balanced = df_balanced['text']
###Output
_____no_output_____
###Markdown
Removing stopwords
###Code
def rm_sw_and_punct(texts_list):
# removing stopwords
tmp_texts = []
for text in texts_list:
tmp_texts.append(remove_stopwords(text))
texts_list = tmp_texts
# removing not alfanumeric characters
tmp_texts = []
for text in texts_list:
tmp_texts.append(''.join(ch for ch in text if ch.isalnum() or ch == ' '))
texts_list = tmp_texts
return texts_list
df_balanced['text'] = rm_sw_and_punct(df_balanced['text'])
###Output
_____no_output_____
###Markdown
The result of the pre-processing is cleaned text made of alphanumeric characters and lemmatized words, which minimizes non-conceptual variation in the text. In other words, we can now define a vectorization, i.e. an encoding algorithm that transforms the terms of a sentence into a sequence of integers based on different metrics. As we will see in the next steps, not every method uses the conceptually reduced text (stopwords removed and lemmatized). Finally, we do not tokenize the texts right now but in the next step; since we are particularly interested in model optimization, each model gets an ad-hoc vectorization or encoding technique that suits it.
###Code
import tensorflow as tf
import os
print(tf.sysconfig.get_compile_flags())
print(tf.__version__)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
###Output
['-I/usr/local/lib/python3.9/site-packages/tensorflow/include', '-D_GLIBCXX_USE_CXX11_ABI=0', '-DEIGEN_MAX_ALIGN_BYTES=64']
2.6.0
###Markdown
Supervised Learning
###Code
import time
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.metrics import f1_score, accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
###Output
_____no_output_____
###Markdown
Count vectorizer The count vectorizer converts a collection of texts, such as our reviews, into a matrix of token counts. To handle it conveniently when splitting the dataset into training and testing sets, we put it into a pandas DataFrame with one column per token and, for each row, the occurrence counts of the tokens in that text.
###Code
vect = CountVectorizer(max_features=300)
vect.fit(df_balanced.text)
Xc = vect.transform(df_balanced.text)
Xc_df = pd.DataFrame(Xc.toarray(), columns=vect.get_feature_names())
yc = df_balanced.labels
Xc = Xc_df
Xc_train, Xc_test, yc_train, yc_test = train_test_split(Xc, yc, test_size=0.3, random_state=42, stratify=yc)
print(f" Text: {df_balanced.text[0]}")
print(f" Occurrences of table: {Xc_df['table'][0]}")
###Output
Text: apparently pride osteria rough summer evidence dining room 630 friday night new blood kitchen revitalize food customer recent visit waitstaff warm unobtrusive 8 pm leave bar dining room lively beverly resident prefer later seating read mixed review late little tentative choice luckily worry food department start fried dough burrata prosciutto lovely nt offer half portion pasta order entree size split choose tagliatelle bolognese cheese pasta creamy sauce bacon asparagus grana frita good split secondo special berkshire pork secreto pork skirt steak garlic potato purée romanesco broccoli incorrectly romanesco sauce table receive bread meal reason management capable tenant apartment begin play basketball intervene comped table dessert order apple dumpling gelato tasty portion huge particularly like prefer order course order meal leave hungry depend appetite din room young crowd bar definitely 40 set recommend naysayer return improvement personally nt know glory able compare easy access downtown salem crowd month october
Occurrences of table:2
###Markdown
TF-IDF vectorizer
###Code
vect = TfidfVectorizer(max_features=300)
vect.fit(df_balanced.text)
Xf = vect.transform(df_balanced.text)
Xf_df = pd.DataFrame(Xf.toarray(), columns=vect.get_feature_names())
yf = df_balanced.labels
Xf = Xf_df
Xf_train, Xf_test, yf_train, yf_test = train_test_split(Xf, yf, test_size=0.3, random_state=42, stratify=yf)
print(f" Text: {df_balanced.text[0]}")
print(f" Tf-idf of table:{Xf_df['table'][0]}")
###Output
Text: apparently pride osteria rough summer evidence dining room 630 friday night new blood kitchen revitalize food customer recent visit waitstaff warm unobtrusive 8 pm leave bar dining room lively beverly resident prefer later seating read mixed review late little tentative choice luckily worry food department start fried dough burrata prosciutto lovely nt offer half portion pasta order entree size split choose tagliatelle bolognese cheese pasta creamy sauce bacon asparagus grana frita good split secondo special berkshire pork secreto pork skirt steak garlic potato purée romanesco broccoli incorrectly romanesco sauce table receive bread meal reason management capable tenant apartment begin play basketball intervene comped table dessert order apple dumpling gelato tasty portion huge particularly like prefer order course order meal leave hungry depend appetite din room young crowd bar definitely 40 set recommend naysayer return improvement personally nt know glory able compare easy access downtown salem crowd month october
Tf-idf of table:0.19502123479069944
###Markdown
Logistic Regression Logistic regression uses an equation as its representation, very much like linear regression. Input values $ x $ are combined linearly using weights or coefficient values to predict an output value $ y $. A key difference from linear regression is that the output value being modeled is binary (0 or 1) rather than numeric. Below is an example logistic regression equation: $ y = 1 / (1 + e^{-(\beta_{0} + \beta_{1}x)}) $ where $ y $ is the predicted output, $\beta_{0}$ is the bias or intercept term and $\beta_{1}$ is the coefficient for the single input value $ x $. Each column in the input data has an associated $\beta$ coefficient (a constant real value) that must be learned from the training data. The actual representation of the model that we would store in memory or in a file is the set of coefficients of the equation.
###Code
from sklearn.linear_model import LogisticRegression
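# Quick numeric illustration of the logistic equation from the cell above
# (not part of the original notebook): beta_0, beta_1 and x_example are made-up values.
import numpy as np
beta_0, beta_1, x_example = -1.0, 0.8, 2.5          # assumed coefficients for illustration
y_example = 1 / (1 + np.exp(-(beta_0 + beta_1 * x_example)))
print(f"sigmoid({beta_0} + {beta_1}*{x_example}) = {y_example:.3f}")  # ~0.731 -> predicted class 1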
pre_time = time.time()
log_reg = LogisticRegression().fit(Xc_train, yc_train)
y_pred_lg = log_reg.predict(Xc_test)
post_time = time.time()
print("Logistic Regression on count vectorizer")
print("---------------------------------------")
print(f"Accuracy: {round(accuracy_score(yc_test ,y_pred_lg),4)}")
print(f"F1 Score: {round(f1_score(yc_test, y_pred_lg),4)}")
print(f"Time: {round((post_time - pre_time), 4)} s\n\n")
pre_time = time.time()
log_reg = LogisticRegression().fit(Xf_train, yf_train)
y_pred_lg = log_reg.predict(Xf_test)
post_time = time.time()
print("Logistic Regression on count vectorizer")
print("---------------------------------------")
print(f"Accuracy: {round(accuracy_score(yf_test ,y_pred_lg),4)}")
print(f"F1 Score: {round(f1_score(yf_test, y_pred_lg),4)}")
print(f"Time: {round((post_time - pre_time), 4)} s")
###Output
Logistic Regression on count vectorizer
---------------------------------------
Accuracy: 0.8235
F1 Score: 0.8244
Time: 9.1424 s
Logistic Regression on tf-idf vectorizer
---------------------------------------
Accuracy: 0.8239
F1 Score: 0.8228
Time: 5.6366 s
###Markdown
Naive Bayes MultinomialNB implements the naive Bayes algorithm for multinomially distributed data, and is one of the two classic naive Bayes variants used in text classification (where the data are typically represented as word count vectors, although tf-idf vectors are also known to work well in practice). The distribution is parametrized by a vector $\theta_{y} = (\theta_{y1}, ..., \theta_{yn})$ for each class $ y $, with $ n $ features depending on the vocabulary size. Given $ \theta_{yi} $ as the probability $ P(x_{i} \mid y) $ of feature $ i $ appearing in a sample belonging to class $ y $, we can estimate the vector $ \theta_{y} $ by a smoothed version of maximum likelihood, based on frequency counts or tf-idf weights. The aim of classification is to find the class $ y $ that maximizes the probability of a sequence of words $ x_{1},...,x_{n} $. In other words, we want to compute: $ \hat{y} = \arg\max_{y} P(y)\prod_{i=1}^{n}P(x_{i} \mid y) $ where $ \arg\max $ identifies the class $ y $ that gives the maximum value of the product.
###Code
from sklearn.naive_bayes import MultinomialNB
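# Tiny illustration of the argmax formula from the cell above, computed in log space
# with made-up numbers (toy priors/likelihoods, not estimated from the Yelp data):
import numpy as np
log_prior = np.log([0.5, 0.5])                     # log P(y) for classes 0 and 1
log_theta = np.log([[0.7, 0.2, 0.1],               # log P(x_i | y=0) for a 3-word vocabulary
                    [0.2, 0.3, 0.5]])              # log P(x_i | y=1)
counts = np.array([1, 0, 3])                       # word counts of one toy document
scores = log_prior + counts @ log_theta.T          # log P(y) + sum_i count_i * log P(x_i | y)
print("predicted class:", int(np.argmax(scores)))  # -> 1 for these toy numbers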
pre_time = time.time()
nb_classifier = MultinomialNB()
nb_classifier.fit(Xf_train, yf_train)
yf_pred = nb_classifier.predict(Xf_test)
post_time = time.time()
print("Naive Bayes on tf-idf vectorizer")
print("--------------------------------")
print(f"Accuracy: {round(accuracy_score(yf_test ,yf_pred),4)}")
print(f"F1 Score: {round(f1_score(yf_test, yf_pred),4)}")
print(f"Time: {round((post_time - pre_time), 4)} s\n\n")
pre_time = time.time()
nb_classifier = MultinomialNB()
nb_classifier.fit(Xc_train, yc_train)
yc_pred = nb_classifier.predict(Xc_test)
post_time = time.time()
print("Naive Bayes on count vectorizer")
print("--------------------------------")
print(f"Accuracy: {round(accuracy_score(yc_test ,yc_pred),4)}")
print(f"F1 Score: {round(f1_score(yc_test, yc_pred),4)}")
print(f"Time: {round((post_time - pre_time), 4)} s")
###Output
Naive Bayes on tf-idf vectorizer
--------------------------------
Accuracy: 0.7983
F1 Score: 0.8005
Time: 0.3006 s
Naive Bayes on count vectorizer
--------------------------------
Accuracy: 0.7902
F1 Score: 0.8005
Time: 3.2127 s
###Markdown
Deep learning on LSTM models
###Code
from sklearn.metrics import f1_score
from tensorflow import keras
from keras.models import load_model
###Output
_____no_output_____
###Markdown
The deep learning approaches use layered architectures that learn from a numerical representation of the text dataset. To do that, we feed in the series of texts together with the corresponding labels to be processed.
###Code
df_balanced.head()
###Output
_____no_output_____
###Markdown
Callbacks and additional metrics
###Code
es_loss_callback = keras.callbacks.EarlyStopping(monitor='val_loss', patience=3, verbose=0)
###Output
_____no_output_____
###Markdown
Dataset splitting
###Code
X_train, X_test, y_train, y_test = train_test_split(df_balanced['text'], df_balanced['labels'],
test_size = 0.33,
shuffle=True,
random_state = 42)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.1, random_state=1)
###Output
_____no_output_____
###Markdown
Making vocabulary
###Code
def count_words(texts):
map_terms = dict()
for text in texts:
for word in text:
if word not in map_terms:
map_terms[word] = 1
return len(map_terms)
count = count_words(df_balanced['text'])
print(f'There are {count} different words')
vocab_size = count_words(df_balanced['text'])
encoder = tf.keras.layers.TextVectorization(
max_tokens=vocab_size)
encoder.adapt(df_balanced['text'])
###Output
_____no_output_____
###Markdown
Making the model The proposed model is a classic `Sequential` one. It starts with the encoder given by our `TextVectorization` layer, which transforms a sequence of words into a sequence of numerical values. Next there is an `Embedding` layer whose input dimension equals the number of word indexes and whose output is a sequence of trainable vectors; we set mask_zero to handle varying sequence lengths. Using `Bidirectional` runs the inputs in two directions, one from past to future and one from future to past; what distinguishes this from the unidirectional approach is that the LSTM running backwards preserves information from the future, so by combining the two hidden states we preserve, at any point in time, information from both past and future. In other words, we propagate the input forwards and backwards, combine the outputs for each term of the sentence and train on that information. Finally, the `Dense` layers take the encoded representation and compute the output shown in the prediction. We use binary cross-entropy, which measures the difference between the true label distribution and the predicted one, and the Adam optimizer, which is widely considered an excellent optimizer for gradient descent.
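For reference, binary cross-entropy on a single example with true label $ y \in \{0,1\} $ and predicted probability $ p $ is $ \mathrm{BCE} = -\big(y\log p + (1-y)\log(1-p)\big) $, averaged over the batch (standard definition, added here only for clarity).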
###Code
def bilstm_model():
model = tf.keras.Sequential([
encoder,
tf.keras.layers.Embedding(
input_dim=len(encoder.get_vocabulary()),
output_dim=64,
mask_zero=True),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(1e-4),
metrics=['accuracy'])
return model
model = bilstm_model()
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
text_vectorization (TextVect (None, None) 0
_________________________________________________________________
embedding (Embedding) (None, None, 64) 139328
_________________________________________________________________
bidirectional (Bidirectional (None, 128) 66048
_________________________________________________________________
dense (Dense) (None, 64) 8256
_________________________________________________________________
dense_1 (Dense) (None, 1) 65
=================================================================
Total params: 213,697
Trainable params: 213,697
Non-trainable params: 0
_________________________________________________________________
###Markdown
The first None in the shapes of the previous summary is the batch size, which keeps its default value; since we do not set a batch size for this model, that dimension stays None. The second dimension of the first and second layers is the text length, which matches the integer sequences (word vectors) of the input samples. Training Phase
###Code
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_val, y_val),
validation_steps=30,
callbacks = [es_loss_callback])
test_loss, test_acc = model.evaluate(X_test, y_test)
def prediction_score(pred):
results = []
for x in pred:
if x > 0:
x = 1
else:
x = 0
results.append(x)
return results
pred = model.predict(X_test)
print("Deep Neural Network using biLSTM")
print("--------------------------------")
print(f"Accuracy : {round(test_acc,4)}")
print(f"Loss : {round(test_loss,4)}")
print(f"F1 Score : {round(f1_score(list(y_test),prediction_score(pred)),4)}")
###Output
Deep Neural Network using biLSTM
--------------------------------
Accuracy : 0.8738
Loss : 0.2908
F1 Score : 0.8777
###Markdown
Results
###Code
def plot_graphs(history, metric):
plt.plot(history.history[metric])
plt.plot(history.history['val_'+metric], '')
plt.xlabel("Epochs")
plt.ylabel(metric)
plt.legend([metric, 'val_'+metric])
plt.figure(figsize=(16, 8))
plt.subplot(1, 2, 1)
plot_graphs(history, 'accuracy')
plt.ylim(None, 1)
plt.subplot(1, 2, 2)
plot_graphs(history, 'loss')
plt.ylim(0, None)
###Output
_____no_output_____
###Markdown
With the model trained, we can look at the results of the training phase. Training accuracy increased slowly, while validation accuracy stayed around a mean value of 86%. As for the loss, training loss kept decreasing over the epochs while validation loss stayed at around 20% or above; since we used only 10% of the training set for validation, changing the training/validation ratio may yield better results. Model Variations The first idea is to add a `Conv1D` and a `MaxPooling1D` between the `Embedding` layer and the `Bidirectional` one. The aim of the convolution, typically used for images rather than text, is to extract some preliminary features before the `Bidirectional` LSTM. The logic behind `Conv1D` is to use a `kernel_size` of 2, i.e. a window of 2 cells, because it is a 1-dimensional convolution; combined with `strides` of $ 1 $, the filter moves one unit at a time. This amounts to a linear analysis of pairs of embedded tokens that is much faster than a `Dense` layer with comparable precision on 2-term associations. We chose a `kernel_size` of $ 2 $ because many neighbouring words are related to each other, such as adjectives and nouns, and a convolutional (or linear) approach can capture this kind of feature.
###Code
def conv_bilstm_model():
model = tf.keras.Sequential([
encoder,
tf.keras.layers.Embedding(
input_dim=len(encoder.get_vocabulary()),
output_dim=64,
mask_zero=True),
tf.keras.layers.Conv1D(64, kernel_size=2, padding='valid', use_bias=True, strides=1),
tf.keras.layers.MaxPooling1D(),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dense(64, activation='sigmoid'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(1)
])
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(1e-4),
metrics=['accuracy'])
return model
model = conv_bilstm_model()
model.summary()
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_val, y_val),
validation_steps=30,
callbacks = [es_loss_callback])
test_loss, test_acc = model.evaluate(X_test, y_test)
pred = model.predict(X_test)
print("Deep Neural Network using convolutional biLSTM")
print("--------------------------------")
print(f"Accuracy : {round(test_acc,4)}")
print(f"Loss : {round(test_loss,4)}")
print(f"F1 Score : {round(f1_score(list(y_test),prediction_score(pred)),4)}")
plt.figure(figsize=(16, 8))
plt.subplot(1, 2, 1)
plot_graphs(history, 'accuracy')
plt.ylim(None, 1)
plt.subplot(1, 2, 2)
plot_graphs(history, 'loss')
plt.ylim(0, None)
###Output
_____no_output_____
###Markdown
Between the previous model and the current one there is not much difference. The convolution layer aims to provide an initial improvement through the 2-grams defined by the convolution kernel, and the stride of $ 1 $ connects pairs of consecutive words so that the features they form can be used by the bidirectional LSTM. Looking at the results, the validation accuracy is better than in the previous model, starting at around $ 87\% $ before returning to the previous distribution. The loss function shows no real difference and stays around the same values as before. One problem with this model is overfitting, which is mitigated by the early-stopping callback and the dropout layer.
###Code
cnn = model
###Output
_____no_output_____
###Markdown
Alternative model Another approach, different from using a `Conv1D` layer, is to stack two `Bidirectional` layers, where the first has `return_sequences` set to $ True $. This means its output keeps 3 axes, like the input, so it can be passed to another RNN layer, because it returns every output of each forward and backward LSTM cell.
###Code
def double_bilstm_model():
model = tf.keras.Sequential([
encoder,
tf.keras.layers.Embedding(len(encoder.get_vocabulary()), 64, mask_zero=True),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dropout(0.4),
tf.keras.layers.Dense(1, activation="sigmoid")
])
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(1e-4),
metrics=['accuracy'])
return model
model = double_bilstm_model()
model.summary()
history = model.fit(X_train, y_train, epochs=10, batch_size=54,
validation_data=(X_val, y_val),
validation_steps=30,
callbacks = [es_loss_callback])
test_loss, test_acc = model.evaluate(X_test, y_test)
pred = model.predict(X_test)
print("Deep Neural Network using double biLSTM")
print("--------------------------------")
print(f"Accuracy : {round(test_acc,4)}")
print(f"Loss : {round(test_loss,4)}")
print(f"F1 Score : {round(f1_score(list(y_test),prediction_score(pred)),4)}")
plt.figure(figsize=(16, 8))
plt.subplot(1, 2, 1)
plot_graphs(history, 'accuracy')
plt.ylim(None, 1)
plt.subplot(1, 2, 2)
plot_graphs(history, 'loss')
plt.ylim(0, None)
###Output
_____no_output_____
###Markdown
The results for the double bidirectional LSTM are good: validation and training accuracy are quite similar up to the 8th epoch, where the validation accuracy drops before a final increase; overall the accuracy stays around 87%, as does the test accuracy. The validation loss is roughly constant around 29%, while the training loss keeps decreasing.
###Code
seq_bilstm = model
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import re
import warnings
warnings.filterwarnings("ignore")
data = pd.read_csv('yelp-reviews-dataset/yelp.csv')
data.head(2)
data.shape
#Sorting the date in ascending order
data.sort_values(['date'], ascending=True,inplace=True)
data.reset_index(inplace =True)
#Applying the function to reduce the star column in binary values
data['sentiments'] = data['stars'].apply(lambda x:1 if x==3 or x==4 or x==5 else 0)
X = data[['date','review_id','text','type','cool','useful','funny']]
y = data[['sentiments']]
###Output
_____no_output_____
###Markdown
Text Cleaning
###Code
def decontracted(phrase):
# specific
phrase = re.sub(r"won’t", "will not", phrase)
phrase = re.sub(r"can’t", "can not", phrase)
# general
phrase = re.sub(r"n\'t", " not", phrase)
phrase = re.sub(r"\'re", " are", phrase)
phrase = re.sub(r"\'s", " is", phrase)
phrase = re.sub(r"\'d", " would", phrase)
phrase = re.sub(r"\'ll", " will", phrase)
phrase = re.sub(r"\'t", " not", phrase)
phrase = re.sub(r"\'ve", " have", phrase)
phrase = re.sub(r"\'m", " am", phrase)
return phrase
X['text'] = X['text'].apply(decontracted)  # assign back, otherwise the cleaning has no effect
def stripunc(data):
return re.sub('[^A-Za-z]+', ' ', str(data), flags=re.MULTILINE|re.DOTALL)
X['text'] = X['text'].apply(stripunc)  # assign back, otherwise the cleaning has no effect
X=X[['text','type','cool','useful','funny']]
###Output
_____no_output_____
###Markdown
Splitting
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state =0, shuffle =False)
###Output
_____no_output_____
###Markdown
Vectorizer
###Code
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(input='content',ngram_range=(1,1))
train = vectorizer.fit_transform(X_train['text'])
train[0].data
train[0].nonzero()
test = vectorizer.transform(X_test['text'])
train.shape
test.shape
import seaborn as sb
sb.heatmap(data.isnull())
###Output
_____no_output_____
###Markdown
Logistic regression
###Code
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(n_jobs =-1)
classifier.fit(train,y_train)
y_pred = classifier.predict(test)
from sklearn.metrics import accuracy_score
accuracy_score(y_pred, y_test)
###Output
_____no_output_____
###Markdown
Hyperparameter tuning
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
params = {'penalty':['l1','l2'],'C':[10**x for x in range(-4,5,1)]}
model = GridSearchCV(LogisticRegression(n_jobs=-1), param_grid=params,cv=5, n_jobs=-1, scoring='accuracy',return_train_score=True)
model.fit(train, y_train)
model.best_params_
model.best_estimator_
train_pred = model.best_estimator_.predict(train)
test_pred = model.best_estimator_.predict(test)
from sklearn import metrics
print(metrics.accuracy_score(y_train, train_pred))
print(metrics.accuracy_score(y_test, test_pred))
cv_r = model.cv_results_
model.best_estimator_
model.best_score_
CV = pd.DataFrame(cv_r)
CV.head(3)
###Output
_____no_output_____
###Markdown
Visualization
###Code
mean = CV['mean_test_score']
mt= CV['mean_train_score']
c = CV['param_C']
c2 = c[1::2]
test_l2 = mean[1::2]
test_l1 = mean[0::2]
c1 = c[0::2]
train_l2 = mt[1::2]
train_l1 = mt[0::2]
c = list(map(lambda x: np.log10(x),c2))
c1 = list(map(lambda x: np.log10(x),c1))
# plt.semilogx(c2,l2,'o-',color="g")
plt.plot(c,test_l2,'o-',color="g",label="Test l2")
plt.plot(c,train_l2,'o-',color="r", label="Train l2")
plt.scatter(c,test_l2)
plt.grid()
plt.legend()
plt.xlabel("C-values")
plt.ylabel("l2 regularizer")
plt.title("L2 Regularizer")
# plt.semilogx(c2,l2,'o-',color="g")
plt.plot(c1,test_l1,'o-',color="g",label="Test l1")
plt.plot(c1,train_l1,'o-',color="r", label="Train l1")
plt.scatter(c1,test_l1)
plt.grid()
plt.legend()
plt.xlabel("C-values")
plt.ylabel("l2 regularizer")
plt.title("L2 Regularizer")
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_pred, y_test)  # store the matrix so it can be plotted below
cm
import seaborn as sns
sns.heatmap(cm)
###Output
_____no_output_____
###Markdown
KNN Classifier
###Code
from sklearn.neighbors import KNeighborsClassifier
neigh = KNeighborsClassifier(n_neighbors=5)
neigh.fit(train, y_train)
y_pred = neigh.predict(test)
from sklearn.metrics import accuracy_score
accuracy_score(y_pred, y_test)
###Output
_____no_output_____
###Markdown
Hyperparameter tuning
###Code
from sklearn.model_selection import GridSearchCV
params = {'n_neighbors':[1,3,5,7,9,11,13,15,17,19]}
model = GridSearchCV(KNeighborsClassifier(), param_grid=params, scoring='accuracy', n_jobs=-1, verbose=5, cv=5,return_train_score=True)
model.fit(train, y_train)
train_pred = model.predict(train)
test_pred = model.predict(test)
from sklearn import metrics
print(metrics.accuracy_score(y_train, train_pred))
print(metrics.accuracy_score(y_test, test_pred))
kn = model.cv_results_
model.best_score_
model.best_params_
k_results = pd.DataFrame(kn)
k_results.head(2)
###Output
_____no_output_____
###Markdown
Visualization
###Code
test_score = k_results['mean_test_score']
param = k_results['param_n_neighbors']
train_score = k_results['mean_train_score']
# plt.semilogx(c2,l2,'o-',color="g")
plt.plot(param,test_score,'o-',color="g",label="Test_score")
plt.plot(param,train_score,'o-',color="r", label="Train_score")
plt.grid()
plt.legend()
plt.xlabel("Neighbors")
plt.ylabel("Accuracy")
plt.title("KNN_Hyperparameter tunning")
###Output
_____no_output_____
###Markdown
https://www.kaggle.com/c/yelp-recsys-2013/data
###Code
import os
import pandas as pd
import json
import copy
filepath = "/Users/brray/Downloads/yelp_training_set"
###Output
_____no_output_____
###Markdown
User
###Code
user_file = open(os.path.join(filepath, "yelp_training_set_user.json"), "rb")
user_txt = user_file.read()
user_txt[0:1000]
out_flat = []
user_dict = {}
for uline in user_txt.decode("utf-8").split("\n"):
if len(uline) == 0:
continue
uline = json.loads(uline)
uid = uline.pop('user_id')
votes = uline.pop('votes')
uline.update(votes)
uline = {"reviewer_{}".format(k): x for k, x in uline.items()}
user_dict[uid] = uline
uline2 = copy.copy(uline)
uline2['user_id'] = uid
out_flat.append(uline2)
user_dict["CR2y7yEm4X035ZMzrTtN9Q"]
df = pd.DataFrame.from_dict(out_flat)
df.to_csv("yelp_training_set_user.csv")
!head yelp_training_set_user.csv
###Output
,reviewer_average_stars,reviewer_cool,reviewer_funny,reviewer_name,reviewer_review_count,reviewer_type,reviewer_useful,user_id
0,5.0,0,0,Jim,6,user,7,CR2y7yEm4X035ZMzrTtN9Q
1,1.0,0,0,Kelle,2,user,1,_9GXoHhdxc30ujPaQwh6Ew
2,5.0,0,0,Stephanie,2,user,1,8mM-nqxjg6pT04kwcjMbsw
3,5.0,0,0,T,2,user,2,Ch6CdTR2IVaVANr-RglMOg
4,1.0,0,0,Beth,1,user,0,NZrLmHRyiHmyT1JrfzkCOA
5,3.79,36,30,Amy,19,user,45,mWx5Sxt_dx-sYBZg6RgJHQ
6,3.83,31,28,Beach,207,user,130,hryUDaRk7FLuDAYui2oldw
7,3.0,1,1,christine,2,user,0,2t6fZNLtiqsihVmeO7zggg
8,4.5,2,0,Denis,4,user,3,mn6F-eP5WU37b-iLTop2mQ
###Markdown
Business
###Code
business_file = open(os.path.join(filepath, "yelp_training_set_business.json"), "rb")
business_txt = business_file.read()
business_txt[0:1000]
out_flat = []
business_dict = {}
for bline in business_txt.decode("utf-8").split("\n"):
if len(bline) == 0:
continue
bline = json.loads(bline)
bid = bline.pop('business_id')
bline['categories'] = "; ".join(bline['categories'])
bline['neighborhoods'] = "; ".join(bline['neighborhoods'])
bline = {"business_{}".format(k): x for k, x in bline.items()}
business_dict[bid] = bline
bline2 = copy.copy(bline)
bline2['business_id'] = bid
out_flat.append(bline2)
business_dict["rncjoVoEFUJGCUoC1JgnUA"]
df = pd.DataFrame.from_dict(out_flat)
df.to_csv("yelp_training_set_business.csv")
###Output
_____no_output_____
###Markdown
review
###Code
data_file = open(os.path.join(filepath, "yelp_training_set_review.json"), "rb")
data = data_file.read()
data[0:3000]
data_out_flat = []
data_out = []
blank_reviewer = {'reviewer_average_stars': 0,
'reviewer_cool': 0,
'reviewer_funny': 0,
'reviewer_name': None,
'reviewer_review_count': 0,
'reviewer_type': 'user',
'reviewer_useful': 0,
'reviewer_blank': True}
blank_business = {'business_categories': '',
'business_city': '',
'business_full_address': '',
'business_latitude': 0,
'business_longitude': 0,
'business_name': '',
'business_neighborhoods': '',
'business_open': False,
'business_review_count': 0,
'business_stars': 0,
'business_state': '',
'business_type': 'business',
'business_blank': True}
for x in data.decode("utf8").split("\n"):
if len(x) == 0:
continue
jsn = json.loads(x)
votes = jsn.pop('votes')
jsn.update(votes)
data_out.append(jsn)
jsn['reviewer_blank'] = False
jsn.update(user_dict.get(jsn['user_id'], blank_reviewer))
jsn['business_blank'] = False
jsn.update(business_dict.get(jsn['business_id'], blank_business))
data_out_flat.append(jsn)
data_out_flat[0]
df = pd.DataFrame.from_dict(data_out)
df.to_csv("yelp_training_set_review.csv")
df = pd.DataFrame.from_dict(data_out_flat)
df
df.to_csv("yelp_training_set_flattened.csv")
df[~df['business_blank']]
###Output
_____no_output_____ |
t81_558_class_14_02_auto_encode.ipynb | ###Markdown
T81-558: Applications of Deep Neural Networks**Module 14: Other Neural Network Techniques*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 14 Video Material* Part 14.1: What is AutoML [[Video]]() [[Notebook]](t81_558_class_14_01_automl.ipynb)* **Part 14.2: Using Denoising AutoEncoders in Keras** [[Video]]() [[Notebook]](t81_558_class_14_02_auto_encode.ipynb)* Part 14.3: Anomaly Detection in Keras [[Video]]() [[Notebook]](t81_558_class_14_03_anomaly.ipynb)* Part 14.4: Training an Intrusion Detection System with KDD99 [[Video]]() [[Notebook]](t81_558_class_14_04_ids_kdd99.ipynb)* Part 14.5: The Deep Learning Technologies I am Excited About [[Video]]() [[Notebook]](t81_558_class_14_05_new_tech.ipynb) Part 14.2: Using Denoising AutoEncoders in Keras Function Approximation
###Code
import tensorflow as tf
import numpy as np
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.callbacks import EarlyStopping
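# chart_regression() is called at the end of this cell but is a helper from the
# course's utility code that is not defined in this notebook. Below is a minimal
# sketch (assumed behavior: plot predictions against expected values) so the cell
# runs standalone; it is not the course's original implementation.
import matplotlib.pyplot as plt
import pandas as pd

def chart_regression(pred, y, sort=True):
    # build a small frame of expected vs predicted values
    df = pd.DataFrame({'pred': pred, 'y': y.flatten()})
    if sort:
        df.sort_values(by='y', inplace=True)
    plt.plot(df['y'].tolist(), label='expected')
    plt.plot(df['pred'].tolist(), label='prediction')
    plt.ylabel('output')
    plt.legend()
    plt.show()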
rng = np.random.RandomState(1)
x = np.sort((360 * rng.rand(100, 1)), axis=0)
y = np.array([np.sin(x*(np.pi/180.0)).ravel()]).T
model = Sequential()
model.add(Dense(100, input_dim=x.shape[1], activation='relu'))
model.add(Dense(50, activation='relu'))
model.add(Dense(25, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x,y,verbose=0,batch_size=len(x),epochs=25000)
pred = model.predict(x)
print("Actual")
print(y[0:5])
print("Pred")
print(pred[0:5])
chart_regression(pred.flatten(),y,sort=False)
###Output
_____no_output_____
###Markdown
Multi-Output RegressionUnlike most models, neural networks can provide multiple regression outputs. This allows a neural network to generate multiple outputs for the same input. For example, the MPG data set might be trained to predict both MPG and horsepower. One area that multiple regression outputs can be useful for is auto encoders. The following diagram shows a multi-regression neural network. As you can see, there are multiple output neurons. Usually multiple output neurons are used for classification. However, in this case it is a regression neural network.The following program uses a multi-output regression to predict both [sin](https://en.wikipedia.org/wiki/Trigonometric_functionsSine.2C_cosine_and_tangent) and [cos](https://en.wikipedia.org/wiki/Trigonometric_functionsSine.2C_cosine_and_tangent) from the same input data.
###Code
from sklearn import metrics
rng = np.random.RandomState(1)
x = np.sort((360 * rng.rand(100, 1)), axis=0)
y = np.array([np.pi * np.sin(x*(np.pi/180.0)).ravel(), np.pi * np.cos(x*(np.pi/180.0)).ravel()]).T
model = Sequential()
model.add(Dense(100, input_dim=x.shape[1], activation='relu'))
model.add(Dense(50, activation='relu'))
model.add(Dense(25, activation='relu'))
model.add(Dense(2)) # Two output neurons
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x,y,verbose=0,batch_size=len(x),epochs=25000)
# Fit regression DNN model.
pred = model.predict(x)
score = np.sqrt(metrics.mean_squared_error(pred, y))
print("Score (RMSE): {}".format(score))
np.set_printoptions(suppress=True)
print("Predicted:")
print(np.array(pred[20:25]))
print("Expected:")
print(np.array(y[20:25]))
###Output
_____no_output_____
###Markdown
Simple Auto EncoderAn auto encoder is a neural network that has the same number of input neurons as it does outputs. The hidden layers of the neural network will have fewer neurons than the input/output neurons. Because there are fewer neurons, the auto-encoder must learn to encode the input to the fewer hidden neurons. The predictors (x) and output (y) are exactly the same in an auto encoder. Because of this, auto encoders are said to be unsupervised.  The following program demonstrates a very simple auto encoder that learns to encode a sequence of numbers. Fewer hidden neurons will make it much more difficult for the auto encoder to learn.
###Code
from sklearn import metrics
import numpy as np
import pandas as pd
from IPython.display import display, HTML
import tensorflow as tf
x = np.array([range(10)]).astype(np.float32)
print(x)
model = Sequential()
model.add(Dense(3, input_dim=x.shape[1], activation='relu'))
model.add(Dense(x.shape[1])) # Multiple output neurons
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x,x,verbose=0,epochs=1000)
pred = model.predict(x)
score = np.sqrt(metrics.mean_squared_error(pred,x))
print("Fold score (RMSE): {}".format(score))
np.set_printoptions(suppress=True)
print(pred)
###Output
_____no_output_____
###Markdown
Auto Encode (single image)We are now ready to build a simple image auto encoder. The program below learns an effective encoding for the image. You can see the distortions that occur.
###Code
%matplotlib inline
from PIL import Image, ImageFile
from matplotlib.pyplot import imshow
from keras.optimizers import SGD
import requests
from io import BytesIO
url = "https://upload.wikimedia.org/wikipedia/commons/9/92/Brookings.jpg"
response = requests.get(url)
img = Image.open(BytesIO(response.content))
img.load()
img = img.resize((128,128), Image.ANTIALIAS)
img_array = np.asarray(img)
img_array = img_array.flatten()
img_array = np.array([ img_array ])
img_array = img_array.astype(np.float32)
print(img_array.shape[1])
print(img_array)
model = Sequential()
model.add(Dense(10, input_dim=img_array.shape[1], activation='relu'))
model.add(Dense(img_array.shape[1])) # Multiple output neurons
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(img_array,img_array,verbose=0,epochs=20)
print("Neural network output")
pred = model.predict(img_array)
print(pred)
print(img_array)
cols,rows = img.size
img_array2 = pred[0].reshape(rows,cols,3)
img_array2 = img_array2.astype(np.uint8)
img2 = Image.fromarray(img_array2, 'RGB')
img2
###Output
_____no_output_____
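The 128x128 RGB image has 49,152 values but is squeezed through only 10 hidden neurons, so some distortion is unavoidable. One simple way to quantify it (a sketch, assuming `pred` and `img_array` from the cell above) is the reconstruction RMSE:

```python
# Reconstruction error of the single-image auto encoder (illustrative)
import numpy as np
from sklearn import metrics

rmse = np.sqrt(metrics.mean_squared_error(pred, img_array))
print("Reconstruction RMSE: {}".format(rmse))
```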
###Markdown
Standardize ImagesWhen processing several images together it is sometimes important to standardize them. The following code reads a sequence of images and resizes them so that they are all the same size and perfectly square. If an input image is not square, it is cropped first.
###Code
%matplotlib inline
from PIL import Image, ImageFile
from matplotlib.pyplot import imshow
import requests
import numpy as np
from io import BytesIO
from IPython.display import display, HTML
#url = "http://www.heatonresearch.com/images/about-jeff.jpg"
images = [
"https://upload.wikimedia.org/wikipedia/commons/9/92/Brookings.jpg",
"https://upload.wikimedia.org/wikipedia/commons/f/ff/WashU_Graham_Chapel.JPG",
"https://upload.wikimedia.org/wikipedia/commons/9/9e/SeigleHall.jpg",
"https://upload.wikimedia.org/wikipedia/commons/a/aa/WUSTLKnight.jpg",
"https://upload.wikimedia.org/wikipedia/commons/3/32/WashUABhall.jpg",
"https://upload.wikimedia.org/wikipedia/commons/c/c0/Brown_Hall.jpg",
"https://upload.wikimedia.org/wikipedia/commons/f/f4/South40.jpg"
]
def make_square(img):
cols,rows = img.size
if rows>cols:
pad = (rows-cols)/2
img = img.crop((pad,0,cols,cols))
else:
pad = (cols-rows)/2
img = img.crop((0,pad,rows,rows))
return img
x = []
for url in images:
ImageFile.LOAD_TRUNCATED_IMAGES = False
response = requests.get(url)
img = Image.open(BytesIO(response.content))
img.load()
img = make_square(img)
img = img.resize((128,128), Image.ANTIALIAS)
print(url)
display(img)
img_array = np.asarray(img)
img_array = img_array.flatten()
img_array = img_array.astype(np.float32)
img_array = (img_array-128)/128
x.append(img_array)
x = np.array(x)
print(x.shape)
###Output
_____no_output_____
###Markdown
Image Auto Encoder (multi-image)An auto encoder can also be trained on several images at once. The following code learns a single encoding scheme that is shared across all of the images loaded above.
###Code
%matplotlib inline
from PIL import Image, ImageFile
from matplotlib.pyplot import imshow
import requests
from io import BytesIO
from sklearn import metrics
import numpy as np
import pandas as pd
import tensorflow as tf
from IPython.display import display, HTML
# Fit regression DNN model.
print("Creating/Training neural network")
model = Sequential()
model.add(Dense(50, input_dim=x.shape[1], activation='relu'))
model.add(Dense(x.shape[1])) # Multiple output neurons
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x,x,verbose=0,epochs=1000)
print("Score neural network")
pred = model.predict(x)
cols,rows = img.size
for i in range(len(pred)):
print(pred[i])
img_array2 = pred[i].reshape(rows,cols,3)
img_array2 = (img_array2*128)+128
img_array2 = img_array2.astype(np.uint8)
img2 = Image.fromarray(img_array2, 'RGB')
display(img2)
###Output
_____no_output_____
###Markdown
Adding Noise to an ImageAuto encoders can handle noise. First it is important to see how to add noise to an image. There are many ways to add such noise. The following code adds random black squares to the image to produce noise.
###Code
from PIL import Image, ImageFile
from matplotlib.pyplot import imshow
import requests
from io import BytesIO
%matplotlib inline
def add_noise(a):
a2 = a.copy()
rows = a2.shape[0]
cols = a2.shape[1]
s = int(min(rows,cols)/20) # size of spot is 1/20 of smallest dimension
for i in range(100):
x = np.random.randint(cols-s)
y = np.random.randint(rows-s)
a2[y:(y+s),x:(x+s)] = 0
return a2
url = "https://upload.wikimedia.org/wikipedia/commons/9/92/Brookings.jpg"
#url = "http://www.heatonresearch.com/images/about-jeff.jpg"
response = requests.get(url)
img = Image.open(BytesIO(response.content))
img.load()
img_array = np.asarray(img)
rows = img_array.shape[0]
cols = img_array.shape[1]
print("Rows: {}, Cols: {}".format(rows,cols))
# Create new image
img2_array = img_array.astype(np.uint8)
print(img2_array.shape)
img2_array = add_noise(img2_array)
img2 = Image.fromarray(img2_array, 'RGB')
img2
###Output
_____no_output_____
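The black-square noise above is just one choice. As an illustrative alternative (not used by the cells that follow), additive Gaussian noise can be generated with a similar drop-in function:

```python
# Alternative noise model: additive Gaussian noise (illustrative only)
import numpy as np

def add_gaussian_noise(a, sigma=25.0):
    noisy = a.astype(np.float32) + np.random.normal(0.0, sigma, a.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```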
###Markdown
Denoising AutoencoderA denoising auto encoder is designed to remove noise from input signals. To do this the $y$ becomes each image/signal (just like a normal auto encoder); however, the $x$ becomes a version of $y$ with noise added. Noise is artificially added to the images to produce $x$. The following code creates 10 noisy versions of each of the images. The network is trained to convert noisy data ($x$) to the original input ($y$).
###Code
%matplotlib inline
from PIL import Image, ImageFile
from matplotlib.pyplot import imshow
import requests
import numpy as np
from io import BytesIO
from IPython.display import display, HTML
#url = "http://www.heatonresearch.com/images/about-jeff.jpg"
images = [
"https://upload.wikimedia.org/wikipedia/commons/9/92/Brookings.jpg",
"https://upload.wikimedia.org/wikipedia/commons/f/ff/WashU_Graham_Chapel.JPG",
"https://upload.wikimedia.org/wikipedia/commons/9/9e/SeigleHall.jpg",
"https://upload.wikimedia.org/wikipedia/commons/a/aa/WUSTLKnight.jpg",
"https://upload.wikimedia.org/wikipedia/commons/3/32/WashUABhall.jpg",
"https://upload.wikimedia.org/wikipedia/commons/c/c0/Brown_Hall.jpg",
"https://upload.wikimedia.org/wikipedia/commons/f/f4/South40.jpg"
]
def make_square(img):
cols,rows = img.size
if rows>cols:
pad = (rows-cols)/2
img = img.crop((pad,0,cols,cols))
else:
pad = (cols-rows)/2
img = img.crop((0,pad,rows,rows))
return img
x = []
y = []
loaded_images = []
for url in images:
ImageFile.LOAD_TRUNCATED_IMAGES = False
response = requests.get(url)
img = Image.open(BytesIO(response.content))
img.load()
img = make_square(img)
img = img.resize((128,128), Image.ANTIALIAS)
loaded_images.append(img)
print(url)
display(img)
for i in range(10):
img_array = np.asarray(img)
img_array_noise = add_noise(img_array)
img_array = img_array.flatten()
img_array = img_array.astype(np.float32)
img_array = (img_array-128)/128
img_array_noise = img_array_noise.flatten()
img_array_noise = img_array_noise.astype(np.float32)
img_array_noise = (img_array_noise-128)/128
x.append(img_array_noise)
y.append(img_array)
x = np.array(x)
y = np.array(y)
print(x.shape)
print(y.shape)
%matplotlib inline
from PIL import Image, ImageFile
from matplotlib.pyplot import imshow
import requests
from io import BytesIO
from sklearn import metrics
import numpy as np
import pandas as pd
import tensorflow as tf
from IPython.display import display, HTML
# Fit regression DNN model.
print("Creating/Training neural network")
model = Sequential()
model.add(Dense(100, input_dim=x.shape[1], activation='relu'))
model.add(Dense(50, activation='relu'))
model.add(Dense(100, activation='relu'))
model.add(Dense(x.shape[1])) # Multiple output neurons
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x,y,verbose=1,epochs=20)
print("Neural network trained")
for z in range(10):
print("*** Trial {}".format(z+1))
# Choose random image
i = np.random.randint(len(loaded_images))
img = loaded_images[i]
img_array = np.asarray(img)
cols, rows = img.size
# Add noise
img_array_noise = add_noise(img_array)
#Display noisy image
img2 = img_array_noise.astype(np.uint8)
img2 = Image.fromarray(img2, 'RGB')
print("With noise:")
display(img2)
# Present noisy image to auto encoder
img_array_noise = img_array_noise.flatten()
img_array_noise = img_array_noise.astype(np.float32)
img_array_noise = (img_array_noise-128)/128
img_array_noise = np.array([img_array_noise])
pred = model.predict(img_array_noise)[0]
# Display neural result
img_array2 = pred.reshape(rows,cols,3)
img_array2 = (img_array2*128)+128
img_array2 = img_array2.astype(np.uint8)
img2 = Image.fromarray(img_array2, 'RGB')
print("After auto encode noise removal")
display(img2)
###Output
_____no_output_____ |
FoV.ipynb | ###Markdown
Create a simple template WCS approximating that of Huntsman. Doing it manually here, but could instead just extract one from the headers of an astrometrically calibrated Huntsman Eye FITS file.
###Code
# Imports assumed to be made in earlier cells of this notebook; repeated here for clarity.
import numpy as np
import matplotlib.pyplot as plt
import astropy.units as u
from astropy.wcs import WCS
from astropy.coordinates import SkyCoord
from astroquery.ned import Ned

# SBIG STF-8300M resolution
n_pix_x = 3326
n_pix_y = 2504
# 2 dimensional WCS; simple RA, dec tangent plane projection (should be right to within a percent or so)
huntsman_wcs = WCS(naxis=2)
huntsman_wcs.wcs.ctype = ['RA---TAN', 'DEC--TAN']
# CCD resolution
huntsman_wcs._naxis1 = n_pix_x
huntsman_wcs._naxis2 = n_pix_y
# Make centre of CCD the WCS reference point
huntsman_wcs.wcs.crpix = (n_pix_x/2, n_pix_y/2)
# Approximate x, y pixel scale for Canon 2.8/400 + STF-8300M
huntsman_wcs.wcs.cdelt = ((-2.84, 2.84) * u.arcsecond).to(u.degree)
###Output
_____no_output_____
###Markdown
Check WCS by converting to FITS header format
###Code
huntsman_wcs.to_header()
###Output
_____no_output_____
###Markdown
Random field centre, close to the SCP.
###Code
field_centre = SkyCoord("4h23m15s -85d12m31.2s")
field_centre
###Output
_____no_output_____
###Markdown
Set WCS reference coordinates to field centre coordinates
###Code
huntsman_wcs.wcs.crval = [field_centre.ra.value, field_centre.dec.value]
###Output
_____no_output_____
###Markdown
Can directly calculate the RA, dec coordinates (in degrees) of the corners of the field of view. These could be used to approximate the field of view boundaries as a quadrilateral in RA, dec space or for plotting the field of view footprint as a matplotlib 'patch'.
###Code
huntsman_wcs.calc_footprint()
###Output
_____no_output_____
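As mentioned above, the corner coordinates from `calc_footprint` can be turned into a matplotlib patch. A minimal sketch (assuming `huntsman_wcs` from the cells above; RA wrap-around near the pole is ignored here):

```python
# Draw the field-of-view footprint as a matplotlib Polygon patch (sketch)
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon

corners = huntsman_wcs.calc_footprint()  # (4, 2) array of RA, dec corners in degrees
fov_patch = Polygon(corners, closed=True, fill=False, edgecolor='b')

fig, ax = plt.subplots()
ax.add_patch(fov_patch)
ax.set_xlabel('RA (deg)')
ax.set_ylabel('Dec (deg)')
ax.autoscale_view()
```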
###Markdown
For a rigorous check of whether an astronomical object is within the field of view or not we can calculate its position on the CCD in pixel coordinates and check that it falls within the bounds of the CCD. As an example, fetch a set of 42 (!) low redshift objects from NED
###Code
galaxies = Ned.query_region(field_centre, radius = 100 * u.arcminute)
# Filter to select only galaxies with a redshift, and only between 0 and 0.2
def redshift_filter(table, key_colnames):
if np.any(table['Redshift'] > 0) and np.any(table['Redshift'] < 0.2):
return True
else:
return False
galaxies_grouped = galaxies.group_by(('No.', 'Object Name'))
filtered_galaxies = galaxies_grouped.groups.filter(redshift_filter)
len(filtered_galaxies)
###Output
_____no_output_____
###Markdown
Now filter again based on pixel coordinates. First use WCS to convert RA, dec of galaxies to pixel coordinates on the CCD
###Code
filtered_galaxies_pixels = huntsman_wcs.all_world2pix(np.array((filtered_galaxies['RA(deg)'],
filtered_galaxies['DEC(deg)'])).T, 0)
###Output
_____no_output_____
###Markdown
Then check which pairs of pixel coordinates actually fall within the CCD.
###Code
# Filter to check if pixel coordinates are within the CCD
def pixels_in_fov(coords):
x_check = np.logical_and(coords[:,0] > 0, coords[:,0] < n_pix_x)
y_check = np.logical_and(coords[:,1] > 0, coords[:,1] < n_pix_y)
return np.logical_and(x_check, y_check)
good_galaxies = filtered_galaxies_pixels[np.where(pixels_in_fov(filtered_galaxies_pixels))]
bad_galaxies = filtered_galaxies_pixels[np.where(np.logical_not(pixels_in_fov(filtered_galaxies_pixels)))]
###Output
_____no_output_____
###Markdown
Finally produce a plot illustrating the results using the WCSAxes functionality integrated into astropy 1.3 (http://docs.astropy.org/en/stable/visualization/wcsaxes/index.html)
###Code
ax = plt.subplot(projection=huntsman_wcs)
# A dummy data array to highlight the Huntsman FoV
ax.imshow(np.ones((n_pix_y, n_pix_x)), cmap='cubehelix', vmin=0, vmax=1.1)
# Overlay coordinate grids in world coordinates (RA, dec)
ax.grid()
# Plot positions of galaxies. Plotting commands operate in pixel coordinates
# by default but can plot using RA, dec by using transform=ax.get_transform('world')
ax.scatter(good_galaxies.T[0], good_galaxies.T[1], marker='*', color='g')
ax.scatter(bad_galaxies.T[0], bad_galaxies.T[1], marker='*', color='r')
plt.xlim(-600, n_pix_x + 600)
plt.ylim(-600, n_pix_y + 600)
ax.coords[0].set_axislabel('Right Ascension')
ax.coords[0].set_major_formatter('hh')
ax.coords[1].set_axislabel('declination')
ax.coords[1].set_major_formatter('dd')
plt.title('Huntsman FoV centred on 4h23m15s -85d12m31.2s')
plt.gcf().set_size_inches(12,8)
plt.savefig('huntsman_fov.png')
###Output
_____no_output_____ |
AI/Comprehend/twitter_data_preparation.ipynb | ###Markdown
Amazon Comprehend - Classification Example Classify using Text FeaturesObjective: Train a model to identify tweets that require followup Input: Tweets Target: Binary. 0=Normal, 1=Followup AWS Twitter Labelled Tweets are available in this bucket: https://s3.console.aws.amazon.com/s3/buckets/aml-sample-data/?region=us-east-2 File: social-media/aml_training_dataset.csv
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Download Twitter training data
###Code
!aws s3 cp s3://aml-sample-data/social-media/aml_training_dataset.csv .
###Output
download: s3://aml-sample-data/social-media/aml_training_dataset.csv to ./aml_training_dataset.csv
###Markdown
Prepare Training and Test data
###Code
df = pd.read_csv('aml_training_dataset.csv')
print('Rows: {0}, Columns: {1}'.format(df.shape[0],df.shape[1]))
df.columns
df.head()
df = df[['text','trainingLabel']]
# trainingLabel contains the class
# Valid values are:
# 0 = Normal
# 1 = Followup
df.trainingLabel.value_counts()
tweet_normal = df['trainingLabel'] == 0
tweet_followup = df['trainingLabel'] == 1
# Some examples of tweets that are classified as requiring follow-up
for i in range(15):
print(df[tweet_followup]['text'].iloc[i])
print()
# Some examples of tweets that are classified as normal
for i in range(10):
print(df[tweet_normal]['text'].iloc[i])
print()
# Training, Validation and Test Split
# Comprehend service automatically splits the provided dataset into 80-20 ratio for training and validation
# We need to independently confirm quality of the model using a test set.
# So, let's reserve 10% of the data for test and provide the remaining 90% to Comprehend service
# Training & Validation = 90% of the data
# Test = 10% of the data
# Randomize the datset
np.random.seed(5)
l = list(df.index)
np.random.shuffle(l)
df = df.iloc[l]
rows = df.shape[0]
train = int(.9 * rows)
test = rows - train
rows, train, test
df_train = df[:train]
df_test = df[train:]
df_train.trainingLabel.value_counts()
df_test.trainingLabel.value_counts()
df_train.columns
df_train.to_csv('twitter_train.csv',
index=False,
header=False,
columns=['trainingLabel','text'])
df_test.to_csv('twitter_test_with_label.csv',
index=False,
header=False,
columns=['trainingLabel','text'])
df_test.to_csv('twitter_test_without_label.csv',
index=False,
header=False,
columns=['text'])
###Output
_____no_output_____
###Markdown
Upload to S3 Specify your bucket name. Replace 'chandra-ml-sagemaker' with your bucket
###Code
!aws s3 cp twitter_train.csv s3://chandra-ml-sagemaker/twitter/train/twitter_train.csv
!aws s3 cp twitter_test_without_label.csv s3://chandra-ml-sagemaker/twitter/test/twitter_test_without_label.csv
###Output
Completed 111.0 KiB/111.0 KiB (1.4 MiB/s) with 1 file(s) remaining
upload: ./twitter_test_without_label.csv to s3://chandra-ml-sagemaker/twitter/test/twitter_test_without_label.csv
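With the training CSV in S3, the next step (outside this notebook) would be to launch a Comprehend custom classification training job. A hedged sketch using boto3 is shown below; the classifier name, role ARN and region are placeholders and the bucket must match yours:

```python
# Launch a Comprehend custom classifier training job (illustrative sketch)
import boto3

comprehend = boto3.client('comprehend', region_name='us-east-1')
response = comprehend.create_document_classifier(
    DocumentClassifierName='twitter-followup-classifier',
    DataAccessRoleArn='arn:aws:iam::111122223333:role/ComprehendDataAccessRole',
    InputDataConfig={'S3Uri': 's3://chandra-ml-sagemaker/twitter/train/'},
    LanguageCode='en')
print(response['DocumentClassifierArn'])
```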
|
Notebooks/02_analyses/Fig3_Genes.ipynb | ###Markdown
Sparsity measures of the genes* First, we select the top n (n defined by the user) genes for each feature* Combine them to make a smaller dataset* Run the same analyses of Figure 2 (provided in separate notebooks) on them
###Code
import numpy as np
import pandas as pd
'''Making a DataFrame of gene weights and genes information'''
# Loading the weights array
weights = np.load('files/SFT_100weights.npy')
n_features = weights.shape[0]
# Loading the genes dataframe
genes_df = pd.read_csv('files/genes_list.csv')
# Inputting the weights to this dataframe:
for i in range(n_features):
genes_df['feature_' + str(i)] = abs(weights[i, :])
'''Selecting top n genes of each feature'''
# Selecting top n genes for each feature
n_top = 12
all_top_list = []
for i in range(n_features):
# Sorting by the weights in each column
sorted_df = genes_df.sort_values(by=['feature_'+str(i)], ascending=False, na_position='last')
sorted_df.reset_index(drop=True, inplace=True)
# Getting the top n list and adding to the main list
gene_list = sorted_df['Gene'].loc[:n_top-1].tolist()
all_top_list.extend(gene_list)
# Limiting the dataframe to desired genes
select_df = genes_df[genes_df['Gene'].isin(all_top_list)]
indices = select_df.index.tolist()
# Loading the voxel * gene matrix and filtering it for top genes
X = np.load('gene_expression_array.npy')
X_select = X[:,indices]
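# Optional sanity check (not part of the original analysis): report the shape of the
# reduced matrix and a simple sparsity summary of the selected genes.
print('Selected matrix shape:', X_select.shape)
print('Fraction of zero expression values:', np.mean(np.isclose(X_select, 0)))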
###Output
_____no_output_____ |
Notebooks/*augmented_shufflenet_v2_x1_5.ipynb | ###Markdown
Code for the paper: Efficient and Mobile Deep Learning Architectures for Fast Identification of Bacterial Strains in Resource-Constrained Devices Architecture: ShuffleNet v2x1.5 Data: Augmented + Cross-Validation Loading libraries
###Code
# Imports here
import os
import torch
import numpy as np
import torch.nn.functional as F
import pandas as pd
import time
import numpy as np
from PIL import Image
from sklearn.metrics import f1_score, precision_score, recall_score, classification_report
from sklearn.model_selection import KFold
from torch import nn
from torch import optim
from torch.utils.data import SubsetRandomSampler
from torchvision import datasets, transforms, models
from pytorch_model_summary import summary
# Archs not in Pytorch
from efficientnet_pytorch import EfficientNet
# External functions
from scripts.utils import *
print(torch.__version__)
import sys
print(sys.version)
###Output
1.5.1
3.8.5 (default, Jul 28 2020, 12:59:40)
[GCC 9.3.0]
###Markdown
Data paths and hyperparameters Hyperparameters and dataset details.
###Code
# Dataset details
dataset_version = 'original' # original or augmented
img_shape = (224,224)
img_size = str(img_shape[0])+"x"+str(img_shape[1])
# Root directory of dataset
data_dir = '/home/yibbtstll/venvs/pytorch_gpu/CySDeepBacterial/Dataset/DIBaS_augmented/'
train_batch_size = 64
val_test_batch_size = 32
feature_extract = False
pretrained = False
h_epochs = 15
kfolds = 10
###Output
_____no_output_____
###Markdown
Data preparation and loading Defining transforms and creating dataloaders
###Code
# Define transforms for input data
training_transforms = transforms.Compose([transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
# TODO: Load the datasets with ImageFolder
total_set = datasets.ImageFolder(data_dir, transform=training_transforms)
# Defining folds
splits = KFold(n_splits = kfolds, shuffle = True, random_state = 42)
###Output
_____no_output_____
###Markdown
Visualizing the target classes in dataset
###Code
train_labels = {value : key for (key, value) in total_set.class_to_idx.items()}
print(len(train_labels))
print(train_labels)
###Output
32
{0: 'Acinetobacter.baumanii', 1: 'Actinomyces.israeli', 2: 'Bacteroides.fragilis', 3: 'Bifidobacterium.spp', 4: 'Clostridium.perfringens', 5: 'Enterococcus.faecalis', 6: 'Enterococcus.faecium', 7: 'Escherichia.coli', 8: 'Fusobacterium', 9: 'Lactobacillus.casei', 10: 'Lactobacillus.crispatus', 11: 'Lactobacillus.delbrueckii', 12: 'Lactobacillus.gasseri', 13: 'Lactobacillus.jehnsenii', 14: 'Lactobacillus.johnsonii', 15: 'Lactobacillus.paracasei', 16: 'Lactobacillus.plantarum', 17: 'Lactobacillus.reuteri', 18: 'Lactobacillus.rhamnosus', 19: 'Lactobacillus.salivarius', 20: 'Listeria.monocytogenes', 21: 'Micrococcus.spp', 22: 'Neisseria.gonorrhoeae', 23: 'Porfyromonas.gingivalis', 24: 'Propionibacterium.acnes', 25: 'Proteus', 26: 'Pseudomonas.aeruginosa', 27: 'Staphylococcus.aureus', 28: 'Staphylococcus.epidermidis', 29: 'Staphylococcus.saprophiticus', 30: 'Streptococcus.agalactiae', 31: 'Veionella'}
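To check how balanced the classes are, a per-class sample count can be printed. A small sketch (assuming `total_set` and `train_labels` from the cells above; `ImageFolder` keeps every (path, label) pair in `samples`):

```python
# Count samples per class (illustrative)
from collections import Counter

counts = Counter(label for _, label in total_set.samples)
for idx in sorted(counts):
    print(train_labels[idx], counts[idx])
```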
###Markdown
Model definition and initialization Freezing pre-trained parameters, fine-tuning the classifier to output 32 classes.
###Code
# Freeze pretrained model parameters to avoid backpropogating through them
def set_parameter_requires_grad(model, feature_extracting):
if feature_extracting:
print("Setting grad to false.")
for param in model.parameters():
param.requires_grad = False
return model
def get_device():
# Model and criterion to GPU
if torch.cuda.is_available():
return 'cuda'
else:
return 'cpu'
models.shufflenet_v2_x1_0(pretrained=False)
def load_model():
# Transfer Learning
model = models.shufflenet_v2_x1_5(pretrained=pretrained)
# Mode
model = set_parameter_requires_grad(model, feature_extract)
# Fine tuning
# Build custom classifier
model.fc = nn.Linear(in_features=1024,
out_features=32)
return model.to(get_device())
def create_optimizer(model):
# Parameters to update
params_to_update = model.parameters()
if feature_extract:
params_to_update = []
for param in model.parameters():
if param.requires_grad == True:
params_to_update.append(param)
else:
n_params = 0
for param in model.parameters():
if param.requires_grad == True:
n_params += 1
# Loss function and gradient descent
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(params_to_update,
lr=0.0001,
weight_decay=0.00004)
print("Running on device:", get_device())
return criterion.to(get_device()), model, optimizer
###Output
_____no_output_____
###Markdown
Training, validation and test functions
###Code
# Variables to store fold scores
train_acc = []
test_top1_acc = []
test_top5_acc = []
test_precision = []
test_recall = []
test_f1 = []
times = []
for fold, (train_idx, valid_idx) in enumerate(splits.split(total_set)):
start_time = time.time()
print('Fold : {}'.format(fold))
# Train and val samplers
train_sampler = SubsetRandomSampler(train_idx)
print("Samples in training:", len(train_sampler))
valid_sampler = SubsetRandomSampler(valid_idx)
print("Samples in test:", len(valid_sampler))
# Train and val loaders
train_loader = torch.utils.data.DataLoader(
total_set, batch_size=train_batch_size, sampler=train_sampler)
valid_loader = torch.utils.data.DataLoader(
total_set, batch_size=1, sampler=valid_sampler)
device = get_device()
criterion, model, optimizer = create_optimizer(load_model())
# Training
for epoch in range(h_epochs):
model.train()
running_loss = 0.0
running_corrects = 0
trunning_corrects = 0
for inputs, labels in train_loader:
inputs = inputs.to(device)
labels = labels.to(device)
optimizer.zero_grad()
with torch.set_grad_enabled(True):
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
running_loss += loss.item() * inputs.size(0)
running_corrects += (preds == labels).sum()
trunning_corrects += preds.size(0)
epoch_loss = running_loss / trunning_corrects
epoch_acc = (running_corrects.double()*100) / trunning_corrects
train_acc.append(epoch_acc.item())
print('\t\t Training: Epoch({}) - Loss: {:.4f}, Acc: {:.4f}'.format(epoch, epoch_loss, epoch_acc))
# Validation
model.eval()
vrunning_loss = 0.0
vrunning_corrects = 0
num_samples = 0
for data, labels in valid_loader:
data = data.to(device)
labels = labels.to(device)
optimizer.zero_grad()
with torch.no_grad():
outputs = model(data)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
vrunning_loss += loss.item() * data.size(0)
vrunning_corrects += (preds == labels).sum()
num_samples += preds.size(0)
vepoch_loss = vrunning_loss/num_samples
vepoch_acc = (vrunning_corrects.double() * 100)/num_samples
print('\t\t Validation({}) - Loss: {:.4f}, Acc: {:.4f}'.format(epoch, vepoch_loss, vepoch_acc))
# Calculating and appending scores to this fold
model.class_to_idx = total_set.class_to_idx
scores = get_scores(model, valid_loader)
test_top1_acc.append(scores[0])
test_top5_acc.append(scores[1])
test_precision.append(scores[2])
test_recall.append(scores[3])
test_f1.append(scores[4])
time_fold = time.time() - start_time
times.append(time_fold)
print("Total time per fold: %s seconds." %(time_fold))
print("Train accuracy average: ", np.mean(train_acc) / 100)
print("Top-1 test accuracy average: ", np.mean(test_top1_acc))
print("Top-5 test accuracy average: ", np.mean(test_top5_acc))
print("Weighted Precision test accuracy average: ", np.mean(test_precision))
print("Weighted Recall test accuracy average: ", np.mean(test_recall))
print("Weighted F1 test accuracy average: ", np.mean(test_f1))
print("Average time per fold (seconds):", np.mean(times))
###Output
Train accuracy average: 0.8426502062052831
Top-1 test accuracy average: 0.9001793460932745
Top-5 test accuracy average: 0.9968014456727126
Weighted Precision test accuracy average: 0.9073827126091827
Weighted Recall test accuracy average: 0.9001793460932745
Weighted F1 test accuracy average: 0.9000450217544824
Average time per fold (seconds): 1283.9797766447068
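For deployment on a resource-constrained device, the trained network could be exported with TorchScript. This is only a sketch (not part of the paper's pipeline) and assumes `model` still holds the last trained fold:

```python
# Export the last trained model with TorchScript for mobile/edge inference (sketch)
model = model.eval().cpu()
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)
traced.save("shufflenet_v2_x1_5_dibas.pt")
```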
|
machine-learning-notebooks/01-convert-model-containerize.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Convert Model and Containerize* Create Workspace* Register Demo model* Build Image * Create Hub and Prepare for device and deploy* Deploy Model 
###Code
# For prod
!source activate py36 && pip install azureml-core azureml-contrib-iot azure-mgmt-containerregistry azure-cli
!source activate py36 && az extension add --name azure-cli-iot-ext
import os
# Ensure you are running from the correct environment
import sys
sys.executable
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
from azureml.core import Workspace
###Output
_____no_output_____
###Markdown
Create a Workspace Change this cell from markdown to code and run this if you need to create a workspace ws=Workspace.create(subscription_id="your-subscription-ID-goes-here", resource_group="your-resource-group-goes-here", name="your-ML-workspace-name-goes-here", location="location-of-your-ML-workspace") ws.write_config()
###Code
#Initialize Workspace
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n')
###Output
_____no_output_____
###Markdown
ParametersEnter your parameters for the next automated steps:- Creation of a workspace, - Creation of IoT Hub, - Device registration, - Demo model registration,- Creation of Model Image for Azure IoT Edge Module deployment - Device Deployment setupAfter this step you will need to copy your connection string and provide it to your device
###Code
# Parameter list
# Pick a name for what you want to call the module you deploy to the camera
module_name = "visionsample"
# Resource group in Azure
resource_group_name= ws.resource_group
iot_rg="vaidk_"+resource_group_name
# Azure region where your services will be provisioned
iot_location="eastus2"
# Azure IoT Hub name
iot_hub_name="iothub-"+ ws.get_details()["name"]
# Pick a name for your camera
iot_device_id="vadik_"+ ws.get_details()["name"]
# Pick a name for the deployment configuration
iot_deployment_id="demovaidk"
###Output
_____no_output_____
###Markdown
MobileNet ImageNet model This step uses the trained model from your local folder in the Notebooks shell. There are three files in this folder: (i) the model_name.pb file, (ii) the lables_names.txt and (iii) va-snpe-engine-library_config.json. This va-snpe-engine-library_config file is used by the camera when loading the model into the inference engine. Key fields are: Engine: the network used by the model (0: MobileNet, 1: MobileNet-SSD, 2: SqueezeNet); NetworkIO: 0: CPU (default), 1: DSP; Runtime: the HW option to use for inferencing (0: CPU, 1: DSP, 2: GPU); ConfThreshold: the threshold above which bounding boxes or inferencing results are shown on screen.
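For illustration only, a config with the fields described above might be written like this; the values are example assumptions, not taken from this repository:

```python
# Example va-snpe-engine-library_config.json contents (illustrative values)
import json

va_config = {
    "Engine": 0,           # 0: MobileNet, 1: MobileNet-SSD, 2: SqueezeNet
    "NetworkIO": 0,        # 0: CPU (default), 1: DSP
    "Runtime": 1,          # 0: CPU, 1: DSP, 2: GPU
    "ConfThreshold": 0.75  # threshold for displaying results on screen (example value)
}
with open("models/mobilenet-imagenet/va-snpe-engine-library_config.json", "w") as f:
    json.dump(va_config, f, indent=2)
```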
###Code
from azureml.core.model import Model
model = Model.register(model_path = "models/mobilenet-imagenet/",
model_name = "imagenet_2_frozen.pb",
tags = {'Device': "peabody", 'type': "mobilenet", 'area': "iot", 'version': "1.0"},
description = "TF SNPE quantization friendly MobileNet",
workspace = ws)
print(model.name, model.url, model.version, model.id, model.created_time)
print(model.name, model.url, model.version, model.id, model.created_time)
# You can download the model to see what was registered
# model.download()
###Output
_____no_output_____
###Markdown
Convert Model to run on the Vision AI Dev Kit
###Code
from azureml.contrib.iot.model_converters import SnpeConverter
# submit a compile request
compile_request = SnpeConverter.convert_tf_model(
ws,
source_model=model,
input_node="input",
input_dims="1,224,224,3",
outputs_nodes = ["MobilenetV1/Predictions/Reshape_1"],
allow_unconsumed_nodes = True)
print(compile_request._operation_id)
# Wait for the request to complete
compile_request.wait_for_completion(show_output=True, timeout=60)
# Get the converted model
converted_model = compile_request.result
print(converted_model.name, converted_model.url, converted_model.version, converted_model.id, converted_model.created_time)
# You can downlaod the model to see what the conversion result was
# converted_model.download()
###Output
_____no_output_____
###Markdown
Build the container image for IoT to deploy to the Vision AI Dev Kit
###Code
# NEW version of main.py
from azureml.core.image import Image
from azureml.contrib.iot import IotContainerImage
print ('We will create an image for you now ...')
image_config = IotContainerImage.image_configuration(
architecture="arm32v7",
execution_script="main.py",
dependencies=["camera.py","iot.py","ipcprovider.py","utility.py", "frame_iterators.py"],
docker_file="Dockerfile",
tags = ["mobilenet"],
description = "Updated MobileNet trained on ImageNet")
image = Image.create(name = "mobilenetimagenet",
# this is the model object
models = [converted_model],
image_config = image_config,
workspace = ws)
image.wait_for_creation(show_output = True)
# Getting your container details
container_reg = ws.get_details()["containerRegistry"]
reg_name=container_reg.split("/")[-1]
container_url = "\"" + image.image_location + "\","
subscription_id = ws.subscription_id
print('{}'.format(image.image_location))
print('{}'.format(reg_name))
print('{}'.format(subscription_id))
from azure.mgmt.containerregistry import ContainerRegistryManagementClient
from azure.mgmt import containerregistry
client = ContainerRegistryManagementClient(ws._auth,subscription_id)
result= client.registries.list_credentials(resource_group_name, reg_name, custom_headers=None, raw=False)
username = result.username
password = result.passwords[0].value
file = open('./deployment-template.json')
contents = file.read()
contents = contents.replace('__MODULE_NAME', module_name)
contents = contents.replace('__REGISTRY_NAME', reg_name)
contents = contents.replace('__REGISTRY_USER_NAME', username)
contents = contents.replace('__REGISTRY_PASSWORD', password)
contents = contents.replace('__REGISTRY_IMAGE_LOCATION', image.image_location)
with open('./deployment.json', 'wt', encoding='utf-8') as output_file:
output_file.write(contents)
print ( 'We will create your HUB now')
with open ('setsub','w+') as command1:
command1.write('az account set --subscription ' + subscription_id)
!sh setsub
with open ('create','w+') as command2:
regcommand="\n echo Installing Extension ... \naz extension add --name azure-cli-iot-ext \n"+ "\n echo CREATING RG "+iot_rg+"... \naz group create --name "+ iot_rg +" --location "+ iot_location+ "\n" +"\n echo CREATING HUB "+iot_hub_name+"... \naz iot hub create --name "+ iot_hub_name + " --resource-group "+ iot_rg +" --sku S1"
command2.write(regcommand +"\n echo CREATING DEVICE ID "+iot_device_id+"... \n az iot hub device-identity create --device-id "+ iot_device_id + " --hub-name " + iot_hub_name +" --edge-enabled")
!sh create
with open ('deploy','w+')as command3:
createcommand="\n echo DEPLOYING "+iot_deployment_id+" ... \naz iot edge deployment create --deployment-id \"" + iot_deployment_id + "\" --content \"deployment.json\" --hub-name \"" + iot_hub_name +"\" --target-condition \"deviceId='"+iot_device_id+"'\" --priority 1"
command3.write(createcommand)
!sh deploy
with open ('details','w+')as command4:
get_string="\n echo THIS IS YOUR CONNECTION STRING ... \naz iot hub device-identity show-connection-string --device-id \"" + iot_device_id + "\" --hub-name \"" + iot_hub_name+"\""
command4.write(get_string)
print("COPY THIS CONNECTION STRING FOR YOUR DEVICE")
!sh details
###Output
_____no_output_____
###Markdown
Deploy image as an IoT module  Set subscription to the same as your workspace
###Code
%%writefile ./setsub
az account set --subscription
iot_sub=ws.subscription_id
%store iot_sub >> setsub
!sh setsub
print ('{}'.format(iot_sub))
###Output
_____no_output_____
###Markdown
Provision a new Azure IoT Hub
###Code
#RG and location to create hub
iot_rg="vaidk_"+resource_group_name
iot_location=ws.get_details()["location"]
#temp to delete
iot_location="eastus2"
iot_hub_name="iothub-"+ ws.get_details()["name"]
iot_device_id="vadik_"+ ws.get_details()["name"]
iot_deployment_id="demovaidk"
print('{}'.format(iot_hub_name))
%%writefile ./create
#Command to create hub and device
# Adding Intialization steps
regcommand="\n echo Installing Extension ... \naz extension add --name azure-cli-iot-ext \n"+ "\n echo CREATING RG "+iot_rg+"... \naz group create --name "+ iot_rg +" --location "+ iot_location+ "\n" +"\n echo CREATING HUB "+iot_hub_name+"... \naz iot hub create --name "+ iot_hub_name + " --resource-group "+ iot_rg +" --sku S1"
#print('{}'.format(regcommand))
%store regcommand >> create
###Output
_____no_output_____
###Markdown
Create Identity for your device
###Code
#Adding Device ID
create_device="\n echo CREATING DEVICE ID "+iot_device_id+"... \n az iot hub device-identity create --device-id "+ iot_device_id + " --hub-name " + iot_hub_name +" --edge-enabled"
#print('{}'.format(create_device))
%store create_device >> create
#Create command and vonfigure device
!sh create
###Output
_____no_output_____
###Markdown
Create Deployment
###Code
%%writefile ./deploy
#Command to create hub and device
#Add deployment command
deploy_device="\n echo DEPLOYING "+iot_deployment_id+" ... \naz iot edge deployment create --deployment-id \"" + iot_deployment_id + "\" --content \"deployment.json\" --hub-name \"" + iot_hub_name +"\" --target-condition \"deviceId='"+iot_device_id+"'\" --priority 1"
#print('{}'.format(deploy_device))
%store deploy_device >> deploy
#run deployment to stage all work for when the model is ready
!sh deploy
###Output
_____no_output_____
###Markdown
Use this conenction string on your camera to Initialize it
###Code
%%writefile ./showdetails
#Command to create hub and device
#Add deployment command
get_string="\n echo THIS IS YOUR CONNECTION STRING ... \naz iot hub device-identity show-connection-string --device-id \"" + iot_device_id + "\" --hub-name \"" + iot_hub_name+"\""
#print('{}'.format(get_string))
%store get_string >> showdetails
!sh showdetails
!az account set --subscription 5f08d643-1910-4a38-a7c7-84a39d4f42e0
!az iot hub show --name hub-peabody
###Output
_____no_output_____ |
codeup0821.ipynb | ###Markdown
codeup 4564 : https://codeup.kr/problem.php?id=4564계단오르기
###Code
%%writefile 4564.cpp
#include<stdio.h>
int stair[301] = { 0, };
int memo[301] = { 0, };
int n;
int r(int a, int b)
{
if (a == n)
return stair[n];
if (a > n || b > 2)
return -3000000;
if (b > 1)
return stair[a] + r(a + 2, 1);
if (memo[a])
return memo[a];
int i, j;
i = stair[a] + r(a + 1, ++b);
j = stair[a] + r(a + 2, 1);
if (i < j)
return memo[a] = j;
else
return memo[a] = i;
}
int main()
{
int i;
for (i = scanf("%d", &n);i <= n;i++)
{
scanf("%d", &stair[i]);
}
printf("%d",r(0, 0));
return 0;
}
###Output
Writing 4564.cpp
###Markdown
code up : 1930 supersum
###Code
%%writefile 1930.cpp
#include<stdio.h>
int supersum(int k, int n)
{
if (k == 0)
return n;
else {
return r(k - 1, n);
}
}
int r(int k, int n)
{
if (!n)
return 0;
else {
return r(k, n - 1) + supersum(k, n);
}
}
int main()
{
int k, n;
while (scanf("%d %d", &k, &n) != EOF)
printf("%d\n", supersum(k, n));
}
###Output
_____no_output_____ |
jupyter/SparkOCRGreyBackground.ipynb | ###Markdown
OCR example for recognizing text from a table where some cells have a grey background Install spark-ocr python packageNeed to specify the path to `spark-ocr-assembly-[version].jar` or `secret`
###Code
secret = ""
license = ""
version = secret.split("-")[0]
spark_ocr_jar_path = "../../target/scala-2.11"
%%bash
if python -c 'import google.colab' &> /dev/null; then
echo "Run on Google Colab!"
echo "Install Open JDK"
apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
export JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64"
export PATH="$JAVA_HOME/bin:$PATH"
java -version
fi
import sys
import os
if 'google.colab' in sys.modules:
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
# install from PYPI using secret
%pip install spark-ocr==$version --user --extra-index-url=https://pypi.johnsnowlabs.com/$secret --upgrade
# or install from local path
# %pip install --user ../../python/dist/spark-ocr-1.4.0rc1.tar.gz
###Output
Processing /Users/nmelnik/IdeaProjects/spark-ocr/python/dist/spark-ocr-1.4.0rc1.tar.gz
Requirement already satisfied: numpy==1.17.4 in /Users/nmelnik/Library/Python/3.7/lib/python/site-packages (from spark-ocr==1.4.0rc1) (1.17.4)
Requirement already satisfied: pillow==6.2.1 in /Users/nmelnik/Library/Python/3.7/lib/python/site-packages (from spark-ocr==1.4.0rc1) (6.2.1)
Requirement already satisfied: py4j==0.10.7 in /Users/nmelnik/Library/Python/3.7/lib/python/site-packages (from spark-ocr==1.4.0rc1) (0.10.7)
Requirement already satisfied: pyspark==2.4.4 in /Users/nmelnik/Library/Python/3.7/lib/python/site-packages (from spark-ocr==1.4.0rc1) (2.4.4)
Requirement already satisfied: python-levenshtein==0.12.0 in /Users/nmelnik/Library/Python/3.7/lib/python/site-packages (from spark-ocr==1.4.0rc1) (0.12.0)
Requirement already satisfied: scikit-image==0.16.2 in /Users/nmelnik/Library/Python/3.7/lib/python/site-packages (from spark-ocr==1.4.0rc1) (0.16.2)
Requirement already satisfied: implicits==1.0.2 in /Users/nmelnik/Library/Python/3.7/lib/python/site-packages (from spark-ocr==1.4.0rc1) (1.0.2)
Requirement already satisfied: setuptools in /Users/nmelnik/Library/Python/3.7/lib/python/site-packages (from python-levenshtein==0.12.0->spark-ocr==1.4.0rc1) (46.0.0)
Requirement already satisfied: matplotlib!=3.0.0,>=2.0.0 in /Users/nmelnik/Library/Python/3.7/lib/python/site-packages (from scikit-image==0.16.2->spark-ocr==1.4.0rc1) (3.2.0)
Requirement already satisfied: networkx>=2.0 in /Users/nmelnik/Library/Python/3.7/lib/python/site-packages (from scikit-image==0.16.2->spark-ocr==1.4.0rc1) (2.4)
Requirement already satisfied: PyWavelets>=0.4.0 in /Users/nmelnik/Library/Python/3.7/lib/python/site-packages (from scikit-image==0.16.2->spark-ocr==1.4.0rc1) (1.1.1)
Requirement already satisfied: scipy>=0.19.0 in /Users/nmelnik/Library/Python/3.7/lib/python/site-packages (from scikit-image==0.16.2->spark-ocr==1.4.0rc1) (1.4.1)
Requirement already satisfied: imageio>=2.3.0 in /Users/nmelnik/Library/Python/3.7/lib/python/site-packages (from scikit-image==0.16.2->spark-ocr==1.4.0rc1) (2.8.0)
Requirement already satisfied: cycler>=0.10 in /Users/nmelnik/Library/Python/3.7/lib/python/site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image==0.16.2->spark-ocr==1.4.0rc1) (0.10.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /Users/nmelnik/Library/Python/3.7/lib/python/site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image==0.16.2->spark-ocr==1.4.0rc1) (2.4.6)
Requirement already satisfied: python-dateutil>=2.1 in /Users/nmelnik/Library/Python/3.7/lib/python/site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image==0.16.2->spark-ocr==1.4.0rc1) (2.8.1)
Requirement already satisfied: kiwisolver>=1.0.1 in /Users/nmelnik/Library/Python/3.7/lib/python/site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image==0.16.2->spark-ocr==1.4.0rc1) (1.1.0)
Requirement already satisfied: decorator>=4.3.0 in /Users/nmelnik/Library/Python/3.7/lib/python/site-packages (from networkx>=2.0->scikit-image==0.16.2->spark-ocr==1.4.0rc1) (4.4.2)
Requirement already satisfied: six in /Users/nmelnik/Library/Python/3.7/lib/python/site-packages (from cycler>=0.10->matplotlib!=3.0.0,>=2.0.0->scikit-image==0.16.2->spark-ocr==1.4.0rc1) (1.14.0)
Building wheels for collected packages: spark-ocr
Building wheel for spark-ocr (setup.py) ... [?25ldone
[?25h Created wheel for spark-ocr: filename=spark_ocr-1.4.0rc1-cp37-none-any.whl size=5750346 sha256=7b04d0c07ab43a1eb7c9ef59382922918115024c603cb96a84cf8feaa524bd07
Stored in directory: /Users/nmelnik/Library/Caches/pip/wheels/27/e6/3a/94eb4767acaee05cd2729c575d65a8d1ec636533171c941d2e
Successfully built spark-ocr
Installing collected packages: spark-ocr
Found existing installation: spark-ocr 1.4.0rc1
Uninstalling spark-ocr-1.4.0rc1:
Successfully uninstalled spark-ocr-1.4.0rc1
Successfully installed spark-ocr-1.4.0rc1
[33mWARNING: You are using pip version 19.3.1; however, version 20.1.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.[0m
Note: you may need to restart the kernel to use updated packages.
###Markdown
Initialization of spark sessionNeed to specify the path to `spark-ocr-assembly-[version].jar` or `secret`
###Code
from pyspark.sql import SparkSession
from sparkocr import start
if license:
os.environ['JSL_OCR_LICENSE'] = license
spark = start(secret=secret, jar_path=spark_ocr_jar_path)
spark
###Output
SparkConf Configured, Starting to listen on port: 51399
JAR PATH:/usr/local/lib/python3.7/site-packages/sparkmonitor/listener.jar
###Markdown
Imports
###Code
from pyspark.ml import PipelineModel
from pyspark.sql import functions as F
from sparkocr.transformers import *
from sparkocr.enums import *
from sparkocr.utils import display_image
from sparkocr.metrics import score
###Output
_____no_output_____
###Markdown
Define OCR transformers
###Code
# Read binary as image
binary_to_image = BinaryToImage()
binary_to_image.setInputCol("content")
binary_to_image.setOutputCol("image")
# Scale image
scaler = ImageScaler()
scaler.setInputCol("image")
scaler.setOutputCol("scaled_image")
scaler.setScaleFactor(1.5)
# Binarize using adaptive tresholding
binarizer = ImageAdaptiveThresholding()
binarizer.setInputCol("scaled_image")
binarizer.setOutputCol("binarized_image")
binarizer.setBlockSize(91)
binarizer.setOffset(20)
# Apply morphology opening
opening = ImageMorphologyOperation()
opening.setKernelShape(KernelShape.SQUARE)
opening.setOperation(MorphologyOperationType.OPENING)
opening.setKernelSize(2)
opening.setInputCol("binarized_image")
opening.setOutputCol("opening_image")
# Remove small objects
remove_objects = ImageRemoveObjects()
remove_objects.setInputCol("binarized_image")
remove_objects.setOutputCol("corrected_image")
remove_objects.setMaxSizeObject(1000)
remove_objects.setMinSizeObject(None)
# Run tesseract OCR for each region
ocr_corrected = ImageToText()
ocr_corrected.setInputCol("corrected_image")
ocr_corrected.setOutputCol("text_corrected")
ocr_corrected.setPositionsCol("positions_corrected")
ocr_corrected.setConfidenceThreshold(75)
ocr = ImageToText()
ocr.setInputCol("image")
ocr.setOutputCol("text")
# OCR pipeline
pipeline = PipelineModel(stages=[
binary_to_image,
scaler,
binarizer,
opening,
remove_objects,
ocr,
ocr_corrected
])
###Output
_____no_output_____
###Markdown
Read image with noised background
###Code
import pkg_resources
imagePath = "data/images/grey_background.png"
image_df = spark.read.format("binaryFile").load(imagePath).cache()
image_df.show()
###Output
+--------------------+-------------------+------+--------------------+
| path| modificationTime|length| content|
+--------------------+-------------------+------+--------------------+
|file:/Users/nmeln...|2020-02-12 09:57:15|177501|[FF D8 FF E0 00 1...|
+--------------------+-------------------+------+--------------------+
###Markdown
Run OCR pipelines
###Code
result = pipeline \
.transform(image_df).cache()
###Output
_____no_output_____
###Markdown
Results of simple TesseractOCR
###Code
print("\n".join([row.text for row in result.select("text").collect()]))
###Output
Question: “What would you do as a result of seeing this label posted on a new car's
window? Mark all that apply’
Vertical [Horizontal
Sample size 233 223
Write down the MPG rating(s) of the automobile 55% 57%
Write down or record the particular datal was interested in | 53% 60%
Visit the website for more information 45% 45%
Write down the EPA-assigned grade of the automobile 43% x
Scan the QR code (2-D barcode) with my smartphone 15% 13%
Ignore the label and move on to other available information | 14% 7%
Other 6% 2%
###Markdown
Results with preprocessing
###Code
print("\n".join([row.text_corrected for row in result.select("text_corrected").collect()]))
###Output
Question: “What would you do as a result of seeing this label posted on a new car's
window? Mark all that apply”.
Vertical Horizontal
Sample size 233 223
Write down the MPG rating(s) of the automobile 55% 57%
Write down or record the particular data | was interested in 53% 60%
Visit the website for more information 45% 45%
Write down the EPA-assigned grade of the automobile 43% Xx
Scan the QR code (2-D barcode) with my smartphone 15% 13%
(gnore the label and move on to other available information 14% 7%
Other 6% 2%
###Markdown
Display original and corrected images
###Code
for r in result.distinct().collect():
print("Original: %s" % r.path)
display_image(r.image)
print("Corrected: %s" % r.path)
display_image(r.corrected_image)
###Output
Original: file:/Users/nmelnik/IdeaProjects/spark-ocr/workshop/jupyter/data/images/grey_background.png
Image:
origin: file:/Users/nmelnik/IdeaProjects/spark-ocr/workshop/jupyter/data/images/grey_background.png
width: 628
height: 335
mode: 10
###Markdown
Compute score and compare results
###Code
original_text = """Question: “What would you do as a result of seeing this label posted on a new car's
window? Mark all that apply”.
Vertical Horizontal
Sample size 233 223
Write down the MPG rating(s) of the automobile 55% 57%
Write down or record the particular data I was interested in 53% 60%
Visit the website for more information 45% 45%
Write down the EPA-assigned grade of the automobile 43% X
Scan the QR code (2-D barcode) with my smartphone 15% 13%
Ignore the label and move on to other available information 14% 7%
Other 6% 2%
"""
detected = "\n".join([row.text for row in result.collect()])
detected_corrected = "\n".join([row.text_corrected for row in result.collect()])
# Compute scores
tesseract_score = score(original_text, detected)
corrected_score = score(original_text, detected_corrected)
print("Score for simple Tesseract: {0}".format(tesseract_score))
print("Score Spark NLP: {0}".format(corrected_score))
###Output
Score for simple Tesseract: 0.8130081300813008
Score Spark NLP: 0.9847036328871893
###Markdown
OCR example for recognizing text from a table where some cells have a grey background Install spark-ocr python packageNeed to specify the path to `spark-ocr-assembly-[version].jar` or `secret`
###Code
secret = ""
license = ""
version = secret.split("-")[0]
spark_ocr_jar_path = "../../target/scala-2.11"
%%bash
if python -c 'import google.colab' &> /dev/null; then
echo "Run on Google Colab!"
echo "Install Open JDK"
apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
export JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64"
export PATH="$JAVA_HOME/bin:$PATH"
java -version
fi
import sys
import os
if 'google.colab' in sys.modules:
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
# install from PYPI using secret
#%pip install spark-ocr==$version\.spark24 --extra-index-url=https://pypi.johnsnowlabs.com/$secret --upgrade
# or install from local path
#%pip install ../../python/dist/spark-ocr-1.9.0.tar.gz
###Output
_____no_output_____
###Markdown
Initialization of spark session Need to specify path to `spark-ocr-assembly-[version].jar` or `secret`
###Code
from pyspark.sql import SparkSession
from sparkocr import start
if license:
os.environ['JSL_OCR_LICENSE'] = license
spark = start(secret=secret, jar_path=spark_ocr_jar_path)
spark
###Output
Spark version: 2.4.4
Spark NLP version: 2.5.5
Spark OCR version: 1.9.0
###Markdown
Imports
###Code
from pyspark.ml import PipelineModel
from pyspark.sql import functions as F
from sparkocr.transformers import *
from sparkocr.enums import *
from sparkocr.utils import display_image
from sparkocr.metrics import score
###Output
_____no_output_____
###Markdown
Define OCR transformers
###Code
# Read binary as image
binary_to_image = BinaryToImage()
binary_to_image.setInputCol("content")
binary_to_image.setOutputCol("image")
# Scale image
scaler = ImageScaler()
scaler.setInputCol("image")
scaler.setOutputCol("scaled_image")
scaler.setScaleFactor(1.5)
# Binarize using adaptive thresholding
binarizer = ImageAdaptiveThresholding()
binarizer.setInputCol("scaled_image")
binarizer.setOutputCol("binarized_image")
binarizer.setBlockSize(91)
binarizer.setOffset(20)
# Apply morphology opening
opening = ImageMorphologyOperation()
opening.setKernelShape(KernelShape.SQUARE)
opening.setOperation(MorphologyOperationType.OPENING)
opening.setKernelSize(2)
opening.setInputCol("binarized_image")
opening.setOutputCol("opening_image")
# Remove small objects
remove_objects = ImageRemoveObjects()
remove_objects.setInputCol("binarized_image")
remove_objects.setOutputCol("corrected_image")
remove_objects.setMaxSizeObject(1000)
remove_objects.setMinSizeObject(None)
# Run OCR for each region
ocr_corrected = ImageToText()
ocr_corrected.setInputCol("corrected_image")
ocr_corrected.setOutputCol("text_corrected")
ocr_corrected.setPositionsCol("positions_corrected")
ocr_corrected.setConfidenceThreshold(75)
ocr = ImageToText()
ocr.setInputCol("image")
ocr.setOutputCol("text")
# OCR pipeline
pipeline = PipelineModel(stages=[
binary_to_image,
scaler,
binarizer,
opening,
remove_objects,
ocr,
ocr_corrected
])
###Output
_____no_output_____
###Markdown
Read image with noised background
###Code
import pkg_resources
imagePath = "data/images/grey_background.png"
image_df = spark.read.format("binaryFile").load(imagePath).cache()
image_df.show()
###Output
+--------------------+-------------------+------+--------------------+
| path| modificationTime|length| content|
+--------------------+-------------------+------+--------------------+
|file:/Users/nmeln...|2020-02-12 05:57:15|177501|[FF D8 FF E0 00 1...|
+--------------------+-------------------+------+--------------------+
###Markdown
Run OCR pipelines
###Code
result = pipeline \
.transform(image_df).cache()
###Output
_____no_output_____
###Markdown
Results of simple OCR
###Code
print("\n".join([row.text for row in result.select("text").collect()]))
###Output
Question: “What would you do as a result of seeing this label posted on a new car's
window? Mark all that apply’
Vertical [Horizontal
Sample size 233 223
Write down the MPG rating(s) of the automobile 55% 57%
Write down or record the particular datal was interested in | 53% 60%
Visit the website for more information 45% 45%
Write down the EPA-assigned grade of the automobile 43% x
Scan the QR code (2-D barcode) with my smartphone 15% 13%
Ignore the label and move on to other available information | 14% 7%
Other 6% 2%
###Markdown
Results with preprocessing
###Code
print("\n".join([row.text_corrected for row in result.select("text_corrected").collect()]))
###Output
Question: “What would you do as a result of seeing this label posted on a new car's
window? Mark all that apply”.
Vertical Horizontal
Sample size 233 223
Write down the MPG rating(s) of the automobile 55% 57%
Write down or record the particular data | was interested in 53% 60%
Visit the website for more information 45% 45%
Write down the EPA-assigned grade of the automobile 43% Xx
Scan the QR code (2-D barcode) with my smartphone 15% 13%
(gnore the label and move on to other available information 14% 7%
Other 6% 2%
###Markdown
Display original and corrected images
###Code
for r in result.distinct().collect():
print("Original: %s" % r.path)
display_image(r.image)
print("Corrected: %s" % r.path)
display_image(r.corrected_image)
###Output
Original: file:/Users/nmelnik/IdeaProjects/spark-ocr/workshop/jupyter/data/images/grey_background.png
Image:
origin: file:/Users/nmelnik/IdeaProjects/spark-ocr/workshop/jupyter/data/images/grey_background.png
width: 628
height: 335
mode: 10
###Markdown
Compute score and compare results
###Code
original_text = """Question: “What would you do as a result of seeing this label posted on a new car's
window? Mark all that apply”.
Vertical Horizontal
Sample size 233 223
Write down the MPG rating(s) of the automobile 55% 57%
Write down or record the particular data I was interested in 53% 60%
Visit the website for more information 45% 45%
Write down the EPA-assigned grade of the automobile 43% X
Scan the QR code (2-D barcode) with my smartphone 15% 13%
Ignore the label and move on to other available information 14% 7%
Other 6% 2%
"""
detected = "\n".join([row.text for row in result.collect()])
detected_corrected = "\n".join([row.text_corrected for row in result.collect()])
# Compute scores
tesseract_score = score(original_text, detected)
corrected_score = score(original_text, detected_corrected)
print("Score for simple Tesseract: {0}".format(tesseract_score))
print("Score Spark NLP: {0}".format(corrected_score))
###Output
Score for simple Tesseract: 0.8130081300813008
Score Spark NLP: 0.9847036328871893
|
examples/python/ex4_bunny.ipynb | ###Markdown
Data Association of Noisy Stanford Bunny with Outliers
###Code
import time
import numpy as np
import open3d as o3d
from scipy.spatial.transform import Rotation
import clipper
def generate_dataset(pcfile, m, n1, n2o, outrat, sigma, T_21):
"""Generate Dataset
"""
pcd = o3d.io.read_point_cloud(pcfile)
n2 = n1 + n2o # number of points in view 2
noa = round(m * outrat) # number of outlier associations
nia = m - noa # number of inlier associations
if nia > n1:
raise ValueError("Cannot have more inlier associations "
"than there are model points. Increase"
"the number of points to sample from the"
"original point cloud model.")
# radius of outlier sphere
R = 1
# Downsample from the original point cloud, sample randomly
I = np.random.choice(len(pcd.points), n1, replace=False)
D1 = np.asarray(pcd.points)[I,:].T
# Rotate into view 2 using ground truth transformation
D2 = T_21[0:3,0:3] @ D1 + T_21[0:3,3].reshape(-1,1)
# Add noise uniformly sampled from a sigma cube around the true point
eta = np.random.uniform(low=-sigma/2., high=sigma/2., size=D2.shape)
# Add noise to view 2
D2 += eta
def randsphere(m,n,r):
from scipy.special import gammainc
X = np.random.randn(m, n)
s2 = np.sum(X**2, axis=1)
X = X * np.tile((r*(gammainc(n/2,s2/2)**(1/n)) / np.sqrt(s2)).reshape(-1,1),(1,n))
return X
# Add outliers to view 2
O2 = randsphere(n2o,3,R).T + D2.mean(axis=1).reshape(-1,1)
D2 = np.hstack((D2,O2))
# Correct associations to draw from
Agood = np.tile(np.arange(n1).reshape(-1,1),(1,2))
# Incorrect association to draw from
Abad = np.zeros((n1*n2 - n1, 2))
itr = 0
for i in range(n1):
for j in range(n2):
if i == j:
continue
Abad[itr,:] = [i, j]
itr += 1
# Sample good and bad associations to satisfy total
# num of associations with the requested outlier ratio
IAgood = np.random.choice(Agood.shape[0], nia, replace=False)
IAbad = np.random.choice(Abad.shape[0], noa, replace=False)
A = np.concatenate((Agood[IAgood,:],Abad[IAbad,:]))
# Ground truth associations
Agt = Agood[IAgood,:]
return (D1, D2, Agt, A)
m = 1000 # total number of associations in problem
n1 = 1000 # number of points used on model (i.e., seen in view 1)
n2o = 250 # number of outliers in data (i.e., seen in view 2)
outrat = 0.95 # outlier ratio of initial association set
sigma = 0.02 # uniform noise [m] range
# generate random (R,t)
T_21 = np.eye(4)
T_21[0:3,0:3] = Rotation.random().as_matrix()
T_21[0:3,3] = np.random.uniform(low=-5, high=5, size=(3,))
pcfile = '../data/bun1k.ply'
D1, D2, Agt, A = generate_dataset(pcfile, m, n1, n2o, outrat, sigma, T_21)
params = clipper.Params()
iparams = clipper.invariants.EuclideanDistanceParams()
iparams.sigma = 0.015
iparams.epsilon = 0.02
invariant = clipper.invariants.EuclideanDistance(iparams)
t0 = time.time()
M, C = clipper.score_pairwise_consistency(invariant, D1, D2, A)
t1 = time.time()
print(f"Affinity matrix creation took {t1-t0:.3f} seconds")
t0 = time.time()
soln = clipper.find_dense_cluster(M, C, params)
t1 = time.time()
Ain = clipper.select_inlier_associations(soln, A)
p = np.isin(Ain, Agt)[:,0].sum() / Ain.shape[0]
r = np.isin(Ain, Agt)[:,0].sum() / Agt.shape[0]
print(f"CLIPPER selected {Ain.shape[0]} inliers from {A.shape[0]} "
f"putative associations (precision {p:.2f}, recall {r:.2f}) in {t1-t0:.3f} s")
model = o3d.geometry.PointCloud()
model.points = o3d.utility.Vector3dVector(D1.T)
model.paint_uniform_color(np.array([0,0,1.]))
data = o3d.geometry.PointCloud()
data.points = o3d.utility.Vector3dVector(D2.T)
data.paint_uniform_color(np.array([1.,0,0]))
# corr = o3d.geometry.LineSet.create_from_point_cloud_correspondences(model, data, Ain)
# o3d.visualization.draw_geometries([model, data, corr])
p2p = o3d.pipelines.registration.TransformationEstimationPointToPoint()
That_21 = p2p.compute_transformation(model, data, o3d.utility.Vector2iVector(Ain))
# Compute the rotation error (angle of the residual rotation, in radians) and the
# translation error (norm of the residual translation) between T and its estimate That.
def get_err(T, That):
Terr = np.linalg.inv(T) @ That
rerr = abs(np.arccos(min(max(((Terr[0:3,0:3]).trace() - 1) / 2, -1.0), 1.0)))
terr = np.linalg.norm(Terr[0:3,3])
return (rerr, terr)
get_err(T_21, That_21)
def draw_registration_result(source, target, transformation):
import copy
source_temp = copy.deepcopy(source)
target_temp = copy.deepcopy(target)
source_temp.paint_uniform_color([1, 0.706, 0])
target_temp.paint_uniform_color([0, 0.651, 0.929])
source_temp.transform(transformation)
o3d.visualization.draw_geometries([source_temp, target_temp])
draw_registration_result(model, data, That_21)
###Output
_____no_output_____
###Markdown
--- Custom Invariant FunctionFor most cases, we recommend using the provided invariants written in C++ for computational efficiency. In particular, for C++ invariant implementations, we use `OpenMP` to parallelize the computation of the affinity matrix.However, for quick tests and prototyping it can be convenient to test invariants using Python. In this case, you can extend the C++ `clipper.invariants.PairwiseInvariant` class in Python. Note that this method disables the `OpenMP` parallelization and so will be many times slower than a C++ implementation. On average, for the following Python example invariant, the `score_pairwise_consistency` method takes 6 seconds for 1000 initial associations.
###Code
class Custom(clipper.invariants.PairwiseInvariant):
def __init__(self, σ=0.06, ϵ=0.01):
clipper.invariants.PairwiseInvariant.__init__(self)
self.σ = σ
self.ϵ = ϵ
def __call__(self, ai, aj, bi, bj):
l1 = np.linalg.norm(ai - aj)
l2 = np.linalg.norm(bi - bj)
c = np.abs(l1 - l2)
return np.exp(-0.5*c**2/self.σ**2) if c < self.ϵ else 0
c = Custom(σ=0.015, ϵ=0.02)
params = clipper.Params()
t0 = time.time()
M, C = clipper.score_pairwise_consistency(c, D1, D2, A)
t1 = time.time()
print(f"Affinity matrix creation took {t1-t0:.3f} seconds")
t0 = time.time()
soln = clipper.find_dense_cluster(M, C, params)
t1 = time.time()
Ain = clipper.select_inlier_associations(soln, A)
p = np.isin(Ain, Agt)[:,0].sum() / Ain.shape[0]
r = np.isin(Ain, Agt)[:,0].sum() / Agt.shape[0]
print(f"CLIPPER selected {Ain.shape[0]} inliers from {A.shape[0]} "
f"putative associations (precision {p:.2f}, recall {r:.2f}) in {t1-t0:.3f} s")
###Output
_____no_output_____
###Markdown
Pure Python Implementation of Pairwise Consistency Scoring
###Code
# Map a linear index k over the strict upper triangle of an n-by-n matrix
# back to its (i, j) row/column pair (0-based).
def k2ij(k, n):
k += 1
l = n * (n-1) / 2 - k
o = np.floor( (np.sqrt(1 + 8*l) - 1) / 2. )
p = l - o * (o + 1) / 2
i = n - (o + 1)
j = n - p
return int(i-1), int(j-1)
def score_pairwise_consistency(invariant, D1, D2, A):
if A is None:
A = clipper.invariants.create_all_to_all(D1.shape[1], D2.shape[1])
m = A.shape[0]
M = np.eye(m)
C = np.ones((m,m))
for k in range(int(m*(m-1)/2)):
i, j = k2ij(k, m)
if A[i,0] == A[j,0] or A[i,1] == A[j,1]:
C[i,j] = C[j,i] = 0
continue
d1i = D1[:,A[i,0]]
d1j = D1[:,A[j,0]]
d2i = D2[:,A[i,1]]
d2j = D2[:,A[j,1]]
scr = invariant(d1i,d1j,d2i,d2j)
if scr > 0:
M[i,j] = M[j,i] = scr
else:
C[i,j] = C[j,i] = 0
return M, C
t0 = time.time()
_, _ = score_pairwise_consistency(c, D1, D2, A.astype('int'))
t1 = time.time()
print(f"Affinity matrix creation took {t1-t0:.3f} seconds")
###Output
_____no_output_____
###Markdown
Data Association of Noisy Stanford Bunny with Outliers
###Code
import time
import numpy as np
import open3d as o3d
from scipy.spatial.transform import Rotation
import clipper
def generate_dataset(pcfile, m, n1, n2o, outrat, sigma, T_21):
"""Generate Dataset
"""
pcd = o3d.io.read_point_cloud(pcfile)
n2 = n1 + n2o # number of points in view 2
noa = round(m * outrat) # number of outlier associations
nia = m - noa # number of inlier associations
if nia > n1:
raise ValueError("Cannot have more inlier associations "
"than there are model points. Increase"
"the number of points to sample from the"
"original point cloud model.")
# radius of outlier sphere
R = 1
# Downsample from the original point cloud, sample randomly
I = np.random.choice(len(pcd.points), n1, replace=False)
D1 = np.asarray(pcd.points)[I,:].T
# Rotate into view 2 using ground truth transformation
D2 = T_21[0:3,0:3] @ D1 + T_21[0:3,3].reshape(-1,1)
# Add noise uniformly sampled from a sigma cube around the true point
eta = np.random.uniform(low=-sigma/2., high=sigma/2., size=D2.shape)
# Add noise to view 2
D2 += eta
def randsphere(m,n,r):
from scipy.special import gammainc
X = np.random.randn(m, n)
s2 = np.sum(X**2, axis=1)
X = X * np.tile((r*(gammainc(n/2,s2/2)**(1/n)) / np.sqrt(s2)).reshape(-1,1),(1,n))
return X
# Add outliers to view 2
O2 = randsphere(n2o,3,R).T + D2.mean(axis=1).reshape(-1,1)
D2 = np.hstack((D2,O2))
# Correct associations to draw from
Agood = np.tile(np.arange(n1).reshape(-1,1),(1,2))
# Incorrect association to draw from
Abad = np.zeros((n1*n2 - n1, 2))
itr = 0
for i in range(n1):
for j in range(n2):
if i == j:
continue
Abad[itr,:] = [i, j]
itr += 1
# Sample good and bad associations to satisfy total
# num of associations with the requested outlier ratio
IAgood = np.random.choice(Agood.shape[0], nia, replace=False)
IAbad = np.random.choice(Abad.shape[0], noa, replace=False)
A = np.concatenate((Agood[IAgood,:],Abad[IAbad,:]))
# Ground truth associations
Agt = Agood[IAgood,:]
return (D1, D2, Agt, A)
m = 1000 # total number of associations in problem
n1 = 1000 # number of points used on model (i.e., seen in view 1)
n2o = 250 # number of outliers in data (i.e., seen in view 2)
outrat = 0.95 # outlier ratio of initial association set
sigma = 0.02 # uniform noise [m] range
# generate random (R,t)
T_21 = np.eye(4)
T_21[0:3,0:3] = Rotation.random().as_matrix()
T_21[0:3,3] = np.random.uniform(low=-5, high=5, size=(3,))
pcfile = '../data/bun1k.ply'
D1, D2, Agt, A = generate_dataset(pcfile, m, n1, n2o, outrat, sigma, T_21)
params = clipper.Params()
params.beta = 0.25 # this was the value in the original version of the repo
iparams = clipper.invariants.EuclideanDistanceParams()
iparams.sigma = 0.015
iparams.epsilon = 0.02
invariant = clipper.invariants.EuclideanDistance(iparams)
t0 = time.time()
M, C = clipper.score_pairwise_consistency(invariant, D1, D2, A)
t1 = time.time()
print(f"Affinity matrix creation took {t1-t0:.3f} seconds")
t0 = time.time()
soln = clipper.find_dense_cluster(M, C, params)
t1 = time.time()
Ain = clipper.select_inlier_associations(soln, A)
dense_duration = t1 - t0
p = np.isin(Ain, Agt)[:,0].sum() / Ain.shape[0]
r = np.isin(Ain, Agt)[:,0].sum() / Agt.shape[0]
print(f"CLIPPER selected {Ain.shape[0]} inliers from {A.shape[0]} "
f"putative associations (precision {p:.2f}, recall {r:.2f}) in {t1-t0:.3f} s")
model = o3d.geometry.PointCloud()
model.points = o3d.utility.Vector3dVector(D1.T)
model.paint_uniform_color(np.array([0,0,1.]))
data = o3d.geometry.PointCloud()
data.points = o3d.utility.Vector3dVector(D2.T)
data.paint_uniform_color(np.array([1.,0,0]))
# corr = o3d.geometry.LineSet.create_from_point_cloud_correspondences(model, data, Ain)
# o3d.visualization.draw_geometries([model, data, corr])
p2p = o3d.pipelines.registration.TransformationEstimationPointToPoint()
That_21 = p2p.compute_transformation(model, data, o3d.utility.Vector2iVector(Ain))
# Compute the rotation error (angle of the residual rotation, in radians) and the
# translation error (norm of the residual translation) between T and its estimate That.
def get_err(T, That):
Terr = np.linalg.inv(T) @ That
rerr = abs(np.arccos(min(max(((Terr[0:3,0:3]).trace() - 1) / 2, -1.0), 1.0)))
terr = np.linalg.norm(Terr[0:3,3])
return (rerr, terr)
get_err(T_21, That_21)
def draw_registration_result(source, target, transformation):
import copy
source_temp = copy.deepcopy(source)
target_temp = copy.deepcopy(target)
source_temp.paint_uniform_color([1, 0.706, 0])
target_temp.paint_uniform_color([0, 0.651, 0.929])
source_temp.transform(transformation)
o3d.visualization.draw_geometries([source_temp, target_temp])
draw_registration_result(model, data, That_21)
###Output
_____no_output_____
###Markdown
Exploiting Sparsity in the Affinity Matrix We run the above with the sparsity-aware version of CLIPPER
###Code
params = clipper.Params()
iparams = clipper.invariants.EuclideanDistanceParams()
iparams.sigma = 0.015
iparams.epsilon = 0.02
invariant = clipper.invariants.EuclideanDistance(iparams)
t0 = time.time()
M, C = clipper.score_sparse_pairwise_consistency(invariant, D1, D2, A)
t1 = time.time()
print(f"Affinity matrix creation took {t1-t0:.3f} seconds")
t0 = time.time()
soln = clipper.find_dense_cluster_of_sparse_graph(M, C, params)
t1 = time.time()
Ain = clipper.select_inlier_associations(soln, A)
sparse_duration = t1 - t0
p = np.isin(Ain, Agt)[:,0].sum() / Ain.shape[0]
r = np.isin(Ain, Agt)[:,0].sum() / Agt.shape[0]
print(f"sparse-aware CLIPPER selected {Ain.shape[0]} inliers from {A.shape[0]} "
f"putative associations (precision {p:.2f}, recall {r:.2f}) in {t1-t0:.3f} s")
print(f"Speed-up: {dense_duration / sparse_duration}")
p2p = o3d.pipelines.registration.TransformationEstimationPointToPoint()
That_21 = p2p.compute_transformation(model, data, o3d.utility.Vector2iVector(Ain))
get_err(T_21, That_21)
draw_registration_result(model, data, That_21)
###Output
_____no_output_____
###Markdown
--- Custom Invariant FunctionFor most cases, we recommend using the provided invariants written in C++ for computational efficiency. In particular, for C++ invariant implementations, we use `OpenMP` to parallelize the computation of the affinity matrix.However, for quick tests and prototyping it can be convenient to test invariants using Python. In this case, you can extend the C++ `clipper.invariants.PairwiseInvariant` class in Python. Note that this method disables the `OpenMP` parallelization and so will be many times slower than a C++ implementation. On average, for the following Python example invariant, the `score_pairwise_consistency` method takes 6 seconds for 1000 initial associations.
###Code
class Custom(clipper.invariants.PairwiseInvariant):
def __init__(self, σ=0.06, ϵ=0.01):
clipper.invariants.PairwiseInvariant.__init__(self)
self.σ = σ
self.ϵ = ϵ
def __call__(self, ai, aj, bi, bj):
l1 = np.linalg.norm(ai - aj)
l2 = np.linalg.norm(bi - bj)
c = np.abs(l1 - l2)
return np.exp(-0.5*c**2/self.σ**2) if c < self.ϵ else 0
c = Custom(σ=0.015, ϵ=0.02)
params = clipper.Params()
t0 = time.time()
M, C = clipper.score_pairwise_consistency(c, D1, D2, A)
t1 = time.time()
print(f"Affinity matrix creation took {t1-t0:.3f} seconds")
t0 = time.time()
soln = clipper.find_dense_cluster(M, C, params)
t1 = time.time()
Ain = clipper.select_inlier_associations(soln, A)
p = np.isin(Ain, Agt)[:,0].sum() / Ain.shape[0]
r = np.isin(Ain, Agt)[:,0].sum() / Agt.shape[0]
print(f"CLIPPER selected {Ain.shape[0]} inliers from {A.shape[0]} "
f"putative associations (precision {p:.2f}, recall {r:.2f}) in {t1-t0:.3f} s")
###Output
_____no_output_____
###Markdown
Pure Python Implementation of Pairwise Consistency Scoring
###Code
# Map a linear index k over the strict upper triangle of an n-by-n matrix
# back to its (i, j) row/column pair (0-based).
def k2ij(k, n):
k += 1
l = n * (n-1) / 2 - k
o = np.floor( (np.sqrt(1 + 8*l) - 1) / 2. )
p = l - o * (o + 1) / 2
i = n - (o + 1)
j = n - p
return int(i-1), int(j-1)
def score_pairwise_consistency(invariant, D1, D2, A):
if A is None:
A = clipper.invariants.create_all_to_all(D1.shape[1], D2.shape[1])
m = A.shape[0]
M = np.eye(m)
C = np.ones((m,m))
for k in range(int(m*(m-1)/2)):
i, j = k2ij(k, m)
if A[i,0] == A[j,0] or A[i,1] == A[j,1]:
C[i,j] = C[j,i] = 0
continue
d1i = D1[:,A[i,0]]
d1j = D1[:,A[j,0]]
d2i = D2[:,A[i,1]]
d2j = D2[:,A[j,1]]
scr = invariant(d1i,d1j,d2i,d2j)
if scr > 0:
M[i,j] = M[j,i] = scr
else:
C[i,j] = C[j,i] = 0
return M, C
t0 = time.time()
_, _ = score_pairwise_consistency(c, D1, D2, A.astype('int'))
t1 = time.time()
print(f"Affinity matrix creation took {t1-t0:.3f} seconds")
###Output
_____no_output_____ |
docs/contents/tools/classes/openmm_Context/is_openmm_Context.ipynb | ###Markdown
Is openmm.Context
###Code
from molsysmt.tools import openmm_Context
#openmm_Context.is_openmm_Context()
###Output
_____no_output_____ |
notebooks/bootstrap-pipeline.ipynb | ###Markdown
data processing
###Code
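# Editor's note (sketch): the imports below are not part of the original notebook; they are
# inferred from the calls used later in this notebook (pickle, numpy, tqdm, scikit-learn,
# LightGBM) and may need to be adjusted to the project's actual environment.
import pickle
import numpy
from tqdm import tqdm
from lightgbm import LGBMClassifier
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import roc_curve, auc, precision_recall_curve, confusion_matrix
from sklearn.multiclass import OneVsRestClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MultiLabelBinarizer
# tag_binarizer, tag_counts and min_occurence are referenced later but not defined in this
# excerpt; a plausible initialisation (assumption) would be:
# tag_binarizer = MultiLabelBinarizer()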
"""
'dataset.pkl' contains your dataset of features. It is a dict with a unique key corresponding to each case.
"""
with open('dataset.pkl', 'rb') as file:
data = pickle.load(file)
"""
'regex_match.pkl' contains meddra terms matched to each case using a regex engine.
It is a dict with the same keys as 'dataset.pkl'.
"""
with open('regex_match.pkl', 'rb') as file:
regex_match = pickle.load(file)
"""
'tags.pkl' contains the meddra tags that correspond to your dataset. It is a dict with the same keys as 'dataset.pkl'.
We only keep the most common terms (i.e. those with a number of occurrences greater than the min_occurence parameter).
"""
with open('tags.pkl', 'rb') as file:
tags = pickle.load(file)
X = []
Y = []
re_match = []
"""
We build the X and Y arrays from our features and tags.
X components are numeric vectors of features; they can be a mixture of text
vectorisation (using TF-IDF or any text embedding algorithm), numerical
features (age, weight, ...) and one-hot encoding of categorical features (gender).
"""
"""
/!\ If you use a non-pretrained text vectorization model, you should fit it on the train
sample after the train-test split (next cell) to avoid introducing bias in your evaluation. Indeed,
if you compute, for instance, TF-IDF on the whole dataset (i.e. before splitting), test data will be
used for word frequency computation.
"""
for key, value in data.items():
X.append(value)
Y.append(tags[key])
re_match.append(regex_match[key])
###Output
_____no_output_____
###Markdown
Bootstrap
###Code
def export_results(pred, true, model_name):
"""
Function for computing and exporting the metrics of one simulation run.
"""
fpr, tpr, _ = roc_curve(true, pred)
roc_auc = auc(fpr, tpr)
p, r, t = precision_recall_curve(true, pred)
F1 = 2 * (p * r) / (p + r)
F1 = [x if x==x else 0 for x in F1]
th = t[numpy.argmax(F1)]
tn, fp, fn, tp = confusion_matrix(true, [0 if x < th else 1 for x in pred]).ravel()
with open('result_bootstrap.txt', 'a') as file:
file.write(
'\t'.join(
[
model_name,
str(max(F1)),
str(tn),
str(tp),
str(fn),
str(fp),
str(roc_auc),
'ROC_fpr',
'$'.join([str(x) for x in fpr]),
'ROC_tpr',
'$'.join([str(x) for x in tpr])
]
) + '\n'
)
import warnings
warnings.filterwarnings("ignore")
n_split = 1000
"""
We use ShuffleSplit to generate bootstrap samples indexes.
"""
rs = ShuffleSplit(n_splits=n_split, test_size=.1, random_state=0)
for train_index, test_index in tqdm(rs.split(X, Y)):
"""
We build our train test sets
"""
X_train, X_test = [X[i] for i in train_index], [X[i] for i in test_index]
Y_train, Y_test = [Y[i] for i in train_index], [Y[i] for i in test_index]
regex_train, regex_test = [regex_match[i] for i in train_index], [regex_match[i] for i in test_index]
# we binarize the tags as well as the regex matches
Y_train = tag_binarizer.fit_transform(Y_train)
Y_test = tag_binarizer.transform(Y_test)
regex_test_bin = tag_binarizer.transform(regex_test)
regex_test_filtered = [[s for s in l if s in tag_counts.keys() and tag_counts[s] >= min_occurence] for l in regex_test]
"""
We train and test each model on the current split and then export the results.
"""
# regex only
regex_test_flat = numpy.hstack(regex_test_bin)
export_results(regex_test_flat, Y_test.flatten('C'), 'regex')
# train dataset
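# Editor's note (sketch): X_train_vec / X_test_vec are used below but their construction
# is not shown in this excerpt. Assuming the raw features are text, one plausible way to
# build them -- fitted on the train split only, as the warning above recommends -- is:
#
#   from sklearn.feature_extraction.text import TfidfVectorizer
#   vectorizer = TfidfVectorizer()
#   X_train_vec = vectorizer.fit_transform(X_train)
#   X_test_vec = vectorizer.transform(X_test)
#
# If X already holds numeric feature vectors, simply use X_train_vec, X_test_vec = X_train, X_test.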
# lgbm
lgbm = OneVsRestClassifier(
LGBMClassifier(
max_depth=2,
n_estimators=50
),
n_jobs=10
)
lgbm.fit(X_train_vec, Y_train)
pred_test = lgbm.predict_proba(X_test_vec)
pred_test_flat = numpy.hstack(pred_test)
export_results(pred_test_flat, Y_test.flatten('C'), 'lgbm')
# lgbm + regex
pred_test_regex = pred_test + regex_test_bin
pred_test_regex = numpy.minimum(pred_test_regex, numpy.ones(pred_test_regex.shape))
pred_test_regex_flat = numpy.hstack(pred_test_regex)
export_results(pred_test_regex_flat, Y_test.flatten('C'), 'lgbm + regex')
# Random Forest
clf = RandomForestClassifier(
n_estimators=200,
max_depth=4,
n_jobs=8
)
clf.fit(X_train_vec, Y_train)
pred_test = clf.predict_proba(X_test_vec)
pred_test_flat = numpy.vstack(pred_test)
pred_test_flat = [t[1] for t in pred_test_flat]
export_results(pred_test_flat, Y_test.flatten('F'), 'random_forests')
# SVM
svc = OneVsRestClassifier(
SVC(probability=True),
n_jobs=8
)
svc.fit(X_train_vec, Y_train)
pred_test = svc.predict_proba(X_test_vec)
pred_test_flat = numpy.hstack(pred_test)
export_results(pred_test_flat, Y_test.flatten('C'), 'svm')
# logit
logit = OneVsRestClassifier(
LogisticRegression(
multi_class='ovr'
),
n_jobs=8
)
logit.fit(X_train_vec, Y_train)
pred_test = logit.predict_proba(X_test_vec)
pred_test_flat = numpy.hstack(pred_test)
export_results(pred_test_flat, Y_test.flatten('C'), 'logit')
###Output
_____no_output_____ |
python-scripts/data_analytics_learn/L1_Starter_Code.ipynb | ###Markdown
Before we get started, a couple of reminders to keep in mind when using iPython notebooks:- Remember that you can see from the left side of a code cell when it was last run if there is a number within the brackets.- When you start a new notebook session, make sure you run all of the cells up to the point where you last left off. Even if the output is still visible from when you ran the cells in your previous session, the kernel starts in a fresh state so you'll need to reload the data, etc. on a new session.- The previous point is useful to keep in mind if your answers do not match what is expected in the lesson's quizzes. Try reloading the data and run all of the processing steps one by one in order to make sure that you are working with the same variables and data that are at each quiz stage. Load Data from CSVs
###Code
import unicodecsv
## Longer version of code (replaced with shorter, equivalent version below)
# enrollments = []
# f = open('enrollments.csv', 'rb')
# reader = unicodecsv.DictReader(f)
# for row in reader:
# enrollments.append(row)
# f.close()
def read_csv(filename):
with open(filename, 'rb') as f:
reader = unicodecsv.DictReader(f)
records = list(reader)
return records
#####################################
# 1 #
#####################################
## Read in the data from daily_engagement.csv and project_submissions.csv
## and store the results in the below variables.
## Then look at the first row of each table.
enrollments = read_csv('enrollments.csv')
daily_engagement = read_csv('daily_engagement.csv')
project_submissions = read_csv('project_submissions.csv')
###Output
_____no_output_____
###Markdown
Fixing Data Types
###Code
from datetime import datetime as dt
# Takes a date as a string, and returns a Python datetime object.
# If there is no date given, returns None
def parse_date(date):
if date == '':
return None
else:
return dt.strptime(date, '%Y-%m-%d')
# Takes a string which is either an empty string or represents an integer,
# and returns an int or None.
def parse_maybe_int(i):
if i == '':
return None
else:
return int(i)
# Clean up the data types in the enrollments table
for enrollment in enrollments:
enrollment['cancel_date'] = parse_date(enrollment['cancel_date'])
enrollment['days_to_cancel'] = parse_maybe_int(enrollment['days_to_cancel'])
enrollment['is_canceled'] = enrollment['is_canceled'] == 'True'
enrollment['is_udacity'] = enrollment['is_udacity'] == 'True'
enrollment['join_date'] = parse_date(enrollment['join_date'])
enrollments[0]
# Clean up the data types in the engagement table
for engagement_record in daily_engagement:
engagement_record['lessons_completed'] = int(float(engagement_record['lessons_completed']))
engagement_record['num_courses_visited'] = int(float(engagement_record['num_courses_visited']))
engagement_record['projects_completed'] = int(float(engagement_record['projects_completed']))
engagement_record['total_minutes_visited'] = float(engagement_record['total_minutes_visited'])
engagement_record['utc_date'] = parse_date(engagement_record['utc_date'])
daily_engagement[0]
# Clean up the data types in the submissions table
for submission in project_submissions:
submission['completion_date'] = parse_date(submission['completion_date'])
submission['creation_date'] = parse_date(submission['creation_date'])
project_submissions[0]
###Output
_____no_output_____
###Markdown
Note when running the above cells that we are actively changing the contents of our data variables. If you try to run these cells multiple times in the same session, an error will occur. Investigating the Data
###Code
#####################################
# 2 #
#####################################
## Find the total number of rows and the number of unique students (account keys)
## in each table.
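# Editor's sketch (one possible approach): count rows and unique account keys in each
# table. Note that daily_engagement still uses 'acct' as its key column at this point.
def describe_table(data, key_name='account_key'):
    unique_students = set(row[key_name] for row in data)
    print(len(data), len(unique_students))

describe_table(enrollments)
describe_table(daily_engagement, key_name='acct')
describe_table(project_submissions)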
###Output
_____no_output_____
###Markdown
Problems in the Data
###Code
#####################################
# 3 #
#####################################
## Rename the "acct" column in the daily_engagement table to "account_key".
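# Editor's sketch (one possible approach): copy 'acct' into 'account_key' and remove the
# old key so all three tables share the same column name.
for engagement_record in daily_engagement:
    engagement_record['account_key'] = engagement_record['acct']
    del engagement_record['acct']
daily_engagement[0]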
###Output
_____no_output_____
###Markdown
Missing Engagement Records
###Code
#####################################
# 4 #
#####################################
## Find any one student enrollment where the student is missing from the daily engagement table.
## Output that enrollment.
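# Editor's sketch (one possible approach), assuming 'acct' has been renamed to
# 'account_key' in the previous step.
engagement_accounts = set(record['account_key'] for record in daily_engagement)
for enrollment in enrollments:
    if enrollment['account_key'] not in engagement_accounts:
        print(enrollment)
        break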
###Output
_____no_output_____
###Markdown
Checking for More Problem Records
###Code
#####################################
# 5 #
#####################################
## Find the number of surprising data points (enrollments missing from
## the engagement table) that remain, if any.
###Output
_____no_output_____
###Markdown
Tracking Down the Remaining Problems
###Code
# Create a set of the account keys for all Udacity test accounts
udacity_test_accounts = set()
for enrollment in enrollments:
if enrollment['is_udacity']:
udacity_test_accounts.add(enrollment['account_key'])
len(udacity_test_accounts)
# Given some data with an account_key field, removes any records corresponding to Udacity test accounts
def remove_udacity_accounts(data):
non_udacity_data = []
for data_point in data:
if data_point['account_key'] not in udacity_test_accounts:
non_udacity_data.append(data_point)
return non_udacity_data
# Remove Udacity test accounts from all three tables
non_udacity_enrollments = remove_udacity_accounts(enrollments)
non_udacity_engagement = remove_udacity_accounts(daily_engagement)
non_udacity_submissions = remove_udacity_accounts(project_submissions)
print(len(non_udacity_enrollments))
print(len(non_udacity_engagement))
print(len(non_udacity_submissions))
###Output
_____no_output_____
###Markdown
Refining the Question
###Code
#####################################
# 6 #
#####################################
## Create a dictionary named paid_students containing all students who either
## haven't canceled yet or who remained enrolled for more than 7 days. The keys
## should be account keys, and the values should be the date the student enrolled.
# One possible solution (editor's sketch): keep the most recent join date per student
# who either has not canceled or stayed enrolled for more than 7 days.
paid_students = {}
for enrollment in non_udacity_enrollments:
    if (not enrollment['is_canceled'] or enrollment['days_to_cancel'] > 7):
        account_key = enrollment['account_key']
        enrollment_date = enrollment['join_date']
        if (account_key not in paid_students or
                enrollment_date > paid_students[account_key]):
            paid_students[account_key] = enrollment_date
len(paid_students)
###Output
_____no_output_____
###Markdown
Getting Data from First Week
###Code
# Takes a student's join date and the date of a specific engagement record,
# and returns True if that engagement record happened within one week
# of the student joining.
def within_one_week(join_date, engagement_date):
time_delta = engagement_date - join_date
return time_delta.days < 7
#####################################
# 7 #
#####################################
## Create a list of rows from the engagement table including only rows where
## the student is one of the paid students you just found, and the date is within
## one week of the student's join date.
# One possible solution (editor's sketch), assuming the engagement records use the
# 'account_key' field (renamed from 'acct' earlier) and 'utc_date' for the date.
paid_engagement_in_first_week = []
for engagement_record in non_udacity_engagement:
    account_key = engagement_record['account_key']
    if account_key in paid_students:
        join_date = paid_students[account_key]
        if within_one_week(join_date, engagement_record['utc_date']):
            paid_engagement_in_first_week.append(engagement_record)
len(paid_engagement_in_first_week)
###Output
_____no_output_____
###Markdown
Exploring Student Engagement
###Code
from collections import defaultdict
# Create a dictionary of engagement grouped by student.
# The keys are account keys, and the values are lists of engagement records.
engagement_by_account = defaultdict(list)
for engagement_record in paid_engagement_in_first_week:
account_key = engagement_record['account_key']
engagement_by_account[account_key].append(engagement_record)
# Create a dictionary with the total minutes each student spent in the classroom during the first week.
# The keys are account keys, and the values are numbers (total minutes)
total_minutes_by_account = {}
for account_key, engagement_for_student in engagement_by_account.items():
total_minutes = 0
for engagement_record in engagement_for_student:
total_minutes += engagement_record['total_minutes_visited']
total_minutes_by_account[account_key] = total_minutes
import numpy as np
# Summarize the data about minutes spent in the classroom
total_minutes = list(total_minutes_by_account.values())
print('Mean:', np.mean(total_minutes))
print('Standard deviation:', np.std(total_minutes))
print('Minimum:', np.min(total_minutes))
print('Maximum:', np.max(total_minutes))
###Output
_____no_output_____
###Markdown
Debugging Data Analysis Code
###Code
#####################################
# 8 #
#####################################
## Go through a similar process as before to see if there is a problem.
## Locate at least one surprising piece of data, output it, and take a look at it.
###Output
_____no_output_____
###Markdown
Lessons Completed in First Week
###Code
#####################################
# 9 #
#####################################
## Adapt the code above to find the mean, standard deviation, minimum, and maximum for
## the number of lessons completed by each student during the first week. Try creating
## one or more functions to re-use the code above.
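# Editor's sketch (one possible refactoring): sum any per-day metric by student over the
# first week, then describe its distribution; these helpers are re-used in later sketches.
def sum_grouped_items(grouped_data, field_name):
    summed_data = {}
    for account_key, records in grouped_data.items():
        summed_data[account_key] = sum(record[field_name] for record in records)
    return summed_data

def describe_data(values):
    print('Mean:', np.mean(values))
    print('Standard deviation:', np.std(values))
    print('Minimum:', np.min(values))
    print('Maximum:', np.max(values))

lessons_by_account = sum_grouped_items(engagement_by_account, 'lessons_completed')
describe_data(list(lessons_by_account.values()))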
###Output
_____no_output_____
###Markdown
Number of Visits in First Week
###Code
######################################
# 10 #
######################################
## Find the mean, standard deviation, minimum, and maximum for the number of
## days each student visits the classroom during the first week.
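# Editor's sketch (one possible approach): a day counts as a visit if at least one course
# was visited; re-uses describe_data from the sketch above.
days_visited_by_account = {}
for account_key, records in engagement_by_account.items():
    visited_days = set(record['utc_date'] for record in records
                       if record['num_courses_visited'] > 0)
    days_visited_by_account[account_key] = len(visited_days)
describe_data(list(days_visited_by_account.values()))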
###Output
_____no_output_____
###Markdown
Splitting out Passing Students
###Code
######################################
# 11 #
######################################
## Create two lists of engagement data for paid students in the first week.
## The first list should contain data for students who eventually pass the
## subway project, and the second list should contain data for students
## who do not.
subway_project_lesson_keys = ['746169184', '3176718735']
# One possible solution (editor's sketch), assuming each submission record has
# 'lesson_key' and 'assigned_rating' fields.
pass_subway_project = set()
for submission in non_udacity_submissions:
    if (submission['lesson_key'] in subway_project_lesson_keys and
            submission['assigned_rating'] in ('PASSED', 'DISTINCTION')):
        pass_subway_project.add(submission['account_key'])

passing_engagement = []
non_passing_engagement = []
for engagement_record in paid_engagement_in_first_week:
    if engagement_record['account_key'] in pass_subway_project:
        passing_engagement.append(engagement_record)
    else:
        non_passing_engagement.append(engagement_record)
###Output
_____no_output_____
###Markdown
Comparing the Two Student Groups
###Code
######################################
# 12 #
######################################
## Compute some metrics you're interested in and see how they differ for
## students who pass the subway project vs. students who don't. A good
## starting point would be the metrics we looked at earlier (minutes spent
## in the classroom, lessons completed, and days visited).
###Output
_____no_output_____
###Markdown
Making Histograms
###Code
######################################
# 13 #
######################################
## Make histograms of the three metrics we looked at earlier for both
## students who passed the subway project and students who didn't. You
## might also want to make histograms of any other metrics you examined.
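# Editor's sketch (one possible approach): histogram of total minutes in the first week for
# the passing group; the same pattern applies to non_passing_engagement and the other metrics.
%matplotlib inline
import matplotlib.pyplot as plt
from collections import defaultdict

passing_minutes_by_account = defaultdict(float)
for record in passing_engagement:
    passing_minutes_by_account[record['account_key']] += record['total_minutes_visited']
plt.hist(list(passing_minutes_by_account.values()))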
###Output
_____no_output_____
###Markdown
Improving Plots and Sharing Findings
###Code
######################################
# 14 #
######################################
## Make a more polished version of at least one of your visualizations
## from earlier. Try importing the seaborn library to make the visualization
## look better, adding axis labels and a title, and changing one or more
## arguments to the hist() function.
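# Editor's sketch (one possible approach): the same histogram with seaborn styling,
# axis labels, a title, and an explicit bin count.
import seaborn as sns
plt.hist(list(passing_minutes_by_account.values()), bins=20)
plt.xlabel('Total minutes spent in the classroom during the first week')
plt.ylabel('Number of students')
plt.title('First-week engagement of students who passed the subway project')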
###Output
_____no_output_____ |
datetime.ipynb | ###Markdown
Python Date Time module
###Code
# core module
import datetime as dt
dir(dt)
###Output
_____no_output_____
###Markdown
--- datetime.date* Only date
###Code
today = dt.date.today()
today
my_bday = dt.date(1992, 5, 15)
my_bday
###Output
_____no_output_____
###Markdown
Convert from UNIX ts
###Code
# 11/02/2012 @ 12:00am (UTC)
ts = 1328918400
my_date = dt.date.fromtimestamp(ts)
my_date
print(f'year: {my_date.year}')
print(f'month: {my_date.month}')
print(f'day: {my_date.day}')
###Output
year: 2012
month: 2
day: 11
###Markdown
datetime.time* Only time
###Code
zero_time = dt.time()
zero_time
t1 = dt.time(13, 22, 45, 837)
t1
print(f'H: {t1.hour}')
print(f'M: {t1.minute}')
print(f'S: {t1.second}')
print(f'μs: {t1.microsecond}')
###Output
H: 13
M: 22
S: 45
μs: 837
###Markdown
datetime.datetime* Combination of both date and time
###Code
from datetime import datetime
now = datetime.now()
now
###Output
_____no_output_____
###Markdown
> `datetime(year, month, day, hour, minute, second, microsecond)`
###Code
my_dt = datetime(2009, 10, 19, 14, 55, 34, 2344)
my_dt
print(my_dt)
print(f'year: {my_dt.year}')
print(f'month: {my_dt.month}')
print(f'day: {my_dt.day}')
print(f'hour: {my_dt.hour}')
print(f'minute: {my_dt.minute}')
print(f'second: {my_dt.second}')
print(f'microsecond: {my_dt.microsecond}')
print(f'UNIX ts: {my_dt.timestamp()}')
my_dt.time() # returns datetime.time
my_dt.date() # returns datetime.date
# 03/29/2009 @ 12:34pm (UTC)
ts = 1238330084
t2 = dt.fromtimestamp(ts)
t2
###Output
_____no_output_____
###Markdown
datetime.timedelta* time difference b/w two datetimes
###Code
from datetime import datetime, date, timedelta
t1 = date(year = 2018, month = 7, day = 12)
t2 = date(year = 2017, month = 12, day = 23)
delta = t1 - t2
print(delta)
today = datetime.now()
bday = datetime(1993, 5, 15, 12, 5, 0)
age = today - bday
print(age)
date_diff = timedelta(days = 5, hours = 1, seconds = 33, microseconds = 233423)
print(date_diff.total_seconds())
###Output
435633.233423
###Markdown
Format datetime* `strftime()` : Convert `datetime` to `str` format - The `strftime()` method is defined under classes `date`, `datetime` and `time`.* `strptime()` : Parse `str` date to `datetime` object Directive Meaning Example Notes %a Weekday as locale’s abbreviated name. Sun, Mon, …, Sat (en_US); So, Mo, …, Sa (de_DE) (1) %A Weekday as locale’s full name. Sunday, Monday, …, Saturday (en_US); Sonntag, Montag, …, Samstag (de_DE) (1) %w Weekday as a decimal number, where 0 is Sunday and 6 is Saturday. 0, 1, …, 6 %d Day of the month as a zero-padded decimal number. 01, 02, …, 31 %b Month as locale’s abbreviated name. Jan, Feb, …, Dec (en_US); Jan, Feb, …, Dez (de_DE) (1) %B Month as locale’s full name. January, February, …, December (en_US); Januar, Februar, …, Dezember (de_DE) (1) %m Month as a zero-padded decimal number. 01, 02, …, 12 %y Year without century as a zero-padded decimal number. 00, 01, …, 99 %Y Year with century as a decimal number. 1970, 1988, 2001, 2013 %H Hour (24-hour clock) as a zero-padded decimal number. 00, 01, …, 23 %I Hour (12-hour clock) as a zero-padded decimal number. 01, 02, …, 12 %p Locale’s equivalent of either AM or PM. AM, PM (en_US); am, pm (de_DE) (1), (2) %M Minute as a zero-padded decimal number. 00, 01, …, 59 %S Second as a zero-padded decimal number. 00, 01, …, 59 (3) %f Microsecond as a decimal number, zero-padded on the left. 000000, 000001, …, 999999 (4) %z UTC offset in the form +HHMM or -HHMM (empty string if the the object is naive). (empty), +0000, -0400, +1030 (5) %Z Time zone name (empty string if the object is naive). (empty), UTC, EST, CST %j Day of the year as a zero-padded decimal number. 001, 002, …, 366 %U Week number of the year (Sunday as the first day of the week) as a zero padded decimal number. All days in a new year preceding the first Sunday are considered to be in week 0. 00, 01, …, 53 (6) %W Week number of the year (Monday as the first day of the week) as a decimal number. All days in a new year preceding the first Monday are considered to be in week 0. 00, 01, …, 53 (6) %c Locale’s appropriate date and time representation. Tue Aug 16 21:30:00 1988 (en_US); Di 16 Aug 21:30:00 1988 (de_DE) (1) %x Locale’s appropriate date representation. 08/16/88 (None); 08/16/1988 (en_US); 16.08.1988 (de_DE) (1) %X Locale’s appropriate time representation. 21:30:00 (en_US); 21:30:00 (de_DE) (1) %% A literal '%' character. % **Notes:**1. Because the format depends on the current locale, care should be taken when making assumptions about the output value. Field orderings will vary (for example, “month/day/year” versus “day/month/year”), and the output may contain Unicode characters encoded using the locale’s default encoding (for example, if the current locale is ja_JP, the default encoding could be any one of eucJP, SJIS, or utf-8; use locale.getlocale() to determine the current locale’s encoding).2. When used with the strptime() method, the %p directive only affects the output hour field if the %I directive is used to parse the hour.3. Unlike the time module, the datetime module does not support leap seconds.4. %f is an extension to the set of format characters in the C standard (but implemented separately in datetime objects, and therefore always available). When used with the strptime() method, the %f directive accepts from one to six digits and zero pads on the right._New in version 2.6._5. For a naive object, the %z and %Z format codes are replaced by empty strings. 
For an aware object: %z utcoffset() is transformed into a 5-character string of the form +HHMM or -HHMM, where HH is a 2-digit string giving the number of UTC offset hours, and MM is a 2-digit string giving the number of UTC offset minutes. For example, if utcoffset() returns timedelta(hours=-3, minutes=-30), %z is replaced with the string '-0330'. %Z If tzname() returns None, %Z is replaced by an empty string. Otherwise %Z is replaced by the returned value, which must be a string.6. When used with the strptime() method, %U and %W are only used in calculations when the day of the week and the year are specified.
###Code
now = datetime.now()
###Output
_____no_output_____
###Markdown
Format Date
###Code
fmt_1 = '%d/%m/%Y %H:%M:%S'
now.strftime(fmt_1)
fmt_2 = '%A, %d %B %Y, %X'
now.strftime(fmt_2)
now.strftime('%c')
###Output
_____no_output_____
###Markdown
Parse Date
###Code
datetime.strptime('22 Apr 2019 15:35:31', '%d %b %Y %H:%M:%S')
datetime.strptime('22/08/2019', '%d/%m/%Y')
###Output
_____no_output_____
###Markdown
Time Zones
###Code
from datetime import datetime
import pytz
now = datetime.now()
now
now_utc = datetime.now(tz=pytz.UTC)
now_utc
la_tz = pytz.timezone("America/Los_Angeles")
la_tz.localize(now)
now_utc.strftime('%d/%m/%Y %H:%M:%S %Z')
now.strftime('%d/%m/%Y %H:%M:%S %Z') # no timezone info for `now` object
###Output
_____no_output_____
###Markdown
All timezones in `pytz`
###Code
pytz.all_timezones_set
###Output
_____no_output_____
###Markdown
Accessing & Grouping
###Code
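# Editor's note (sketch): `ts` is not defined in this excerpt. A plausible example series
# matching the selections below is an hourly pandas Series starting on 2011-01-01:
import numpy as np
import pandas as pd
ts = pd.Series(np.random.randn(72),
               index=pd.date_range('2011-01-01', periods=72, freq='H'))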
ts.mean()
ts.loc['2011-01-01 03:00:00' : '2011-01-02 21:00:00'].mean()
ts.resample('D').mean()
###Output
_____no_output_____
###Markdown
###Code
import datetime
y=datetime.datetime.now()
print(y.strftime("%y-%m-%d"))
###Output
_____no_output_____
###Markdown
datetime() Table of datetime format specifiers
###Code
from datetime import datetime
dt1 = datetime(2020, 8, 23, 10, 30, 10)
print(dt1)
from datetime import datetime
print(datetime.now().strftime("%d/%m/%Y %H:%M"))
###Output
03/06/2021 06:11
###Markdown
.hour
###Code
from datetime import datetime
dt1 = datetime(2020, 8, 23, 10, 30, 10)
print("Hora = ", dt1.hour)
###Output
Hour = 10
###Markdown
.minute
###Code
from datetime import datetime
dt1 = datetime(2020, 8, 23, 10, 30, 10)
print("Minuto = ", dt1.minute)
###Output
Minute = 30
###Markdown
.second
###Code
from datetime import datetime
dt1 = datetime(2020, 8, 23, 10, 30, 10)
print("Segundo = ", dt1.second)
###Output
Second = 10
###Markdown
.microsecond
###Code
from datetime import datetime
dt1 = datetime(2020, 8, 23, 10, 30, 10, 50)
print('Microsecond:', dt1.microsecond)
###Output
Microsecond: 50
###Markdown
.now()
###Code
from datetime import datetime
agora = datetime.now()
print(agora)
###Output
2021-06-03 06:28:23.306854
###Markdown
time()
###Code
from datetime import datetime, time
dt1 = datetime(2020, 8, 23, 10, 30, 10)
print(dt1.time())
###Output
10:30:10
###Markdown
today()
###Code
from datetime import date
a = date.today()
print('Today is: {}'.format(a))
###Output
Today is: 2021-06-14
###Markdown
.day
###Code
from datetime import date
a = date.today().day
print('The day is: {}'.format(a))
###Output
The day is: 14
###Markdown
.month
###Code
from datetime import date
a = date.today().month
print('The month is: {}'.format(a))
###Output
The month is: 6
###Markdown
.year
###Code
from datetime import date
a = date.today().year
print('We are in the year: {}'.format(a))
###Output
We are in the year: 2021
###Markdown
strptime()Parses a string in a given date/time format into a datetime object
###Code
from datetime import datetime
print(datetime.strptime('20200823', '%Y%m%d'))
data_e_hora_em_texto = '01/03/2018 12:30'
data_e_hora = datetime.strptime(data_e_hora_em_texto, '%d/%m/%Y %H:%M')
print(data_e_hora)
###Output
2020-08-23 00:00:00
2018-03-01 12:30:00
|
db2.ipynb | ###Markdown
DB2 Jupyter Notebook ExtensionsVersion: 2021-11-26This code is imported as a Jupyter notebook extension in any notebooks you create with DB2 code in it. Place the following line of code in any notebook that you want to use these commands with:%run db2.ipynbThis code defines a Jupyter/Python magic command called `%sql` which allows you to execute DB2 specific calls to the database. There are other packages available for manipulating databases, but this one has been specifically designed for demonstrating a number of the SQL features available in DB2.There are two ways of executing the `%sql` command. A single line SQL statement would use the line format of the magic command:%sql SELECT * FROM EMPLOYEEIf you have a large block of sql then you would place the %%sql command at the beginning of the block and then place the SQL statements into the remainder of the block. Using this form of the `%%sql` statement means that the notebook cell can only contain SQL and no other statements.%%sqlSELECT * FROM EMPLOYEEORDER BY LASTNAMEYou can have multiple lines in the SQL block (`%%sql`). The default SQL delimiter is the semi-colon (`;`).If you have scripts (triggers, procedures, functions) that use the semi-colon as part of the script, you will need to use the `-d` option to change the delimiter to an at "`@`" sign. %%sql -dSELECT * FROM EMPLOYEE@CREATE PROCEDURE ...@The `%sql` command allows most DB2 commands to execute and has a special version of the CONNECT statement. A CONNECT by itself will attempt to reconnect to the database using previously used settings. If it cannot connect, it will prompt the user for additional information. The CONNECT command has the following format:%sql CONNECT TO <database> USER <userid> USING <password | ?> HOST <ip address> PORT <port number> [SSL <filename>]If you use a "`?`" for the password field, the system will prompt you for a password. This avoids typing the password as clear text on the screen. If a connection is not successful, the system will print the error message associated with the connect request.If the connection is successful, the parameters are saved on your system and will be used the next time you run a SQL statement, or when you issue the %sql CONNECT command with no parameters.In addition to the -d option, there are a number of different options that you can specify at the beginning of the SQL: - `-d, -delim` - Change SQL delimiter to "`@`" from "`;`" - `-q, -quiet` - Quiet results - no messages returned from the function - `-r, -array` - Return the result set as an array of values instead of a dataframe - `-j` - Format the first character column of the result set as a JSON record - `-json` - Return result set as an array of json records - `-a, -all` - Return all rows in answer set and do not limit display - `-grid` - Display the results in a scrollable grid - `-e, -echo` - Any macro expansions are displayed in an output box You can pass python variables to the `%sql` command by using the `{}` braces with the name of the variable in between. Note that you will need to place proper punctuation around the variable in the event the SQL command requires it. For instance, the following example will find employee '000010' in the EMPLOYEE table.empno = '000010'%sql SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO='{empno}'The other option is to use parameter markers. What you would need to do is use the name of the variable with a colon in front of it and the program will prepare the statement and then pass the variable to Db2 when the statement is executed.
This allows you to create complex strings that might contain quote characters and other special characters and not have to worry about enclosing the string with the correct quotes. Note that you do not place the quotes around the variable even though it is a string.empno = '000020'%sql SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO=:empno Development SQLThe previous set of `%sql` and `%%sql` commands deals with SQL statements and commands that are run in an interactive manner. There is a class of SQL commands that are more suited to a development environment where code is iterated or requires changing input. The commands that are associated with this form of SQL are:- AUTOCOMMIT- COMMIT/ROLLBACK- PREPARE - EXECUTEAutocommit is the default manner in which SQL statements are executed. At the end of the successful completion of a statement, the results are committed to the database. There is no concept of a transaction where multiple DML/DDL statements are considered one transaction. The `AUTOCOMMIT` command allows you to turn autocommit `OFF` or `ON`. This means that the set of SQL commands run after the `AUTOCOMMIT OFF` command are not committed to the database until a `COMMIT` or `ROLLBACK` command is issued.`COMMIT` (`WORK`) will finalize all of the transactions (`COMMIT`) to the database and `ROLLBACK` will undo all of the changes. If you issue a `SELECT` statement during the execution of your block, the results will reflect all of your changes. If you `ROLLBACK` the transaction, the changes will be lost.`PREPARE` is typically used in a situation where you want to repeatedly execute a SQL statement with different variables without incurring the SQL compilation overhead. For instance:```x = %sql PREPARE SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO=?for y in ['000010','000020','000030']: %sql execute :x using :y````EXECUTE` is used to execute a previously compiled statement. To retrieve the error codes that might be associated with any SQL call, the following variables are updated after every call:* SQLCODE* SQLSTATE* SQLERROR - Full error message retrieved from Db2 Install Db2 Python DriverIf the ibm_db driver is not installed on your system, the subsequent Db2 commands will fail. In order to install the Db2 driver, issue the following command from a Jupyter notebook cell:```!pip install --user ibm_db``` Db2 Jupyter ExtensionsThis section of code has the import statements and global variables defined for the remainder of the functions.
###Code
#
# Set up Jupyter MAGIC commands "sql".
# %sql will return results from a DB2 select statement or execute a DB2 command
#
# IBM 2021: George Baklarz
# Version 2021-11-26
#
from __future__ import print_function
import multiprocessing
from IPython.display import HTML as pHTML, Image as pImage, display as pdisplay, Javascript as Javascript
from IPython.core.magic import (Magics, magics_class, line_magic,
cell_magic, line_cell_magic, needs_local_scope)
import ibm_db
import pandas
import ibm_db_dbi
import json
import getpass
import pickle
import time
import re
import warnings
import matplotlib
import matplotlib.pyplot as plt
warnings.filterwarnings("ignore")
_settings = {
"maxrows" : 10,
"maxgrid" : 5,
"display" : "PANDAS",
"threads" : 0,
"database" : "",
"hostname" : "localhost",
"port" : "50000",
"protocol" : "TCPIP",
"uid" : "DB2INST1",
"pwd" : "password",
"ssl" : "",
"passthru" : ""
}
_environment = {
"jupyter" : True,
"qgrid" : True
}
_display = {
'fullWidthRows': True,
'syncColumnCellResize': True,
'forceFitColumns': False,
'defaultColumnWidth': 150,
'rowHeight': 28,
'enableColumnReorder': False,
'enableTextSelectionOnCells': True,
'editable': False,
'autoEdit': False,
'explicitInitialization': True,
'maxVisibleRows': 5,
'minVisibleRows': 5,
'sortable': True,
'filterable': False,
'highlightSelectedCell': False,
'highlightSelectedRow': True
}
# Db2 and Pandas data types
_db2types = ["unknown",
"string",
"smallint",
"int",
"bigint",
"real",
"float",
"decfloat16",
"decfloat34",
"decimal",
"boolean",
"clob",
"blob",
"xml",
"date",
"time",
"timestamp"]
_pdtypes = ["object",
"string",
"Int16",
"Int32",
"Int64",
"float32",
"float64",
"float64",
"float64",
"float64",
"boolean",
"string",
"object",
"string",
"string",
"string",
"datetime64"]
# Connection settings for statements
_connected = False
_hdbc = None
_hdbi = None
_stmt = []
_stmtID = []
_stmtSQL = []
_vars = {}
_macros = {}
_flags = []
_debug = False
# Db2 Error Messages and Codes
sqlcode = 0
sqlstate = "0"
sqlerror = ""
sqlelapsed = 0
# Check to see if QGrid is installed
try:
import qgrid
qgrid.set_defaults(grid_options=_display)
except:
_environment['qgrid'] = False
if (_environment['qgrid'] == False):
print("Warning: QGRID is unavailable for displaying results in scrollable windows.")
print(" Install QGRID if you want to enable scrolling of result sets.")
# Check if we are running in iPython or Jupyter
try:
if (get_ipython().config == {}):
_environment['jupyter'] = False
_environment['qgrid'] = False
else:
_environment['jupyter'] = True
except:
_environment['jupyter'] = False
_environment['qgrid'] = False
# Check if pandas supports data types in the data frame - Introduced in 1.3 of pandas
_pandas_dtype = False
try:
_vrm = pandas.__version__.split(".")
_version = 0
_release = 0
_modlevel = 0
if (len(_vrm) >= 1):
_version = int(_vrm[0])
if (len(_vrm) >= 2):
_release = int(_vrm[1])
if (len(_vrm) >= 3):
_modlevel = int(_vrm[2])
if (_version >= 1 and _release >= 3):
_pandas_dtype = True
else:
_pandas_dtype = False
except:
_pandas_dtype = False
if (_pandas_dtype == False):
print("Warning: PANDAS level does not support Db2 typing which will can increase memory usage.")
print(" Install PANDAS version 1.3+ for more efficient dataframe creation.")
# Check if we have parallism available
_parallel = False
try:
import multiprocessing as mp
from multiprocessing.sharedctypes import Value, Array
_parallel = True
except:
_parallel = False
if (_parallel == False):
print("Warning: Parallelism is unavailable and THREADS option will be ignored.")
print(" Install MULTIPROCESSING if you want allow multiple SQL threads to run in parallel.")
_settings["threads"] = 0
#
# Set Options for the Db2 Magic Commands
#
def setOptions(inSQL):
global _settings, _display
cParms = inSQL.split()
cnt = 0
if (len(cParms) == 1):
print("(MAXROWS) Maximum number of rows displayed: " + str(_settings.get("maxrows",10)))
print("(MAXGRID) Maximum grid display size: " + str(_settings.get("maxgrid",5)))
print("(DISPLAY) Use PANDAS or GRID display format for output: " + _settings.get("display","PANDAS"))
print("(THREADS) Maximum number of threads to use when running SQL: " + str(_settings.get("threads",0)))
return
while cnt < len(cParms):
if cParms[cnt][0] == "?":
print("%sql OPTION MAXROWS n MAXGRID n DISPLAY n THREADS n")
print("LIST - List the current option settings")
print("MAXROWS n - The maximum number of rows displayed when returning results")
print("MAXGRID n - Maximum size of a scrollable GRID window")
print("THREADS n - Maximum number of parallel threads to use when running SQL")
return
if cParms[cnt].upper() == 'MAXROWS':
if cnt+1 < len(cParms):
try:
_settings["maxrows"] = int(cParms[cnt+1])
if (_settings["maxrows"] > 100 or _settings["maxrows"] <= 0):
_settings["maxrows"] = 100
except Exception as err:
errormsg("Invalid MAXROWS value provided.")
pass
cnt = cnt + 1
else:
errormsg("No maximum rows specified for the MAXROWS option.")
return
elif cParms[cnt].upper() == 'MAXGRID':
if cnt+1 < len(cParms):
try:
maxgrid = int(cParms[cnt+1])
if (maxgrid <= 5): # Minimum window size is 5
maxgrid = 5
_display["maxVisibleRows"] = int(maxgrid)
try:
qgrid.set_defaults(grid_options=_display)
_settings["maxgrid"] = maxgrid
except:
_environment['qgrid'] = False
except Exception as err:
errormsg("Invalid MAXGRID value provided.")
pass
cnt = cnt + 1
else:
errormsg("No maximum rows specified for the MAXGRID option.")
return
elif cParms[cnt].upper() == 'DISPLAY':
if cnt+1 < len(cParms):
if (cParms[cnt+1].upper() == 'GRID'):
_settings["display"] = 'GRID'
elif (cParms[cnt+1].upper() == 'PANDAS'):
_settings["display"] = 'PANDAS'
else:
errormsg("Invalid DISPLAY value provided.")
cnt = cnt + 1
else:
errormsg("No value provided for the DISPLAY option.")
return
elif cParms[cnt].upper() == 'THREADS':
if cnt+1 < len(cParms):
try:
threads = int(cParms[cnt+1])
if (threads < 0):
threads = 0
elif (threads > 12):
threads = 12
else:
pass
_settings["threads"] = threads
except Exception as err:
errormsg("Invalid THREADS value provided.")
pass
cnt = cnt + 1
else:
errormsg("No thread count specified for the THREADS option.")
return
elif (cParms[cnt].upper() == 'LIST'):
print("(MAXROWS) Maximum number of rows displayed: " + str(_settings.get("maxrows",10)))
print("(MAXGRID) Maximum grid display size: " + str(_settings.get("maxgrid",5)))
print("(DISPLAY) Use PANDAS or GRID display format for output: " + _settings.get("display","PANDAS"))
print("(THREADS) Maximum number of threads to use when running SQL: " + str(_settings.get("threads",0)))
return
else:
cnt = cnt + 1
save_settings()
print("(MAXROWS) Maximum number of rows displayed: " + str(_settings.get("maxrows",10)))
print("(MAXGRID) Maximum grid display size: " + str(_settings.get("maxgrid",5)))
print("(DISPLAY) Use PANDAS or GRID display format for output: " + _settings.get("display","PANDAS"))
print("(THREADS) Maximum number of threads to use when running SQL: " + str(_settings.get("threads",0)))
return
#
# Display help (link to documentation)
#
def sqlhelp():
global _environment
print("Db2 Magic Documentation: https://ibm.github.io/db2-jupyter/")
return
# Split port and IP addresses
def split_string(in_port,splitter=":"):
# Split input into an IP address and Port number
global _settings
checkports = in_port.split(splitter)
ip = checkports[0]
if (len(checkports) > 1):
port = checkports[1]
else:
port = None
return ip, port
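# For reference (example values, not from the original text): split_string("myhost:50001")
# returns ("myhost", "50001"), while split_string("myhost") returns ("myhost", None) so
# the caller can fall back to a default port.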
# Parse the CONNECT statement and execute if possible
def parseConnect(inSQL,local_ns):
global _settings, _connected
_connected = False
cParms = inSQL.split()
cnt = 0
_settings["ssl"] = ""
while cnt < len(cParms):
if cParms[cnt].upper() == 'TO':
if cnt+1 < len(cParms):
_settings["database"] = cParms[cnt+1].upper()
cnt = cnt + 1
else:
errormsg("No database specified in the CONNECT statement")
return
elif cParms[cnt].upper() == "SSL":
if cnt+1 < len(cParms):
if (cParms[cnt+1].upper() in ("ON","TRUE")):
_settings["ssl"] = "SECURITY=SSL;"
elif (cParms[cnt+1].upper() in ("OFF","FALSE")):
_settings["ssl"] = ""
elif (cParms[cnt+1] != ""):
cert = cParms[cnt+1]
_settings["ssl"] = "Security=SSL;SSLServerCertificate={cert};"
cnt = cnt + 1
else:
errormsg("No setting provided for the SSL option (ON | OFF | TRUE | FALSE | certifcate)")
return
elif cParms[cnt].upper() == 'CREDENTIALS':
if cnt+1 < len(cParms):
credentials = cParms[cnt+1]
if (credentials in local_ns):
tempid = eval(credentials,local_ns)
if (isinstance(tempid,dict) == False):
errormsg("The CREDENTIALS variable (" + credentials + ") does not contain a valid Python dictionary (JSON object)")
return
else:
tempid = None
if (tempid == None):
fname = credentials + ".pickle"
try:
with open(fname,'rb') as f:
_id = pickle.load(f)
except:
errormsg("Unable to find credential variable or file.")
return
else:
_id = tempid
try:
_settings["database"] = _id.get("db","")
_settings["hostname"] = _id.get("hostname","")
_settings["port"] = _id.get("port","50000")
_settings["uid"] = _id.get("username","")
_settings["pwd"] = _id.get("password","")
try:
fname = credentials + ".pickle"
with open(fname,'wb') as f:
pickle.dump(_id,f)
except:
errormsg("Failed trying to write Db2 Credentials.")
return
except:
errormsg("Credentials file is missing information. db/hostname/port/username/password required.")
return
else:
errormsg("No Credentials name supplied")
return
cnt = cnt + 1
elif cParms[cnt].upper() == 'USER':
if cnt+1 < len(cParms):
_settings["uid"] = cParms[cnt+1]
cnt = cnt + 1
else:
errormsg("No userid specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'USING':
if cnt+1 < len(cParms):
_settings["pwd"] = cParms[cnt+1]
if (_settings.get("pwd","?") == '?'):
_settings["pwd"] = getpass.getpass("Password [password]: ") or "password"
cnt = cnt + 1
else:
errormsg("No password specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'HOST':
if cnt+1 < len(cParms):
hostport = cParms[cnt+1]
ip, port = split_string(hostport)
_settings["port"] = "50000" if (port == None) else port
_settings["hostname"] = ip
cnt = cnt + 1
else:
errormsg("No hostname specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'PORT':
if cnt+1 < len(cParms):
_settings["port"] = cParms[cnt+1]
cnt = cnt + 1
else:
errormsg("No port specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'PASSTHRU':
if cnt+1 < len(cParms):
_settings["passthru"] = cParms[cnt+1]
cnt = cnt + 1
else:
errormsg("No passthru parameters specified in the CONNECT statement")
return
elif cParms[cnt].upper() in ('CLOSE','RESET') :
try:
result = ibm_db.close(_hdbc)
_hdbi.close()
except:
pass
success("Connection closed.")
if cParms[cnt].upper() == 'RESET':
_settings["database"] = ''
return
else:
cnt = cnt + 1
_ = db2_doConnect()
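# CONNECT forms handled above (illustrative sketch; the database, host, and credential
# names are placeholders):
#   %sql CONNECT TO SAMPLE USER db2inst1 USING ? HOST localhost PORT 50000
#   %sql CONNECT CREDENTIALS mycreds    -- a dict variable or a mycreds.pickle file
#   %sql CONNECT CLOSE | RESET          -- close the connection (RESET also clears the database)
# A password of "?" triggers a getpass prompt, and SSL ON|OFF|<certificate> controls the
# SECURITY=SSL portion of the DSN.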
def db2_doConnect():
global _hdbc, _hdbi, _connected
global _settings
if _connected == False:
if len(_settings.get("database","")) == 0:
return False
dsn = (
"DRIVER={{IBM DB2 ODBC DRIVER}};"
"DATABASE={0};"
"HOSTNAME={1};"
"PORT={2};"
"PROTOCOL=TCPIP;ConnectTimeout=15;"
"UID={3};"
"PWD={4};{5};{6}").format(_settings.get("database",""),
_settings.get("hostname",""),
_settings.get("port","50000"),
_settings.get("uid",""),
_settings.get("pwd",""),
_settings.get("ssl",""),
_settings.get("passthru",""))
# Get a database handle (hdbc) and a statement handle (hstmt) for subsequent access to DB2
try:
_hdbc = ibm_db.connect(dsn, "", "")
except Exception as err:
db2_error(False,True) # errormsg(str(err))
_connected = False
_settings["database"] = ''
return False
try:
_hdbi = ibm_db_dbi.Connection(_hdbc)
except Exception as err:
db2_error(False,True) # errormsg(str(err))
_connected = False
_settings["database"] = ''
return False
_connected = True
# Save the values for future use
save_settings()
success("Connection successful.")
return True
def load_settings():
# This routine will load the settings from the previous session if they exist
global _settings
fname = "db2connect.pickle"
try:
with open(fname,'rb') as f:
_settings = pickle.load(f)
_settings["maxgrid"] = 5
except:
pass
return
def save_settings():
# This routine will save the current settings if they exist
global _settings
fname = "db2connect.pickle"
try:
with open(fname,'wb') as f:
pickle.dump(_settings,f)
except:
errormsg("Failed trying to write Db2 Configuration Information.")
return
def db2_error(quiet,connect=False):
global sqlerror, sqlcode, sqlstate, _environment
try:
if (connect == False):
errmsg = ibm_db.stmt_errormsg().replace('\r',' ')
errmsg = errmsg[errmsg.rfind("]")+1:].strip()
else:
errmsg = ibm_db.conn_errormsg().replace('\r',' ')
errmsg = errmsg[errmsg.rfind("]")+1:].strip()
sqlerror = errmsg
msg_start = errmsg.find("SQLSTATE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlstate = errmsg[msg_start+9:msg_end]
else:
sqlstate = "0"
msg_start = errmsg.find("SQLCODE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlcode = errmsg[msg_start+8:msg_end]
try:
sqlcode = int(sqlcode)
except:
pass
else:
sqlcode = 0
except:
errmsg = "Unknown error."
sqlcode = -99999
sqlstate = "-99999"
sqlerror = errmsg
return
msg_start = errmsg.find("SQLSTATE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlstate = errmsg[msg_start+9:msg_end]
else:
sqlstate = "0"
msg_start = errmsg.find("SQLCODE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlcode = errmsg[msg_start+8:msg_end]
try:
sqlcode = int(sqlcode)
except:
pass
else:
sqlcode = 0
if quiet == True: return
if (errmsg == ""): return
html = '<p><p style="border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html+errmsg+"</p>"))
else:
print(errmsg)
# Print out an error message
def errormsg(message):
global _environment
if (message != ""):
html = '<p><p style="border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html + message + "</p>"))
else:
print(message)
def success(message):
if (message not in (None,"")):
print(message)
return
def debug(message,error=False):
global _environment
if (message in (None,"")):
return
if (_environment["jupyter"] == True):
spacer = "<br>" + " "
else:
spacer = "\n "
lines = message.split('\n')
msg = ""
indent = 0
for line in lines:
delta = line.count("(") - line.count(")")
if (msg == ""):
msg = line
indent = indent + delta
else:
if (delta < 0): indent = indent + delta
msg = msg + spacer * (indent*2) + line
if (delta > 0): indent = indent + delta
if (indent < 0): indent = 0
if (error == True):
html = '<p><pre style="font-family: monospace; border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
else:
html = '<p><pre style="font-family: monospace; border:2px; border-style:solid; border-color:#008000; background-color:#e6ffe6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html + msg + "</pre></p>"))
else:
print(msg)
return
def setMacro(inSQL,parms):
global _macros
names = parms.split()
if (len(names) < 2):
errormsg("No command name supplied.")
return None
macroName = names[1].upper()
_macros[macroName] = inSQL # inSQL.replace("\t"," ")
return
def checkMacro(in_sql):
global _macros
if (len(in_sql) == 0): return(in_sql) # Nothing to do
tokens = parseArgs(in_sql,None) # Take the string and reduce into tokens
macro_name = tokens[0].upper() # Uppercase the name of the token
if (macro_name not in _macros):
return(in_sql) # No macro by this name so just return the string
result = runMacro(_macros[macro_name],in_sql,tokens) # Execute the macro using the tokens we found
return(result) # Runmacro will either return the original SQL or the new one
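# Macro mechanics (a sketch; SHOWDEPT and its SQL body are made-up examples). A macro is
# defined with the cell magic:
#   %%sql define showdept
#   SELECT DEPTNO, DEPTNAME FROM DEPARTMENT WHERE DEPTNO = '{1}'
# A later "%sql showdept A00" is intercepted by checkMacro(), tokenized by parseArgs(),
# and expanded by runMacro() below, with {1} replaced by A00 and {argc} set to the
# number of arguments.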
def splitassign(arg):
var_name = "null"
var_value = "null"
arg = arg.strip()
eq = arg.find("=")
if (eq != -1):
var_name = arg[:eq].strip()
temp_value = arg[eq+1:].strip()
if (temp_value != ""):
ch = temp_value[0]
if (ch in ["'",'"']):
if (temp_value[-1:] == ch):
var_value = temp_value[1:-1]
else:
var_value = temp_value
else:
var_value = temp_value
else:
var_value = arg
return var_name, var_value
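# splitassign() examples (illustrative): "count=10" -> ("count", "10"),
# "name='Fred'" -> ("name", "Fred") with the surrounding quotes removed, and a bare
# token such as "10" -> ("null", "10") because no "=" is present.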
def parseArgs(argin,_vars):
quoteChar = ""
blockChar = ""
inQuote = False
inBlock = False
inArg = True
args = []
arg = ''
for ch in argin.lstrip():
if (inBlock == True):
if (ch == ")"):
inBlock = False
arg = arg + ch
else:
arg = arg + ch
elif (inQuote == True):
if (ch == quoteChar):
inQuote = False
arg = arg + ch #z
else:
arg = arg + ch
elif (ch == "("): # Do we have a block
arg = arg + ch
inBlock = True
elif (ch == "\"" or ch == "\'"): # Do we have a quote
quoteChar = ch
arg = arg + ch #z
inQuote = True
elif (ch == " "):
if (arg != ""):
arg = subvars(arg,_vars)
args.append(arg)
else:
args.append("null")
arg = ""
else:
arg = arg + ch
if (arg != ""):
arg = subvars(arg,_vars)
args.append(arg)
return(args)
def runMacro(script,in_sql,tokens):
result = ""
runIT = True
code = script.split("\n")
level = 0
runlevel = [True,False,False,False,False,False,False,False,False,False]
ifcount = 0
flags = ""
_vars = {}
for i in range(0,len(tokens)):
vstr = str(i)
_vars[vstr] = tokens[i]
if (len(tokens) == 0):
_vars["argc"] = "0"
else:
_vars["argc"] = str(len(tokens)-1)
for line in code:
line = line.strip()
if (line == "" or line == "\n"): continue
if (line[0] == "#"): continue # A comment line starts with a # in the first position of the line
args = parseArgs(line,_vars) # Get all of the arguments
if (args[0] == "if"):
ifcount = ifcount + 1
if (runlevel[level] == False): # You can't execute this statement
continue
level = level + 1
if (len(args) < 4):
print("Macro: Incorrect number of arguments for the if clause.")
return in_sql
arg1 = args[1]
arg2 = args[3]
if (len(arg2) > 2):
ch1 = arg2[0]
ch2 = arg2[-1:]
if (ch1 in ['"',"'"] and ch1 == ch2):
arg2 = arg2[1:-1].strip()
op = args[2]
if (op in ["=","=="]):
if (arg1 == arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<=","=<"]):
if (arg1 <= arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in [">=","=>"]):
if (arg1 >= arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<>","!="]):
if (arg1 != arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<"]):
if (arg1 < arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in [">"]):
if (arg1 > arg2):
runlevel[level] = True
else:
runlevel[level] = False
else:
print("Macro: Unknown comparison operator in the if statement:" + op)
continue
elif (args[0] in ["exit","echo"] and runlevel[level] == True):
msg = ""
for msgline in args[1:]:
if (msg == ""):
msg = subvars(msgline,_vars)
else:
msg = msg + " " + subvars(msgline,_vars)
if (msg != ""):
if (args[0] == "echo"):
debug(msg,error=False)
else:
debug(msg,error=True)
if (args[0] == "exit"): return ''
elif (args[0] == "pass" and runlevel[level] == True):
pass
elif (args[0] == "flags" and runlevel[level] == True):
if (len(args) > 1):
for i in range(1,len(args)):
flags = flags + " " + args[i]
flags = flags.strip()
elif (args[0] == "var" and runlevel[level] == True):
value = ""
for val in args[2:]:
if (value == ""):
value = subvars(val,_vars)
else:
value = value + " " + subvars(val,_vars)
value = value.strip()
_vars[args[1]] = value
elif (args[0] == 'else'):
if (ifcount == level):
runlevel[level] = not runlevel[level]
elif (args[0] == 'return' and runlevel[level] == True):
return(f"{flags} {result}")
elif (args[0] == "endif"):
ifcount = ifcount - 1
if (ifcount < level):
level = level - 1
if (level < 0):
print("Macro: Unmatched if/endif pairs.")
return ''
else:
if (runlevel[level] == True):
if (result == ""):
result = subvars(line,_vars)
else:
result = result + "\n" + subvars(line,_vars)
return(f"{flags} {result}")
def subvars(script,_vars):
if (_vars == None): return script
remainder = script
result = ""
done = False
while done == False:
bv = remainder.find("{")
if (bv == -1):
done = True
continue
ev = remainder.find("}")
if (ev == -1):
done = True
continue
result = result + remainder[:bv]
vvar = remainder[bv+1:ev].strip()
remainder = remainder[ev+1:]
modifier = ""
if (len(vvar) == 0):
errormsg(f"No variable name supplied in the braces {{}}")
return script
upper = False
allvars = False
concat = " "
if (len(vvar) > 1):
modifier = vvar[0]
if (modifier == "^"):
upper = True
vvar = vvar[1:]
elif (modifier == "*"):
vvar = vvar[1:]
allvars = True
concat = " "
elif (vvar[0] == ","):
vvar = vvar[1:]
allvars = True
concat = ","
else:
pass
if (vvar in _vars):
if (upper == True):
items = _vars[vvar].upper()
elif (allvars == True):
try:
iVar = int(vvar)
except:
return(script)
items = ""
sVar = str(iVar)
while sVar in _vars:
if (items == ""):
items = _vars[sVar]
else:
items = items + concat + _vars[sVar]
iVar = iVar + 1
sVar = str(iVar)
else:
items = _vars[vvar]
else:
if (allvars == True):
items = ""
else:
items = "null"
result = result + items
if (remainder != ""):
result = result + remainder
return(result)
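# Substitution markers understood by subvars() (descriptive note):
#   {n}    -> the n-th token passed to the macro ("null" if it does not exist)
#   {^n}   -> the token uppercased
#   {*n}   -> token n and all following tokens joined with blanks
#   {,n}   -> token n and all following tokens joined with commas
#   {argc} -> the argument count supplied by runMacro()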
def splitargs(arguments):
import types
# Strip the string and remove the ( and ) characters if they are at the beginning and end of the string
results = []
step1 = arguments.strip()
if (len(step1) == 0): return(results) # Not much to do here - no args found
if (step1[0] == '('):
if (step1[-1:] == ')'):
step2 = step1[1:-1]
step2 = step2.strip()
else:
step2 = step1
else:
step2 = step1
# Now we have a string without brackets. Start scanning for commas
quoteCH = ""
pos = 0
arg = ""
args = []
while pos < len(step2):
ch = step2[pos]
if (quoteCH == ""): # Are we in a quote?
if (ch in ('"',"'")): # Check to see if we are starting a quote
quoteCH = ch
arg = arg + ch
pos += 1
elif (ch == ","): # Are we at the end of a parameter?
arg = arg.strip()
args.append(arg)
arg = ""
inarg = False
pos += 1
else: # Continue collecting the string
arg = arg + ch
pos += 1
else:
if (ch == quoteCH): # Are we at the end of a quote?
arg = arg + ch # Add the quote to the string
pos += 1 # Increment past the quote
quoteCH = "" # Stop quote checking (maybe!)
else:
pos += 1
arg = arg + ch
if (quoteCH != ""): # So we didn't end our string
arg = arg.strip()
args.append(arg)
elif (arg != ""): # Something left over as an argument
arg = arg.strip()
args.append(arg)
else:
pass
results = []
for arg in args:
result = []
if (len(arg) > 0):
if (arg[0] in ('"',"'")):
value = arg[1:-1]
isString = True
isNumber = False
else:
isString = False
isNumber = False
try:
value = eval(arg)
if (type(value) == int):
isNumber = True
elif (isinstance(value,float) == True):
isNumber = True
else:
value = arg
except:
value = arg
else:
value = ""
isString = False
isNumber = False
result = [value,isString,isNumber]
results.append(result)
return results
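# splitargs() example (illustrative): splitargs("(42,'abc',3.14)") returns
# [[42, False, True], ['abc', True, False], [3.14, False, True]], where each entry is
# [value, isString, isNumber]; quoted items lose their quotes and numeric literals are eval()'d.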
def createDF(hdbc,hdbi,sqlin,local_ns):
import datetime
import ibm_db
global sqlcode, _settings, _parallel
NoDF = False
YesDF = True
if (hdbc == None or hdbi == None):
errormsg("You need to connect to a database before issuing this command.")
return NoDF, None
# Strip apart the command into tokens based on spaces
tokens = sqlin.split()
token_count = len(tokens)
if (token_count < 5): # Not enough parameters
errormsg("Insufficient arguments for USING command")
return NoDF, None
keyword_command = tokens[0].upper()
dfName = tokens[1]
keyword_create = tokens[2].upper()
keyword_table = tokens[3].upper()
table = tokens[4]
if (dfName not in local_ns):
errormsg("The variable ({dfName}) does not exist in the local variable list.")
return NoDF, None
try:
dfValue = eval(dfName,None,local_ns) # globals()[varName] # eval(varName)
except:
errormsg("The variable ({dfName}) does not contain a value.")
return NoDF, None
if (keyword_create in ("SELECT","WITH")):
if (_parallel == False):
errormsg("Parallelism is not availble on this system.")
return NoDF, None
thread_count = _settings.get("threads",0)
if (thread_count in (0,1)):
errormsg("The THREADS option is currently set to 0 or 1 which disables parallelism.")
return NoDF, None
ok, df = dfSQL(hdbc,hdbi,sqlin,dfName,dfValue,thread_count)
if (ok == False):
return NoDF, None
else:
return YesDF, df
if (isinstance(dfValue,pandas.DataFrame) == False): # Not a Pandas dataframe
errormsg("The variable ({dfName}) is not a Pandas dataframe.")
return NoDF, None
if (keyword_create not in ("CREATE","REPLACE","APPEND") or keyword_table != "TABLE"):
errormsg("Incorrect syntax: %sql using <df> create table <name> [options]")
return NoDF, None
if (token_count % 2 != 1):
errormsg("Insufficient arguments for USING command.")
return NoDF, None
flag_withdata = False
flag_asis = False
flag_float = False
flag_integer = False
limit = -1
for token_idx in range(5,token_count,2):
option_key = tokens[token_idx].upper()
option_val = tokens[token_idx+1].upper()
if (option_key == "WITH" and option_val == "DATA"):
flag_withdata = True
elif (option_key == "COLUMNS" and option_val == "ASIS"):
flag_asis = True
elif (option_key == "KEEP" and option_val == "FLOAT64"):
flag_float = True
elif (option_key == "KEEP" and option_val == "INT64"):
flag_integer = True
elif (option_key == "LIMIT"):
if (option_val.isnumeric() == False):
errormsg("The LIMIT must be a valid number from -1 (unlimited) to the maximun number of rows to insert")
return NoDF, None
limit = int(option_val)
else:
errormsg("Invalid options. Must be either WITH DATA | COLUMNS ASIS | KEEP FLOAT64 | KEEP FLOAT INT64")
return NoDF, None
if (keyword_create == "REPLACE"):
sql = f"DROP TABLE {table}"
ok = execSQL(hdbc,sql,quiet=True)
sql = []
columns = dict(dfValue.dtypes)
sql.append(f'CREATE TABLE {table} (')
datatypes = []
comma = ""
for column in columns:
datatype = str(columns[column])
datatype = datatype.upper()
if (datatype == "OBJECT"):
datapoint = dfValue[column][0]
if (isinstance(datapoint,datetime.datetime)):
type = "TIMESTAMP"
elif (isinstance(datapoint,datetime.time)):
type = "TIME"
elif (isinstance(datapoint,datetime.date)):
type = "DATE"
elif (isinstance(datapoint,float)):
if (flag_float == True):
type = "FLOAT"
else:
type = "DECFLOAT"
elif (isinstance(datapoint,int)):
if (flag_integer == True):
type = "BIGINT"
else:
type = "INTEGER"
elif (isinstance(datapoint,str)):
maxlength = dfValue[column].apply(str).apply(len).max()
type = f"VARCHAR({maxlength})"
else:
type = "CLOB"
elif (datatype == "INT64"):
type = "BIGINT"
elif (datatype == "INT32"):
type = "INT"
elif (datatype == "INT16"):
type = "SMALLINT"
elif (datatype == "FLOAT64"):
if (flag_float == True):
type = "FLOAT"
else:
type = "DECFLOAT"
elif (datatype == "FLOAT32"):
if (flag_float == True):
type = "REAL"
else:
type = "DECFLOAT"
elif ("DATETIME64" in datatype):
type = "TIMESTAMP"
elif (datatype == "BOOLEAN"):
type = "BINARY"
elif (datatype == "STRING"):
maxlength = dfValue[column].apply(str).apply(len).max()
type = f"VARCHAR({maxlength})"
else:
type = "CLOB"
datatypes.append(type)
if (flag_asis == False):
if (isinstance(column,str) == False):
column = str(column)
identifier = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_"
column_name = column.strip().upper()
new_name = ""
for ch in column_name:
if (ch not in identifier):
new_name = new_name + "_"
else:
new_name = new_name + ch
new_name = new_name.lstrip('_').rstrip('_')
if (new_name == "" or new_name[0] not in "ABCDEFGHIJKLMNOPQRSTUVWXYZ"):
new_name = f'"{column}"'
else:
new_name = f'"{column}"'
sql.append(f" {new_name} {type}")
sql.append(")")
sqlcmd = ""
for i in range(0,len(sql)):
if (i > 0 and i < len(sql)-2):
comma = ","
else:
comma = ""
sqlcmd = "{}\n{}{}".format(sqlcmd,sql[i],comma)
if (keyword_create != "APPEND"):
print(sqlcmd)
ok = execSQL(hdbc,sqlcmd,quiet=False)
if (ok == False):
return NoDF, None
if (flag_withdata == True or keyword_create == "APPEND"):
autocommit = ibm_db.autocommit(hdbc)
ibm_db.autocommit(hdbc,False)
row_count = 0
insert_sql = ""
rows, cols = dfValue.shape
for row in range(0,rows):
insert_row = ""
for col in range(0, cols):
value = dfValue.iloc[row][col]
value = str(value)
if (value.upper() in ("NAN","<NA>","NAT")):
value = "NULL"
else:
addquotes_flag = False
if (datatypes[col] == "CLOB" or "VARCHAR" in datatypes[col]):
addquotes_flag = True
elif (datatypes[col] in ("TIME","DATE","TIMESTAMP")):
addquotes_flag = True
elif (datatypes[col] in ("INTEGER","INT","SMALLINT","BIGINT","DECFLOAT","FLOAT","BINARY","REAL")):
addquotes_flag = False
else:
addquotes_flag = True
if (addquotes_flag == True):
value = addquotes(value,True)
if (insert_row == ""):
insert_row = f"{value}"
else:
insert_row = f"{insert_row},{value}"
if (insert_sql == ""):
insert_sql = f"INSERT INTO {table} VALUES ({insert_row})"
else:
insert_sql = f"{insert_sql},({insert_row})"
row_count += 1
if (row_count % 1000 == 0 or row_count == limit):
try:
result = ibm_db.exec_immediate(hdbc, insert_sql) # Run it
except:
db2_error(False)
return NoDF, None
ibm_db.commit(hdbc)
print(f"\r{row_count} of {rows} rows inserted.",end="")
insert_sql = ""
if (row_count == limit):
break
if (insert_sql != ""):
try:
result = ibm_db.exec_immediate(hdbc, insert_sql) # Run it
except:
db2_error(False)
return NoDF, None
ibm_db.commit(hdbc)
ibm_db.autocommit(hdbc,autocommit)
print("\nInsert completed.")
return NoDF, None
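# USING command forms accepted by createDF() (a summary of the parsing above; dataframe,
# list, and table names are placeholders):
#   %sql USING mydf CREATE TABLE mytab [WITH DATA] [COLUMNS ASIS] [KEEP FLOAT64] [KEEP INT64] [LIMIT n]
#   %sql USING mydf REPLACE TABLE mytab WITH DATA      -- drops the table first
#   %sql USING mydf APPEND TABLE mytab                 -- inserts rows into an existing table
#   %sql USING slices SELECT ... WHERE col = :slices   -- parallel SELECT via dfSQL() when THREADS > 1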
def sqlParser(sqlin,local_ns):
sql_cmd = ""
encoded_sql = sqlin
firstCommand = "(?:^\s*)([a-zA-Z]+)(?:\s+.*|$)"
findFirst = re.match(firstCommand,sqlin)
if (findFirst == None): # We did not find a match so we just return the empty string
return sql_cmd, encoded_sql
cmd = findFirst.group(1)
sql_cmd = cmd.upper()
#
# Scan the input string looking for variables in the format :var. If no : is found just return.
# Var must be alpha+number+_ to be valid
#
if (':' not in sqlin): # A quick check to see if parameters are in here, but not fool-proof!
return sql_cmd, encoded_sql
inVar = False
inQuote = ""
varName = ""
encoded_sql = ""
STRING = 0
NUMBER = 1
LIST = 2
RAW = 3
PANDAS = 5
for ch in sqlin:
if (inVar == True): # We are collecting the name of a variable
if (ch.upper() in "@_ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789[]"):
varName = varName + ch
continue
else:
if (varName == ""):
encoded_sql = encoded_sql + ":"
elif (varName[0] in ('[',']')):
encoded_sql = encoded_sql + ":" + varName
else:
if (ch == '.'): # If the variable name is stopped by a period, assume no quotes are used
flag_quotes = False
else:
flag_quotes = True
varValue, varType = getContents(varName,flag_quotes,local_ns)
if (varType != PANDAS and varValue == None):
encoded_sql = encoded_sql + ":" + varName
else:
if (varType == STRING):
encoded_sql = encoded_sql + varValue
elif (varType == NUMBER):
encoded_sql = encoded_sql + str(varValue)
elif (varType == RAW):
encoded_sql = encoded_sql + varValue
elif (varType == PANDAS):
insertsql = ""
coltypes = varValue.dtypes
rows, cols = varValue.shape
for row in range(0,rows):
insertrow = ""
for col in range(0, cols):
value = varValue.iloc[row][col]
if (coltypes[col] == "object"):
value = str(value)
value = addquotes(value,True)
else:
strvalue = str(value)
if ("NAN" in strvalue.upper()):
value = "NULL"
if (insertrow == ""):
insertrow = f"{value}"
else:
insertrow = f"{insertrow},{value}"
if (insertsql == ""):
insertsql = f"({insertrow})"
else:
insertsql = f"{insertsql},({insertrow})"
encoded_sql = encoded_sql + insertsql
elif (varType == LIST):
start = True
for v in varValue:
if (start == False):
encoded_sql = encoded_sql + ","
if (isinstance(v,int) == True): # Integer value
encoded_sql = encoded_sql + str(v)
elif (isinstance(v,float) == True):
encoded_sql = encoded_sql + str(v)
else:
flag_quotes = True
try:
if (v.find('0x') == 0): # Just guessing this is a hex value at beginning
encoded_sql = encoded_sql + v
else:
encoded_sql = encoded_sql + addquotes(v,flag_quotes) # String
except:
encoded_sql = encoded_sql + addquotes(str(v),flag_quotes)
start = False
encoded_sql = encoded_sql + ch
varName = ""
inVar = False
elif (inQuote != ""):
encoded_sql = encoded_sql + ch
if (ch == inQuote): inQuote = ""
elif (ch in ("'",'"')):
encoded_sql = encoded_sql + ch
inQuote = ch
elif (ch == ":"): # This might be a variable
varName = ""
inVar = True
else:
encoded_sql = encoded_sql + ch
if (inVar == True):
varValue, varType = getContents(varName,True,local_ns) # We assume the end of a line is quoted
if (varType != PANDAS and varValue == None):
encoded_sql = encoded_sql + ":" + varName
else:
if (varType == STRING):
encoded_sql = encoded_sql + varValue
elif (varType == RAW):
encoded_sql = encoded_sql + varValue
elif (varType == NUMBER):
encoded_sql = encoded_sql + str(varValue)
elif (varType == PANDAS):
insertsql = ""
coltypes = varValue.dtypes
rows, cols = varValue.shape
for row in range(0,rows):
insertrow = ""
for col in range(0, cols):
value = varValue.iloc[row][col]
if (coltypes[col] == "object"):
value = str(value)
value = addquotes(value,True)
else:
strvalue = str(value)
if ("NAN" in strvalue.upper()):
value = "NULL"
if (insertrow == ""):
insertrow = f"{value}"
else:
insertrow = f"{insertrow},{value}"
if (insertsql == ""):
insertsql = f"({insertrow})"
else:
insertsql = f"{insertsql},({insertrow})"
encoded_sql = encoded_sql + insertsql
elif (varType == LIST):
flag_quotes = True
start = True
for v in varValue:
if (start == False):
encoded_sql = encoded_sql + ","
if (isinstance(v,int) == True): # Integer value
encoded_sql = encoded_sql + str(v)
elif (isinstance(v,float) == True):
encoded_sql = encoded_sql + str(v)
else:
try:
if (v.find('0x') == 0): # Just guessing this is a hex value
encoded_sql = encoded_sql + v
else:
encoded_sql = encoded_sql + addquotes(v,flag_quotes) # String
except:
encoded_sql = encoded_sql + addquotes(str(v),flag_quotes)
start = False
return sql_cmd, encoded_sql
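# sqlParser() also substitutes :variables from the notebook namespace (sketch; empno and
# the table name are placeholders):
#   empno = '000010'
#   %sql SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO = :empno
# expands to ... WHERE EMPNO = '000010'. Lists and tuples become comma-separated values,
# dictionaries are serialized to quoted JSON, and a dataframe expands to (row),(row),...
# suitable for an INSERT ... VALUES clause.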
def plotData(hdbi, sql):
try:
df = pandas.read_sql(sql,hdbi)
except Exception as err:
db2_error(False)
return
if df.empty:
errormsg("No results returned")
return
col_count = len(df.columns)
if flag(["-pb","-bar"]): # Plot 1 = bar chart
if (col_count in (1,2,3)):
if (col_count == 1):
df.index = df.index + 1
_ = df.plot(kind='bar');
_ = plt.plot();
elif (col_count == 2):
xlabel = df.columns.values[0]
ylabel = df.columns.values[1]
df.plot(kind='bar',x=xlabel,y=ylabel);
_ = plt.plot();
else:
values = df.columns.values[2]
columns = df.columns.values[0]
index = df.columns.values[1]
pivoted = pandas.pivot_table(df, values=values, columns=columns, index=index)
_ = pivoted.plot.bar();
else:
errormsg("Can't determine what columns to plot")
return
elif flag(["-pp","-pie"]): # Plot 2 = pie chart
if (col_count in (1,2)):
if (col_count == 1):
df.index = df.index + 1
yname = df.columns.values[0]
_ = df.plot(kind='pie',y=yname);
else:
xlabel = df.columns.values[0]
xname = df[xlabel].tolist()
yname = df.columns.values[1]
_ = df.plot(kind='pie',y=yname,labels=xname);
plt.show();
else:
errormsg("Can't determine what columns to plot")
return
elif flag(["-pl","-line"]): # Plot 3 = line chart
if (col_count in (1,2,3)):
if (col_count == 1):
df.index = df.index + 1
_ = df.plot(kind='line');
elif (col_count == 2):
xlabel = df.columns.values[0]
ylabel = df.columns.values[1]
_ = df.plot(kind='line',x=xlabel,y=ylabel) ;
else:
values = df.columns.values[2]
columns = df.columns.values[0]
index = df.columns.values[1]
pivoted = pandas.pivot_table(df, values=values, columns=columns, index=index)
_ = pivoted.plot();
plt.show();
else:
errormsg("Can't determine what columns to plot")
return
else:
return
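# Plot flags consumed by plotData(): -pb/-bar, -pp/-pie and -pl/-line. With one result
# column the values are plotted against the row number; with two, the first column is used
# as labels/x and the second as values; bar and line charts additionally accept three
# columns, which are pivoted (values, columns, index) before plotting.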
def getContents(varName,flag_quotes,local_ns):
#
# Get the contents of the variable name that is passed to the routine. Only simple
# variables are checked, i.e. arrays and lists are not parsed
#
STRING = 0
NUMBER = 1
LIST = 2
RAW = 3
DICT = 4
PANDAS = 5
try:
value = eval(varName,None,local_ns) # globals()[varName] # eval(varName)
except:
return(None,STRING)
if (isinstance(value,dict) == True): # Check to see if this is JSON dictionary
return(addquotes(value,flag_quotes),STRING)
elif(isinstance(value,list) == True or isinstance(value,tuple) == True): # List - tricky
return(value,LIST)
elif (isinstance(value,pandas.DataFrame) == True): # Pandas dataframe
return(value,PANDAS)
elif (isinstance(value,int) == True): # Integer value
return(value,NUMBER)
elif (isinstance(value,float) == True): # Float value
return(value,NUMBER)
else:
try:
# The pattern needs to be in the first position (0 in Python terms)
if (value.find('0x') == 0): # Just guessing this is a hex value
return(value,RAW)
else:
return(addquotes(value,flag_quotes),STRING) # String
except:
return(addquotes(str(value),flag_quotes),RAW)
def addquotes(inString,flag_quotes):
if (isinstance(inString,dict) == True): # Check to see if this is JSON dictionary
serialized = json.dumps(inString)
else:
serialized = inString
# Replace single quotes with '' (two quotes) and wrap everything in single quotes
if (flag_quotes == False):
return(serialized)
else:
return("'"+serialized.replace("'","''")+"'") # Convert single quotes to two single quotes
def checkOption(args_in, option, vFalse=False, vTrue=True):
args_out = args_in.strip()
found = vFalse
if (args_out != ""):
if (args_out.find(option) >= 0):
args_out = args_out.replace(option," ")
args_out = args_out.strip()
found = vTrue
return args_out, found
def findProc(procname):
global _hdbc, _hdbi, _connected
# Split the procedure name into schema.procname if appropriate
upper_procname = procname.upper()
schema, proc = split_string(upper_procname,".") # Expect schema.procname
if (proc == None):
proc = schema
# Call ibm_db.procedures to see if the procedure does exist
schema = "%"
try:
stmt = ibm_db.procedures(_hdbc, None, schema, proc)
if (stmt == False): # Error executing the code
errormsg("Procedure " + procname + " not found in the system catalog.")
return None
result = ibm_db.fetch_tuple(stmt)
resultsets = result[5]
if (resultsets >= 1): resultsets = 1
return resultsets
except Exception as err:
errormsg("Procedure " + procname + " not found in the system catalog.")
return None
def parseCallArgs(macro):
quoteChar = ""
inQuote = False
inParm = False
ignore = False
name = ""
parms = []
parm = ''
sqlin = macro.replace("\n","")
sqlin.lstrip()
for ch in sqlin:
if (inParm == False):
# We hit a blank in the name, so ignore everything after the procedure name until a ( is found
if (ch == " "):
ignore = True
elif (ch == "("): # Now we have parameters to send to the stored procedure
inParm = True
else:
if (ignore == False): name = name + ch # The name of the procedure (and no blanks)
else:
if (inQuote == True):
if (ch == quoteChar):
inQuote = False
else:
parm = parm + ch
elif (ch in ("\"","\'","[")): # Do we have a quote
if (ch == "["):
quoteChar = "]"
else:
quoteChar = ch
inQuote = True
elif (ch == ")"):
if (parm != ""):
parms.append(parm)
parm = ""
break
elif (ch == ","):
if (parm != ""):
parms.append(parm)
else:
parms.append("null")
parm = ""
else:
parm = parm + ch
if (inParm == True):
if (parm != ""):
parms.append(parm)
return(name,parms)
def getColumns(stmt):
columns = []
types = []
colcount = 0
try:
colname = ibm_db.field_name(stmt,colcount)
coltype = ibm_db.field_type(stmt,colcount)
precision = ibm_db.field_precision(stmt,colcount)
while (colname != False):
if (coltype == "real"):
if (precision == 7):
coltype = "real"
elif (precision == 15):
coltype = "float"
elif (precision == 16):
coltype = "decfloat16"
elif (precision == 34):
coltype = "decfloat34"
else:
coltype = "real"
elif (coltype == "int"):
if (precision == 1):
coltype = "boolean"
elif (precision == 5):
coltype = "smallint"
elif (precision == 10):
coltype = "int"
else:
coltype = "int"
columns.append(colname)
types.append(coltype)
colcount += 1
colname = ibm_db.field_name(stmt,colcount)
coltype = ibm_db.field_type(stmt,colcount)
precision = ibm_db.field_precision(stmt,colcount)
return columns,types
except Exception as err:
db2_error(False)
return None
def parseCall(hdbc, inSQL, local_ns):
global _hdbc, _hdbi, _connected, _environment
# Check to see if we are connected first
if (_connected == False): # Check if you are connected
db2_doConnect()
if _connected == False: return None
remainder = inSQL.strip()
procName, procArgs = parseCallArgs(remainder[5:]) # Assume that CALL ... is the format
resultsets = findProc(procName)
if (resultsets == None): return None
argvalues = []
if (len(procArgs) > 0): # We have arguments to consider
for arg in procArgs:
varname = arg
if (len(varname) > 0):
if (varname[0] == ":"):
checkvar = varname[1:]
varvalue, vartype = getContents(checkvar,True,local_ns)
if (varvalue == None):
errormsg("Variable " + checkvar + " is not defined.")
return None
argvalues.append(varvalue)
else:
if (varname.upper() == "NULL"):
argvalues.append(None)
else:
argvalues.append(varname)
else:
argvalues.append(None)
try:
if (len(procArgs) > 0):
argtuple = tuple(argvalues)
result = ibm_db.callproc(_hdbc,procName,argtuple)
stmt = result[0]
else:
result = ibm_db.callproc(_hdbc,procName)
stmt = result
if (resultsets != 0 and stmt != None):
columns, types = getColumns(stmt)
if (columns == None): return None
rows = []
rowlist = ibm_db.fetch_tuple(stmt)
while ( rowlist ) :
row = []
colcount = 0
for col in rowlist:
try:
if (types[colcount] in ["int","bigint"]):
row.append(int(col))
elif (types[colcount] in ["decimal","real"]):
row.append(float(col))
elif (types[colcount] in ["date","time","timestamp"]):
row.append(str(col))
else:
row.append(col)
except:
row.append(col)
colcount += 1
rows.append(row)
rowlist = ibm_db.fetch_tuple(stmt)
if flag(["-r","-array"]):
rows.insert(0,columns)
if len(procArgs) > 0:
allresults = []
allresults.append(rows)
for x in result[1:]:
allresults.append(x)
return allresults # rows,returned_results
else:
return rows
else:
df = pandas.DataFrame.from_records(rows,columns=columns)
if flag("-grid") or _settings.get('display',"PANDAS") == 'GRID':
if (_environment['qgrid'] == False):
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
pdisplay(df)
else:
try:
pdisplay(qgrid.show_grid(df))
except:
errormsg("Grid cannot be used to display data with duplicate column names. Use option -a or %sql OPTION DISPLAY PANDAS instead.")
return
else:
if flag(["-a","-all"]) or _settings.get("maxrows",10) == -1 : # All of the rows
with pandas.option_context('display.max_rows', 100, 'display.max_columns', None):
pdisplay(df)
else:
return df
else:
if len(procArgs) > 0:
allresults = []
for x in result[1:]:
allresults.append(x)
return allresults # rows,returned_results
else:
return None
except Exception as err:
db2_error(False)
return None
def parsePExec(hdbc, inSQL):
import ibm_db
global _stmt, _stmtID, _stmtSQL, sqlcode
cParms = inSQL.split()
parmCount = len(cParms)
if (parmCount == 0): return(None) # Nothing to do but this shouldn't happen
keyword = cParms[0].upper() # Upper case the keyword
if (keyword == "PREPARE"): # Prepare the following SQL
uSQL = inSQL.upper()
found = uSQL.find("PREPARE")
sql = inSQL[found+7:].strip()
try:
pattern = "\?\*[0-9]+"
findparm = re.search(pattern,sql)
while findparm != None:
found = findparm.group(0)
count = int(found[2:])
markers = ('?,' * count)[:-1]
sql = sql.replace(found,markers)
findparm = re.search(pattern,sql)
stmt = ibm_db.prepare(hdbc,sql) # Check error code here
if (stmt == False):
db2_error(False)
return(False)
stmttext = str(stmt).strip()
stmtID = stmttext[33:48].strip()
if (stmtID in _stmtID) == False:
_stmt.append(stmt) # Prepare and return STMT to caller
_stmtID.append(stmtID)
else:
stmtIX = _stmtID.index(stmtID)
_stmt[stmtIX] = stmt
return(stmtID)
except Exception as err:
print(err)
db2_error(False)
return(False)
if (keyword == "EXECUTE"): # Execute the prepare statement
if (parmCount < 2): return(False) # No stmtID available
stmtID = cParms[1].strip()
if (stmtID in _stmtID) == False:
errormsg("Prepared statement not found or invalid.")
return(False)
stmtIX = _stmtID.index(stmtID)
stmt = _stmt[stmtIX]
try:
if (parmCount == 2): # Only the statement handle available
result = ibm_db.execute(stmt) # Run it
elif (parmCount == 3): # Not quite enough arguments
errormsg("Missing or invalid USING clause on EXECUTE statement.")
sqlcode = -99999
return(False)
else:
using = cParms[2].upper()
if (using != "USING"): # Bad syntax again
errormsg("Missing USING clause on EXECUTE statement.")
sqlcode = -99999
return(False)
uSQL = inSQL.upper()
found = uSQL.find("USING")
parmString = inSQL[found+5:].strip()
parmset = splitargs(parmString)
if (len(parmset) == 0):
errormsg("Missing parameters after the USING clause.")
sqlcode = -99999
return(False)
parm_count = 0
parms = []
parms.append(None)
CONSTANT = 0
VARIABLE = 1
const = [0]
const_cnt = 0
for v in parmset:
parm_count = parm_count + 1
parms.append(None)
if (v[1] == True or v[2] == True): # v[1] true if string, v[2] true if num
parm_type = CONSTANT
const_cnt = const_cnt + 1
if (v[2] == True):
if (isinstance(v[0],int) == True): # Integer value
sql_type = ibm_db.SQL_INTEGER
elif (isinstance(v[0],float) == True): # Float value
sql_type = ibm_db.SQL_DOUBLE
else:
sql_type = ibm_db.SQL_INTEGER
else:
sql_type = ibm_db.SQL_CHAR
const.append(v[0])
else:
parm_type = VARIABLE
# See if the variable has a type associated with it varname@type
varset = v[0].split("@")
parm_name = varset[0]
parm_datatype = "char"
# Does the variable exist?
if (parm_name not in globals()):
errormsg("SQL Execute parameter " + parm_name + " not found")
sqlcode = -99999
return(False)
parms[parm_count] = globals()[parm_name]
if (len(varset) > 1): # Type provided
parm_datatype = varset[1]
if (parm_datatype == "dec" or parm_datatype == "decimal"):
sql_type = ibm_db.SQL_DOUBLE
elif (parm_datatype == "bin" or parm_datatype == "binary"):
sql_type = ibm_db.SQL_BINARY
elif (parm_datatype == "int" or parm_datatype == "integer"):
sql_type = ibm_db.SQL_INTEGER
else:
sql_type = ibm_db.SQL_CHAR
parms[parm_count] = addquotes(parms[parm_count],False)
try:
if (parm_type == VARIABLE):
result = ibm_db.bind_param(stmt, parm_count, parms[parm_count], ibm_db.SQL_PARAM_INPUT, sql_type)
# result = ibm_db.bind_param(stmt, parm_count, globals()[parm_name], ibm_db.SQL_PARAM_INPUT, sql_type)
else:
result = ibm_db.bind_param(stmt, parm_count, const[const_cnt], ibm_db.SQL_PARAM_INPUT, sql_type)
except Exception as e:
print(repr(e))
result = False
if (result == False):
errormsg("SQL Bind on variable " + parm_name + " failed.")
sqlcode = -99999
return(False)
result = ibm_db.execute(stmt) # ,tuple(parms))
if (result == False):
errormsg("SQL Execute failed.")
return(False)
if (ibm_db.num_fields(stmt) == 0): return(True) # Command successfully completed
return(fetchResults(stmt))
except Exception as err:
db2_error(False)
return(False)
return(False)
return(False)
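# parsePExec() implements two commands (descriptive note): "PREPARE <sql>" caches the
# prepared statement and returns its ID, and "EXECUTE <id> [USING parm,parm,...]" runs it.
# A "?*n" marker in the prepared text expands to n parameter markers, and a "var@type"
# argument (type = char | dec/decimal | bin/binary | int/integer) selects the bind type.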
def fetchResults(stmt):
global sqlcode
rows = []
columns, types = getColumns(stmt)
# By default we assume that the data will be an array
is_array = True
# Check what type of data we want returned - array or json
if (flag(["-r","-array"]) == False):
# See if we want it in JSON format, if not it remains as an array
if (flag("-json") == True):
is_array = False
# Set column names to lowercase for JSON records
if (is_array == False):
columns = [col.lower() for col in columns] # Convert to lowercase for ease of access
# First row of an array has the column names in it
if (is_array == True):
rows.append(columns)
result = ibm_db.fetch_tuple(stmt)
rowcount = 0
while (result):
rowcount += 1
if (is_array == True):
row = []
else:
row = {}
colcount = 0
for col in result:
try:
if (types[colcount] in ["int","bigint"]):
if (is_array == True):
row.append(int(col))
else:
row[columns[colcount]] = int(col)
elif (types[colcount] in ["decimal","real"]):
if (is_array == True):
row.append(float(col))
else:
row[columns[colcount]] = float(col)
elif (types[colcount] in ["date","time","timestamp"]):
if (is_array == True):
row.append(str(col))
else:
row[columns[colcount]] = str(col)
else:
if (is_array == True):
row.append(col)
else:
row[columns[colcount]] = col
except:
if (is_array == True):
row.append(col)
else:
row[columns[colcount]] = col
colcount += 1
rows.append(row)
result = ibm_db.fetch_tuple(stmt)
if (rowcount == 0):
sqlcode = 100
else:
sqlcode = 0
return rows
def parseCommit(sql):
global _hdbc, _hdbi, _connected, _stmt, _stmtID, _stmtSQL
if (_connected == False): return # Nothing to do if we are not connected
cParms = sql.split()
if (len(cParms) == 0): return # Nothing to do but this shouldn't happen
keyword = cParms[0].upper() # Upper case the keyword
if (keyword == "COMMIT"): # Commit the work that was done
try:
result = ibm_db.commit (_hdbc) # Commit the connection
if (len(cParms) > 1):
keyword = cParms[1].upper()
if (keyword == "HOLD"):
return
del _stmt[:]
del _stmtID[:]
except Exception as err:
db2_error(False)
return
if (keyword == "ROLLBACK"): # Rollback the work that was done
try:
result = ibm_db.rollback(_hdbc) # Rollback the connection
del _stmt[:]
del _stmtID[:]
except Exception as err:
db2_error(False)
return
if (keyword == "AUTOCOMMIT"): # Is autocommit on or off
if (len(cParms) > 1):
op = cParms[1].upper() # Need ON or OFF value
else:
return
try:
if (op == "OFF"):
ibm_db.autocommit(_hdbc, False)
elif (op == "ON"):
ibm_db.autocommit (_hdbc, True)
return
except Exception as err:
db2_error(False)
return
return
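# Transaction keywords recognized by parseCommit(): COMMIT [HOLD], ROLLBACK, and
# AUTOCOMMIT ON|OFF. COMMIT without HOLD (and ROLLBACK) also discards any statements
# prepared with PREPARE, since their handles are deleted from _stmt/_stmtID.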
def setFlags(inSQL,reset=False):
global _flags
if (reset == True):
_flags = [] # Delete all of the current flag settings
pos = 0
end = len(inSQL)-1
inFlag = False
ignore = False
outSQL = ""
flag = ""
while (pos <= end):
ch = inSQL[pos]
if (ignore == True):
outSQL = outSQL + ch
else:
if (inFlag == True):
if (ch != " "):
flag = flag + ch
else:
_flags.append(flag)
inFlag = False
else:
if (ch == "-"):
flag = "-"
inFlag = True
elif (ch == ' '):
outSQL = outSQL + ch
else:
outSQL = outSQL + ch
ignore = True
pos += 1
if (inFlag == True):
_flags.append(flag)
return outSQL
def flag(inflag):
global _flags
if isinstance(inflag,list):
for x in inflag:
if (x in _flags):
return True
return False
else:
if (inflag in _flags):
return True
else:
return False
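# Flag handling sketch (example values): setFlags("-a -grid SELECT * FROM X") records
# ["-a","-grid"] in _flags and returns "SELECT * FROM X"; flag("-grid") or
# flag(["-a","-all"]) then reports whether one of those switches was supplied on the %sql line.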
def execSQL(hdbc,sql,quiet=True):
success = True
try: # See if we have an answer set
stmt = ibm_db.prepare(hdbc,sql)
result = ibm_db.execute(stmt) # Run it
if (result == False): # Error executing the code
db2_error(quiet)
success = False
except:
db2_error(quiet)
success = False
return success
def splitSQL(inputString, delimiter):
pos = 0
arg = ""
results = []
quoteCH = ""
inSQL = inputString.strip()
if (len(inSQL) == 0): return(results) # Not much to do here - no args found
while pos < len(inSQL):
ch = inSQL[pos]
pos += 1
if (ch in ('"',"'")): # Is this a quote characters?
arg = arg + ch # Keep appending the characters to the current arg
if (ch == quoteCH): # Is this quote character we are in
quoteCH = ""
elif (quoteCH == ""): # Create the quote
quoteCH = ch
else:
None
elif (quoteCH != ""): # Still in a quote
arg = arg + ch
elif (ch == delimiter): # Is there a delimiter?
results.append(arg)
arg = ""
else:
arg = arg + ch
if (arg != ""):
results.append(arg)
return(results)
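# splitSQL() example (illustrative): splitSQL("SELECT ';' FROM X; VALUES 1", ";") returns
# ["SELECT ';' FROM X", " VALUES 1"]; delimiters inside quoted strings are kept, which is
# why the -d flag switches the statement delimiter to "@" for SQL PL blocks that contain
# semicolons.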
def process_slice(connection, dfName, dfValue, pd_dtypes, sql, q, s):
import numpy as np
import pandas as pd
if (q.empty() == False): return None
if (isinstance(dfValue,list) == True or isinstance(dfValue,tuple) == True):
encoded_sql = ""
start = True
for v in dfValue:
if (start == False):
encoded_sql = encoded_sql + ","
if (isinstance(v,str) == True):
encoded_sql = encoded_sql + addquotes(v,True)
else:
encoded_sql = encoded_sql + str(v)
start = False
dfValue = encoded_sql
elif (isinstance(dfValue,str) == True):
dfValue = addquotes(dfValue,True)
else:
dfValue = str(dfValue)
if (q.empty() == False): return None
dsn = (
"DRIVER={{IBM DB2 ODBC DRIVER}};"
"DATABASE={0};"
"HOSTNAME={1};"
"PORT={2};"
"PROTOCOL=TCPIP;ConnectTimeout=15;"
"UID={3};"
"PWD={4};{5};{6}").format(_settings.get("database",""),
_settings.get("hostname",""),
_settings.get("port","50000"),
_settings.get("uid",""),
_settings.get("pwd",""),
_settings.get("ssl",""),
_settings.get("passthru",""))
# Get a database handle (hdbc) and a statement handle (hstmt) for subsequent access to Db2
try:
hdbc = ibm_db.connect(dsn, "", "")
except Exception as err:
try:
errmsg = ibm_db.conn_errormsg().replace('\r',' ')
errmsg = errmsg[errmsg.rfind("]")+1:].strip()
except:
errmsg = "Error attempting to retrieve error message"
q.put(errmsg)
return None
try:
hdbi = ibm_db_dbi.Connection(hdbc)
except Exception as err:
errmsg = "Connection error when connecting through DBI adapter."
q.put(errmsg)
return None
if (q.empty() == False): return None
# if (isinstance(dfValue,str) == True):
# dfValue = addquotes(dfValue,True)
# else:
# dfValue = str(dfValue)
protoSQL = sql.replace(f":{dfName}",dfValue)
s.put(protoSQL)
if (q.empty() == False): return None
try:
if (pd_dtypes != None):
df = pd.read_sql_query(protoSQL,hdbi,dtype=pd_dtypes)
else:
df = pd.read_sql_query(protoSQL,hdbi)
except:
try:
errmsg = ibm_db.stmt_errormsg().replace('\r',' ')
errmsg = errmsg[errmsg.rfind("]")+1:].strip()
ibm_db.close(hdbc)
except:
errmsg = "Error attempting to retrieve statement error message."
q.put(errmsg)
return None
if (q.empty() == False): return None
try:
ibm_db.close(hdbc)
except:
pass
return df
def dfSQL(hdbc,hdbi,sqlin,dfName,dfValue,thread_count):
import shlex
NoDF = False
YesDF = True
sqlin = " ".join(shlex.split(sqlin))
if (hdbc == None or hdbi == None or sqlin in (None, "")):
return NoDF,None
uSQLin = sqlin.upper()
select_location = uSQLin.find("SELECT")
with_location = uSQLin.find("WITH")
if (select_location == -1):
errormsg("SQL statement does not contain a SELECT statement.")
return NoDF, None
if (with_location != -1 and (with_location < select_location)):
keyword_location = with_location
else:
keyword_location = select_location
sql = sqlin[keyword_location:]
keyword_location = sql.find(f":{dfName}")
if (keyword_location == -1):
errormsg(f"The parallelism value ({dfName}) was not found in the SQL statement")
return NoDF, None
if (isinstance(dfValue,list) == False):
errormsg(f"The variable {dfName} is not an array or a list of values.")
return NoDF, None
# Create a prototype statement to make sure the SQL will run
protoValue = dfValue[0]
if (isinstance(protoValue,list) == True or isinstance(protoValue,tuple) == True):
if (len(protoValue) == 0):
errormsg(f"The variable {dfName} contains array values that are empty.")
return NoDF, None
protoValue = protoValue[0]
if (isinstance(protoValue,str) == True):
protoValue = addquotes(protoValue,True)
else:
protoValue = str(protoValue)
protoSQL = sql.replace(f":{dfName}",protoValue)
try:
stmt = ibm_db.prepare(hdbc,protoSQL)
if (ibm_db.num_fields(stmt) == 0):
errormsg("The SQL statement does not return an answer set.")
return NoDF, None
except Exception as err:
db2_error(False)
return NoDF, None
# Determine the datatypes for a Pandas dataframe if it is supported
pd_dtypes = None
if (_pandas_dtype == True):
pd_dtypes = None
columns, types = getColumns(stmt)
pd_dtypes={}
for idx, col in enumerate(columns):
try:
_dindex = _db2types.index(types[idx])
except:
_dindex = 0
pd_dtypes[col] = _pdtypes[_dindex]
if len(pd_dtypes.keys()) == 0:
pd_dtypes = None
pool = mp.Pool(processes=thread_count)
m = mp.Manager()
q = m.Queue()
tracesql = m.Queue()
try:
results = [pool.apply_async(process_slice, args=(_settings,dfName,x,pd_dtypes,sql,q,tracesql,)) for x in dfValue]
except Exception as err:
print(repr(err))
return NoDF, None
output=[]
badresults = False
for p in results:
try:
df = p.get()
if (isinstance(df,pandas.DataFrame) == True):
output.append(df)
else:
badresults = True
except Exception as err:
print(repr(err))
badresults = True
if flag(["-e","-echo"]):
while (tracesql.empty() == False):
debug(tracesql.get(),False)
if (badresults == True):
if (q.empty() == False):
errormsg(q.get())
return NoDF, None
finaldf = pandas.concat(output)
finaldf.reset_index(drop=True, inplace=True)
if (len(finaldf) == 0):
sqlcode = 100
errormsg("No rows found")
return NoDF, None
return YesDF, finaldf
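# Parallel query sketch (placeholder names; requires "%sql OPTION THREADS n" with n > 1):
#   slices = [2000, 2001, 2002, 2003]
#   %sql USING slices SELECT * FROM SALES WHERE YEAR = :slices
# dfSQL() prepares the statement once to validate it, then process_slice() runs one copy
# of the query per list element in a multiprocessing pool and the partial dataframes are
# concatenated into the single result that is returned.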
@magics_class
class DB2(Magics):
@needs_local_scope
@line_cell_magic
def sql(self, line, cell=None, local_ns=None):
# Before we even get started, check to see if you have connected yet. Without a connection we
# can't do anything. You may have a connection request in the code, so if that is true, we run those,
# otherwise we connect immediately
# If your statement is not a connect, and you haven't connected, we need to do it for you
global _settings, _environment
global _hdbc, _hdbi, _connected, sqlstate, sqlerror, sqlcode, sqlelapsed
# If you use %sql (line) we just run the SQL. If you use %%SQL the entire cell is run.
flag_cell = False
flag_output = False
sqlstate = "0"
sqlerror = ""
sqlcode = 0
sqlelapsed = 0
start_time = time.time()
# Macros gets expanded before anything is done
SQL1 = line.replace("\n"," ").strip()
SQL1 = setFlags(SQL1,reset=True)
SQL1 = checkMacro(SQL1) # Update the SQL if any macros are in there
SQL1 = setFlags(SQL1)
SQL2 = cell
if SQL1 == "?" or flag(["-h","-help"]): # Are you asking for help
sqlhelp()
return
if len(SQL1) == 0 and SQL2 == None: return # Nothing to do here
# Check for help
sqlType,remainder = sqlParser(SQL1,local_ns) # What type of command do you have?
if (sqlType == "CONNECT"): # A connect request
parseConnect(SQL1,local_ns)
return
elif (sqlType == "USING"): # You want to use a dataframe to create a table?
pdReturn, df = createDF(_hdbc,_hdbi, SQL1,local_ns)
if (pdReturn == True):
if flag("-grid") or _settings.get('display',"PANDAS") == 'GRID': # Check to see if we can display the results
if (_environment['qgrid'] == False):
with pandas.option_context('display.max_rows', 100, 'display.max_columns', None):
print(df.to_string())
else:
try:
pdisplay(qgrid.show_grid(df))
except:
errormsg("Grid cannot be used to display data with duplicate column names. Use option -a or %sql OPTION DISPLAY PANDAS instead.")
return
else:
if flag(["-a","-all"]) or _settings.get("maxrows",10) == -1 : # All of the rows
pandas.options.display.max_rows = 100
pandas.options.display.max_columns = None
return df # print(df.to_string())
else:
pandas.options.display.max_rows = _settings.get("maxrows",10)
pandas.options.display.max_columns = None
return df # pdisplay(df) # print(df.to_string())
else:
return
elif (sqlType == "DEFINE"): # Create a macro from the body
result = setMacro(SQL2,remainder)
return
elif (sqlType in ("OPTION","OPTIONS")):
setOptions(SQL1)
return
elif (sqlType == 'COMMIT' or sqlType == 'ROLLBACK' or sqlType == 'AUTOCOMMIT'):
parseCommit(remainder)
return
elif (sqlType == "PREPARE"):
pstmt = parsePExec(_hdbc, remainder)
return(pstmt)
elif (sqlType == "EXECUTE"):
result = parsePExec(_hdbc, remainder)
return(result)
elif (sqlType == "CALL"):
result = parseCall(_hdbc, remainder, local_ns)
return(result)
else:
pass
sql = SQL1
if (sql == ""): sql = SQL2
if (sql == ""): return # Nothing to do here
if (_connected == False):
if (db2_doConnect() == False):
errormsg('A CONNECT statement must be issued before issuing SQL statements.')
return
if _settings.get("maxrows",10) == -1: # Set the return result size
pandas.reset_option('display.max_rows')
else:
pandas.options.display.max_rows = _settings.get("maxrows",10)
runSQL = re.sub('.*?--.*$',"",sql,flags=re.M)
remainder = runSQL.replace("\n"," ")
if flag(["-d","-delim"]):
sqlLines = splitSQL(remainder,"@")
else:
sqlLines = splitSQL(remainder,";")
flag_cell = True
# For each line figure out if you run it as a command (db2) or select (sql)
for sqlin in sqlLines: # Run each command
sqlin = checkMacro(sqlin) # Update based on any macros
sqlType, sql = sqlParser(sqlin,local_ns) # Parse the SQL
if (sql.strip() == ""): continue
if flag(["-e","-echo"]):
debug(sql,False)
if flag(["-pb","-bar","-pp","-pie","-pl","-line"]): # We are plotting some results
plotData(_hdbi, sql) # Plot the data and return
return
try: # See if we have an answer set
stmt = ibm_db.prepare(_hdbc,sql)
if (ibm_db.num_fields(stmt) == 0): # No, so we just execute the code
start_time = time.time()
result = ibm_db.execute(stmt) # Run it
sqlelapsed = time.time() - start_time
if (result == False): # Error executing the code
db2_error(flag(["-q","-quiet"]))
continue
rowcount = ibm_db.num_rows(stmt)
if (rowcount == 0 and flag(["-q","-quiet"]) == False):
errormsg("No rows found.")
continue # Continue running
elif flag(["-r","-array","-j","-json"]): # raw, json, format json
row_count = 0
resultSet = []
try:
start_time = time.time()
result = ibm_db.execute(stmt) # Run it
sqlelapsed = time.time() - start_time
if (result == False): # Error executing the code
db2_error(flag(["-q","-quiet"]))
return
if flag("-j"): # JSON single output
row_count = 0
json_results = []
while( ibm_db.fetch_row(stmt) ):
row_count = row_count + 1
jsonVal = ibm_db.result(stmt,0)
jsonDict = json.loads(jsonVal)
json_results.append(jsonDict)
flag_output = True
if (row_count == 0): sqlcode = 100
return(json_results)
else:
return(fetchResults(stmt))
except Exception as err:
db2_error(flag(["-q","-quiet"]))
return
else:
# New for pandas 1.3. We can coerce the PD datatypes to mimic those of Db2
pd_dtypes = None
if (_pandas_dtype == True):
pd_dtypes = None
columns, types = getColumns(stmt)
pd_dtypes={}
for idx, col in enumerate(columns):
try:
_dindex = _db2types.index(types[idx])
except:
_dindex = 0
pd_dtypes[col] = _pdtypes[_dindex]
if len(pd_dtypes.keys()) == 0:
pd_dtypes = None
try:
start_time = time.time()
if (_pandas_dtype == True):
df = pandas.read_sql_query(sql,_hdbi,dtype=pd_dtypes)
else:
df = pandas.read_sql_query(sql,_hdbi)
sqlelapsed = time.time() - start_time
except Exception as err:
sqlelapsed = 0
db2_error(False)
return
if (len(df) == 0):
sqlcode = 100
if (flag(["-q","-quiet"]) == False):
errormsg("No rows found")
continue
flag_output = True
if flag("-grid") or _settings.get('display',"PANDAS") == 'GRID': # Check to see if we can display the results
if (_environment['qgrid'] == False):
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
print(df.to_string())
else:
try:
pdisplay(qgrid.show_grid(df))
except:
errormsg("Grid cannot be used to display data with duplicate column names. Use option -a or %sql OPTION DISPLAY PANDAS instead.")
return
else:
if flag(["-a","-all"]) or _settings.get("maxrows",10) == -1 : # All of the rows
pandas.options.display.max_rows = 100
pandas.options.display.max_columns = None
return df # print(df.to_string())
else:
pandas.options.display.max_rows = _settings.get("maxrows",10)
pandas.options.display.max_columns = None
return df # pdisplay(df) # print(df.to_string())
except:
db2_error(flag(["-q","-quiet"]))
continue # return
sqlelapsed = time.time() - start_time
if (flag_output == False and flag(["-q","-quiet"]) == False): print("Command completed.")
# Register the Magic extension in Jupyter
ip = get_ipython()
ip.register_magics(DB2)
load_settings()
macro_list = '''
#
# The LIST macro is used to list all of the tables in the current schema or for all schemas
#
var syntax Syntax: LIST TABLES [FOR ALL | FOR SCHEMA name]
#
# Only LIST TABLES is supported by this macro
#
flags -a
if {^1} <> 'TABLES'
exit {syntax}
endif
#
# This SQL is a temporary table that contains the description of the different table types
#
WITH TYPES(TYPE,DESCRIPTION) AS (
VALUES
('A','Alias'),
('G','Created temporary table'),
('H','Hierarchy table'),
('L','Detached table'),
('N','Nickname'),
('S','Materialized query table'),
('T','Table'),
('U','Typed table'),
('V','View'),
('W','Typed view')
)
SELECT TABNAME, TABSCHEMA, T.DESCRIPTION FROM SYSCAT.TABLES S, TYPES T
WHERE T.TYPE = S.TYPE
#
# Case 1: No arguments - LIST TABLES
#
if {argc} == 1
AND OWNER = CURRENT USER
ORDER BY TABNAME, TABSCHEMA
return
endif
#
# Case 2: Need 3 arguments - LIST TABLES FOR ALL
#
if {argc} == 3
if {^2}&{^3} == 'FOR&ALL'
ORDER BY TABNAME, TABSCHEMA
return
endif
exit {syntax}
endif
#
# Case 3: Need FOR SCHEMA something here
#
if {argc} == 4
if {^2}&{^3} == 'FOR&SCHEMA'
AND TABSCHEMA = '{^4}'
ORDER BY TABNAME, TABSCHEMA
return
else
exit {syntax}
endif
endif
#
# Nothing matched - Error
#
exit {syntax}
'''
DB2.sql(None, "define LIST", cell=macro_list, local_ns=locals())
macro_describe = '''
#
# The DESCRIBE command can either use the syntax DESCRIBE TABLE <name> or DESCRIBE TABLE SELECT ...
#
var syntax Syntax: DESCRIBE [TABLE name | SELECT statement]
#
# Check to see what count of variables is... Must be at least 2 items DESCRIBE TABLE x or SELECT x
#
flags -a
if {argc} < 2
exit {syntax}
endif
CALL ADMIN_CMD('{*0}');
'''
DB2.sql(None,"define describe", cell=macro_describe, local_ns=locals())
create_sample = """
flags -d
BEGIN
DECLARE FOUND INTEGER;
SET FOUND = (SELECT COUNT(*) FROM SYSIBM.SYSTABLES WHERE NAME='DEPARTMENT' AND CREATOR=CURRENT USER);
IF FOUND = 0 THEN
EXECUTE IMMEDIATE('CREATE TABLE DEPARTMENT(DEPTNO CHAR(3) NOT NULL, DEPTNAME VARCHAR(36) NOT NULL,
MGRNO CHAR(6),ADMRDEPT CHAR(3) NOT NULL)');
EXECUTE IMMEDIATE('INSERT INTO DEPARTMENT VALUES
(''A00'',''SPIFFY COMPUTER SERVICE DIV.'',''000010'',''A00''),
(''B01'',''PLANNING'',''000020'',''A00''),
(''C01'',''INFORMATION CENTER'',''000030'',''A00''),
(''D01'',''DEVELOPMENT CENTER'',NULL,''A00''),
(''D11'',''MANUFACTURING SYSTEMS'',''000060'',''D01''),
(''D21'',''ADMINISTRATION SYSTEMS'',''000070'',''D01''),
(''E01'',''SUPPORT SERVICES'',''000050'',''A00''),
(''E11'',''OPERATIONS'',''000090'',''E01''),
(''E21'',''SOFTWARE SUPPORT'',''000100'',''E01''),
(''F22'',''BRANCH OFFICE F2'',NULL,''E01''),
(''G22'',''BRANCH OFFICE G2'',NULL,''E01''),
(''H22'',''BRANCH OFFICE H2'',NULL,''E01''),
(''I22'',''BRANCH OFFICE I2'',NULL,''E01''),
(''J22'',''BRANCH OFFICE J2'',NULL,''E01'')');
END IF;
SET FOUND = (SELECT COUNT(*) FROM SYSIBM.SYSTABLES WHERE NAME='EMPLOYEE' AND CREATOR=CURRENT USER);
IF FOUND = 0 THEN
EXECUTE IMMEDIATE('CREATE TABLE EMPLOYEE(
EMPNO CHAR(6) NOT NULL,
FIRSTNME VARCHAR(12) NOT NULL,
MIDINIT CHAR(1),
LASTNAME VARCHAR(15) NOT NULL,
WORKDEPT CHAR(3),
PHONENO CHAR(4),
HIREDATE DATE,
JOB CHAR(8),
EDLEVEL SMALLINT NOT NULL,
SEX CHAR(1),
BIRTHDATE DATE,
SALARY DECIMAL(9,2),
BONUS DECIMAL(9,2),
COMM DECIMAL(9,2)
)');
EXECUTE IMMEDIATE('INSERT INTO EMPLOYEE VALUES
(''000010'',''CHRISTINE'',''I'',''HAAS'' ,''A00'',''3978'',''1995-01-01'',''PRES '',18,''F'',''1963-08-24'',152750.00,1000.00,4220.00),
(''000020'',''MICHAEL'' ,''L'',''THOMPSON'' ,''B01'',''3476'',''2003-10-10'',''MANAGER '',18,''M'',''1978-02-02'',94250.00,800.00,3300.00),
(''000030'',''SALLY'' ,''A'',''KWAN'' ,''C01'',''4738'',''2005-04-05'',''MANAGER '',20,''F'',''1971-05-11'',98250.00,800.00,3060.00),
(''000050'',''JOHN'' ,''B'',''GEYER'' ,''E01'',''6789'',''1979-08-17'',''MANAGER '',16,''M'',''1955-09-15'',80175.00,800.00,3214.00),
(''000060'',''IRVING'' ,''F'',''STERN'' ,''D11'',''6423'',''2003-09-14'',''MANAGER '',16,''M'',''1975-07-07'',72250.00,500.00,2580.00),
(''000070'',''EVA'' ,''D'',''PULASKI'' ,''D21'',''7831'',''2005-09-30'',''MANAGER '',16,''F'',''2003-05-26'',96170.00,700.00,2893.00),
(''000090'',''EILEEN'' ,''W'',''HENDERSON'' ,''E11'',''5498'',''2000-08-15'',''MANAGER '',16,''F'',''1971-05-15'',89750.00,600.00,2380.00),
(''000100'',''THEODORE'' ,''Q'',''SPENSER'' ,''E21'',''0972'',''2000-06-19'',''MANAGER '',14,''M'',''1980-12-18'',86150.00,500.00,2092.00),
(''000110'',''VINCENZO'' ,''G'',''LUCCHESSI'' ,''A00'',''3490'',''1988-05-16'',''SALESREP'',19,''M'',''1959-11-05'',66500.00,900.00,3720.00),
(''000120'',''SEAN'' ,'' '',''O`CONNELL'' ,''A00'',''2167'',''1993-12-05'',''CLERK '',14,''M'',''1972-10-18'',49250.00,600.00,2340.00),
(''000130'',''DELORES'' ,''M'',''QUINTANA'' ,''C01'',''4578'',''2001-07-28'',''ANALYST '',16,''F'',''1955-09-15'',73800.00,500.00,1904.00),
(''000140'',''HEATHER'' ,''A'',''NICHOLLS'' ,''C01'',''1793'',''2006-12-15'',''ANALYST '',18,''F'',''1976-01-19'',68420.00,600.00,2274.00),
(''000150'',''BRUCE'' ,'' '',''ADAMSON'' ,''D11'',''4510'',''2002-02-12'',''DESIGNER'',16,''M'',''1977-05-17'',55280.00,500.00,2022.00),
(''000160'',''ELIZABETH'',''R'',''PIANKA'' ,''D11'',''3782'',''2006-10-11'',''DESIGNER'',17,''F'',''1980-04-12'',62250.00,400.00,1780.00),
(''000170'',''MASATOSHI'',''J'',''YOSHIMURA'' ,''D11'',''2890'',''1999-09-15'',''DESIGNER'',16,''M'',''1981-01-05'',44680.00,500.00,1974.00),
(''000180'',''MARILYN'' ,''S'',''SCOUTTEN'' ,''D11'',''1682'',''2003-07-07'',''DESIGNER'',17,''F'',''1979-02-21'',51340.00,500.00,1707.00),
(''000190'',''JAMES'' ,''H'',''WALKER'' ,''D11'',''2986'',''2004-07-26'',''DESIGNER'',16,''M'',''1982-06-25'',50450.00,400.00,1636.00),
(''000200'',''DAVID'' ,'' '',''BROWN'' ,''D11'',''4501'',''2002-03-03'',''DESIGNER'',16,''M'',''1971-05-29'',57740.00,600.00,2217.00),
(''000210'',''WILLIAM'' ,''T'',''JONES'' ,''D11'',''0942'',''1998-04-11'',''DESIGNER'',17,''M'',''2003-02-23'',68270.00,400.00,1462.00),
(''000220'',''JENNIFER'' ,''K'',''LUTZ'' ,''D11'',''0672'',''1998-08-29'',''DESIGNER'',18,''F'',''1978-03-19'',49840.00,600.00,2387.00),
(''000230'',''JAMES'' ,''J'',''JEFFERSON'' ,''D21'',''2094'',''1996-11-21'',''CLERK '',14,''M'',''1980-05-30'',42180.00,400.00,1774.00),
(''000240'',''SALVATORE'',''M'',''MARINO'' ,''D21'',''3780'',''2004-12-05'',''CLERK '',17,''M'',''2002-03-31'',48760.00,600.00,2301.00),
(''000250'',''DANIEL'' ,''S'',''SMITH'' ,''D21'',''0961'',''1999-10-30'',''CLERK '',15,''M'',''1969-11-12'',49180.00,400.00,1534.00),
(''000260'',''SYBIL'' ,''P'',''JOHNSON'' ,''D21'',''8953'',''2005-09-11'',''CLERK '',16,''F'',''1976-10-05'',47250.00,300.00,1380.00),
(''000270'',''MARIA'' ,''L'',''PEREZ'' ,''D21'',''9001'',''2006-09-30'',''CLERK '',15,''F'',''2003-05-26'',37380.00,500.00,2190.00),
(''000280'',''ETHEL'' ,''R'',''SCHNEIDER'' ,''E11'',''8997'',''1997-03-24'',''OPERATOR'',17,''F'',''1976-03-28'',36250.00,500.00,2100.00),
(''000290'',''JOHN'' ,''R'',''PARKER'' ,''E11'',''4502'',''2006-05-30'',''OPERATOR'',12,''M'',''1985-07-09'',35340.00,300.00,1227.00),
(''000300'',''PHILIP'' ,''X'',''SMITH'' ,''E11'',''2095'',''2002-06-19'',''OPERATOR'',14,''M'',''1976-10-27'',37750.00,400.00,1420.00),
(''000310'',''MAUDE'' ,''F'',''SETRIGHT'' ,''E11'',''3332'',''1994-09-12'',''OPERATOR'',12,''F'',''1961-04-21'',35900.00,300.00,1272.00),
(''000320'',''RAMLAL'' ,''V'',''MEHTA'' ,''E21'',''9990'',''1995-07-07'',''FIELDREP'',16,''M'',''1962-08-11'',39950.00,400.00,1596.00),
(''000330'',''WING'' ,'' '',''LEE'' ,''E21'',''2103'',''2006-02-23'',''FIELDREP'',14,''M'',''1971-07-18'',45370.00,500.00,2030.00),
(''000340'',''JASON'' ,''R'',''GOUNOT'' ,''E21'',''5698'',''1977-05-05'',''FIELDREP'',16,''M'',''1956-05-17'',43840.00,500.00,1907.00),
(''200010'',''DIAN'' ,''J'',''HEMMINGER'' ,''A00'',''3978'',''1995-01-01'',''SALESREP'',18,''F'',''1973-08-14'',46500.00,1000.00,4220.00),
(''200120'',''GREG'' ,'' '',''ORLANDO'' ,''A00'',''2167'',''2002-05-05'',''CLERK '',14,''M'',''1972-10-18'',39250.00,600.00,2340.00),
(''200140'',''KIM'' ,''N'',''NATZ'' ,''C01'',''1793'',''2006-12-15'',''ANALYST '',18,''F'',''1976-01-19'',68420.00,600.00,2274.00),
(''200170'',''KIYOSHI'' ,'' '',''YAMAMOTO'' ,''D11'',''2890'',''2005-09-15'',''DESIGNER'',16,''M'',''1981-01-05'',64680.00,500.00,1974.00),
(''200220'',''REBA'' ,''K'',''JOHN'' ,''D11'',''0672'',''2005-08-29'',''DESIGNER'',18,''F'',''1978-03-19'',69840.00,600.00,2387.00),
(''200240'',''ROBERT'' ,''M'',''MONTEVERDE'',''D21'',''3780'',''2004-12-05'',''CLERK '',17,''M'',''1984-03-31'',37760.00,600.00,2301.00),
(''200280'',''EILEEN'' ,''R'',''SCHWARTZ'' ,''E11'',''8997'',''1997-03-24'',''OPERATOR'',17,''F'',''1966-03-28'',46250.00,500.00,2100.00),
(''200310'',''MICHELLE'' ,''F'',''SPRINGER'' ,''E11'',''3332'',''1994-09-12'',''OPERATOR'',12,''F'',''1961-04-21'',35900.00,300.00,1272.00),
(''200330'',''HELENA'' ,'' '',''WONG'' ,''E21'',''2103'',''2006-02-23'',''FIELDREP'',14,''F'',''1971-07-18'',35370.00,500.00,2030.00),
(''200340'',''ROY'' ,''R'',''ALONZO'' ,''E21'',''5698'',''1997-07-05'',''FIELDREP'',16,''M'',''1956-05-17'',31840.00,500.00,1907.00)');
END IF;
END"""
DB2.sql(None,"define sampledata", cell=create_sample, local_ns=locals())
create_set = '''
#
# Convert a SET statement into an OPTION statement
#
# Display settings
if {^1} == 'DISPLAY'
if {^2} == "PANDAS"
OPTION DISPLAY PANDAS
return
else
if {^2} == "GRID"
OPTION DISPLAY GRID
return
endif
endif
endif
# Multithreading
if {^1} == 'THREADS'
OPTION THREADS {2}
return
endif
# Maximum number of rows displayed
if {^1} == 'MAXROWS'
OPTION MAXROWS {2}
return
endif
# Maximum number of grid rows displayed
if {^1} == 'MAXGRID'
OPTION MAXGRID {2}
return
endif
{*0}
return
'''
DB2.sql(None,"define set", cell=create_set, local_ns=locals())
success("Db2 Extensions Loaded.")
###Output
Db2 Extensions Loaded.
###Markdown
DB2 Jupyter Notebook ExtensionsVersion: 2020-07-15 This code is imported as a Jupyter notebook extension in any notebooks you create with DB2 code in it. Place the following line of code in any notebook that you want to use these commands with:&37;run db2.ipynbThis code defines a Jupyter/Python magic command called `%sql` which allows you to execute DB2 specific calls to the database. There are other packages available for manipulating databases, but this one has been specificallydesigned for demonstrating a number of the SQL features available in DB2.There are two ways of executing the `%sql` command. A single line SQL statement would use theline format of the magic command:%sql SELECT * FROM EMPLOYEEIf you have a large block of sql then you would place the %%sql command at the beginning of the block and thenplace the SQL statements into the remainder of the block. Using this form of the `%%sql` statement means that thenotebook cell can only contain SQL and no other statements.%%sqlSELECT * FROM EMPLOYEEORDER BY LASTNAMEYou can have multiple lines in the SQL block (`%%sql`). The default SQL delimiter is the semi-column (`;`).If you have scripts (triggers, procedures, functions) that use the semi-colon as part of the script, you will need to use the `-d` option to change the delimiter to an at "`@`" sign. %%sql -dSELECT * FROM EMPLOYEE@CREATE PROCEDURE ...@The `%sql` command allows most DB2 commands to execute and has a special version of the CONNECT statement. A CONNECT by itself will attempt to reconnect to the database using previously used settings. If it cannot connect, it will prompt the user for additional information. The CONNECT command has the following format:%sql CONNECT TO <database> USER <userid> USING <password | ?> HOST <ip address> PORT <port number>If you use a "`?`" for the password field, the system will prompt you for a password. This avoids typing the password as clear text on the screen. If a connection is not successful, the system will print the errormessage associated with the connect request.If the connection is successful, the parameters are saved on your system and will be used the next time yourun a SQL statement, or when you issue the %sql CONNECT command with no parameters. In addition to the -d option, there are a number different options that you can specify at the beginning of the SQL: - `-d, -delim` - Change SQL delimiter to "`@`" from "`;`" - `-q, -quiet` - Quiet results - no messages returned from the function - `-r, -array` - Return the result set as an array of values instead of a dataframe - `-t, -time` - Time the following SQL statement and return the number of times it executes in 1 second - `-j` - Format the first character column of the result set as a JSON record - `-json` - Return result set as an array of json records - `-a, -all` - Return all rows in answer set and do not limit display - `-grid` - Display the results in a scrollable grid - `-pb, -bar` - Plot the results as a bar chart - `-pl, -line` - Plot the results as a line chart - `-pp, -pie` - Plot the results as a pie chart - `-e, -echo` - Any macro expansions are displayed in an output box - `-sampledata` - Create and load the EMPLOYEE and DEPARTMENT tablesYou can pass python variables to the `%sql` command by using the `{}` braces with the name of thevariable inbetween. Note that you will need to place proper punctuation around the variable in the event theSQL command requires it. 
For instance, the following example will find employee '000010' in the EMPLOYEE table.empno = '000010'%sql SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO='{empno}'The other option is to use parameter markers. What you would need to do is use the name of the variable with a colon in front of it and the program will prepare the statement and then pass the variable to Db2 when the statement is executed. This allows you to create complex strings that might contain quote characters and other special characters and not have to worry about enclosing the string with the correct quotes. Note that you do not place the quotes around the variable even though it is a string.empno = '000020'%sql SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO=:empno Development SQLThe previous set of `%sql` and `%%sql` commands deals with SQL statements and commands that are run in an interactive manner. There is a class of SQL commands that are more suited to a development environment where code is iterated or requires changing input. The commands that are associated with this form of SQL are:- AUTOCOMMIT- COMMIT/ROLLBACK- PREPARE - EXECUTEAutocommit is the default manner in which SQL statements are executed. At the end of the successful completion of a statement, the results are commited to the database. There is no concept of a transaction where multiple DML/DDL statements are considered one transaction. The `AUTOCOMMIT` command allows you to turn autocommit `OFF` or `ON`. This means that the set of SQL commands run after the `AUTOCOMMIT OFF` command are executed are not commited to the database until a `COMMIT` or `ROLLBACK` command is issued.`COMMIT` (`WORK`) will finalize all of the transactions (`COMMIT`) to the database and `ROLLBACK` will undo all of the changes. If you issue a `SELECT` statement during the execution of your block, the results will reflect all of your changes. If you `ROLLBACK` the transaction, the changes will be lost.`PREPARE` is typically used in a situation where you want to repeatidly execute a SQL statement with different variables without incurring the SQL compilation overhead. For instance:```x = %sql PREPARE SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO=?for y in ['000010','000020','000030']: %sql execute :x using :y````EXECUTE` is used to execute a previously compiled statement. To retrieve the error codes that might be associated with any SQL call, the following variables are updated after every call:* SQLCODE* SQLSTATE* SQLERROR - Full error message retrieved from Db2 Install Db2 Python DriverIf the ibm_db driver is not installed on your system, the subsequent Db2 commands will fail. In order to install the Db2 driver, issue the following command from a Jupyter notebook cell:```!pip install --user ibm_db``` Db2 Jupyter ExtensionsThis section of code has the import statements and global variables defined for the remainder of the functions.
###Code
#
# Set up Jupyter MAGIC commands "sql".
# %sql will return results from a DB2 select statement or execute a DB2 command
#
# IBM 2019: George Baklarz
# Version 2019-10-03
#
from __future__ import print_function
from IPython.display import HTML as pHTML, Image as pImage, display as pdisplay, Javascript as Javascript
from IPython.core.magic import (Magics, magics_class, line_magic,
cell_magic, line_cell_magic, needs_local_scope)
import ibm_db
import pandas
import ibm_db_dbi
import json
import matplotlib
import matplotlib.pyplot as plt
import getpass
import os
import pickle
import time
import sys
import re
import warnings
warnings.filterwarnings("ignore")
# Python Hack for Input between 2 and 3
try:
input = raw_input
except NameError:
pass
_settings = {
"maxrows" : 10,
"maxgrid" : 5,
"runtime" : 1,
"display" : "PANDAS",
"database" : "",
"hostname" : "localhost",
"port" : "50000",
"protocol" : "TCPIP",
"uid" : "DB2INST1",
"pwd" : "password",
"ssl" : ""
}
_environment = {
"jupyter" : True,
"qgrid" : True
}
_display = {
'fullWidthRows': True,
'syncColumnCellResize': True,
'forceFitColumns': False,
'defaultColumnWidth': 150,
'rowHeight': 28,
'enableColumnReorder': False,
'enableTextSelectionOnCells': True,
'editable': False,
'autoEdit': False,
'explicitInitialization': True,
'maxVisibleRows': 5,
'minVisibleRows': 5,
'sortable': True,
'filterable': False,
'highlightSelectedCell': False,
'highlightSelectedRow': True
}
# Connection settings for statements
_connected = False
_hdbc = None
_hdbi = None
_stmt = []
_stmtID = []
_stmtSQL = []
_vars = {}
_macros = {}
_flags = []
_debug = False
# Db2 Error Messages and Codes
sqlcode = 0
sqlstate = "0"
sqlerror = ""
sqlelapsed = 0
# Check to see if QGrid is installed
try:
import qgrid
qgrid.set_defaults(grid_options=_display)
except:
_environment['qgrid'] = False
# Check if we are running in iPython or Jupyter
try:
if (get_ipython().config == {}):
_environment['jupyter'] = False
_environment['qgrid'] = False
else:
_environment['jupyter'] = True
except:
_environment['jupyter'] = False
_environment['qgrid'] = False
###Output
_____no_output_____
###Markdown
OptionsThere are four options that can be set with the **`%sql`** command. These options are shown below with the default value shown in parenthesis.- **`MAXROWS n (10)`** - The maximum number of rows that will be displayed before summary information is shown. If the answer set is less than this number of rows, it will be completely shown on the screen. If the answer set is larger than this amount, only the first 5 rows and last 5 rows of the answer set will be displayed. If you want to display a very large answer set, you may want to consider using the grid option `-g` to display the results in a scrollable table. If you really want to show all results then setting MAXROWS to -1 will return all output.- **`MAXGRID n (5)`** - The maximum size of a grid display. When displaying a result set in a grid `-g`, the default size of the display window is 5 rows. You can set this to a larger size so that more rows are shown on the screen. Note that the minimum size always remains at 5 which means that if the system is unable to display your maximum row size it will reduce the table display until it fits.- **`DISPLAY PANDAS | GRID (PANDAS)`** - Display the results as a PANDAS dataframe (default) or as a scrollable GRID- **`RUNTIME n (1)`** - When using the timer option on a SQL statement, the statement will execute for **`n`** number of seconds. The result that is returned is the number of times the SQL statement executed rather than the execution time of the statement. The default value for runtime is one second, so if the SQL is very complex you will need to increase the run time.- **`LIST`** - Display the current settingsTo set an option use the following syntax:```%sql option option_name value option_name value ....```The following example sets all options:```%sql option maxrows 100 runtime 2 display grid maxgrid 10```The values will **not** be saved between Jupyter notebooks sessions. If you need to retrieve the current options values, use the LIST command as the only argument:```%sql option list```
###Code
def setOptions(inSQL):
global _settings, _display
cParms = inSQL.split()
cnt = 0
while cnt < len(cParms):
if cParms[cnt].upper() == 'MAXROWS':
if cnt+1 < len(cParms):
try:
_settings["maxrows"] = int(cParms[cnt+1])
except Exception as err:
errormsg("Invalid MAXROWS value provided.")
pass
cnt = cnt + 1
else:
errormsg("No maximum rows specified for the MAXROWS option.")
return
elif cParms[cnt].upper() == 'MAXGRID':
if cnt+1 < len(cParms):
try:
maxgrid = int(cParms[cnt+1])
if (maxgrid <= 5): # Minimum window size is 5
maxgrid = 5
_display["maxVisibleRows"] = int(cParms[cnt+1])
try:
import qgrid
qgrid.set_defaults(grid_options=_display)
except:
_environment['qgrid'] = False
except Exception as err:
errormsg("Invalid MAXGRID value provided.")
pass
cnt = cnt + 1
else:
errormsg("No maximum rows specified for the MAXROWS option.")
return
elif cParms[cnt].upper() == 'RUNTIME':
if cnt+1 < len(cParms):
try:
_settings["runtime"] = int(cParms[cnt+1])
except Exception as err:
errormsg("Invalid RUNTIME value provided.")
pass
cnt = cnt + 1
else:
errormsg("No value provided for the RUNTIME option.")
return
elif cParms[cnt].upper() == 'DISPLAY':
if cnt+1 < len(cParms):
if (cParms[cnt+1].upper() == 'GRID'):
_settings["display"] = 'GRID'
elif (cParms[cnt+1].upper() == 'PANDAS'):
_settings["display"] = 'PANDAS'
else:
errormsg("Invalid DISPLAY value provided.")
cnt = cnt + 1
else:
errormsg("No value provided for the DISPLAY option.")
return
elif (cParms[cnt].upper() == 'LIST'):
print("(MAXROWS) Maximum number of rows displayed: " + str(_settings["maxrows"]))
print("(MAXGRID) Maximum grid display size: " + str(_settings["maxgrid"]))
print("(RUNTIME) How many seconds to a run a statement for performance testing: " + str(_settings["runtime"]))
print("(DISPLAY) Use PANDAS or GRID display format for output: " + _settings["display"])
return
else:
cnt = cnt + 1
save_settings()
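# A minimal illustration (not executed here) of what the OPTION handler above does:
# the string passed on a %sql OPTION command is tokenized and the matching entries
# in the global _settings dictionary are updated, then saved with save_settings().
#
#   setOptions("MAXROWS 50 RUNTIME 2 DISPLAY GRID")
#   # _settings["maxrows"] == 50, _settings["runtime"] == 2, _settings["display"] == "GRID"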
###Output
_____no_output_____
###Markdown
SQL HelpThe calling format of this routine is:```sqlhelp()```This code displays help related to the %sql magic command. This help is displayed when you issue a %sql or %%sql command by itself, or use the %sql -h flag.
###Code
def sqlhelp():
global _environment
if (_environment["jupyter"] == True):
sd = '<td style="text-align:left;">'
ed1 = '</td>'
ed2 = '</td>'
sh = '<th style="text-align:left;">'
eh1 = '</th>'
eh2 = '</th>'
sr = '<tr>'
er = '</tr>'
helpSQL = """
<h3>SQL Options</h3>
<p>The following options are available as part of a SQL statement. The options are always preceded with a
minus sign (i.e. -q).
<table>
{sr}
{sh}Option{eh1}{sh}Description{eh2}
{er}
{sr}
{sd}a, all{ed1}{sd}Return all rows in answer set and do not limit display{ed2}
{er}
{sr}
{sd}d{ed1}{sd}Change SQL delimiter to "@" from ";"{ed2}
{er}
{sr}
        {sd}e, echo{ed1}{sd}Echo the SQL command that was generated after macro and variable substitution.{ed2}
{er}
{sr}
{sd}h, help{ed1}{sd}Display %sql help information.{ed2}
{er}
{sr}
{sd}j{ed1}{sd}Create a pretty JSON representation. Only the first column is formatted{ed2}
{er}
{sr}
{sd}json{ed1}{sd}Retrieve the result set as a JSON record{ed2}
{er}
{sr}
{sd}pb, bar{ed1}{sd}Plot the results as a bar chart{ed2}
{er}
{sr}
{sd}pl, line{ed1}{sd}Plot the results as a line chart{ed2}
{er}
{sr}
{sd}pp, pie{ed1}{sd}Plot Pie: Plot the results as a pie chart{ed2}
{er}
{sr}
{sd}q, quiet{ed1}{sd}Quiet results - no answer set or messages returned from the function{ed2}
{er}
{sr}
{sd}r, array{ed1}{sd}Return the result set as an array of values{ed2}
{er}
{sr}
{sd}sampledata{ed1}{sd}Create and load the EMPLOYEE and DEPARTMENT tables{ed2}
{er}
{sr}
{sd}t,time{ed1}{sd}Time the following SQL statement and return the number of times it executes in 1 second{ed2}
{er}
{sr}
{sd}grid{ed1}{sd}Display the results in a scrollable grid{ed2}
{er}
</table>
"""
else:
helpSQL = """
SQL Options
The following options are available as part of a SQL statement. Options are always
preceded with a minus sign (i.e. -q).
Option Description
a, all Return all rows in answer set and do not limit display
d Change SQL delimiter to "@" from ";"
e, echo Echo the SQL command that was generated after substitution
h, help Display %sql help information
j Create a pretty JSON representation. Only the first column is formatted
json Retrieve the result set as a JSON record
pb, bar Plot the results as a bar chart
pl, line Plot the results as a line chart
pp, pie Plot Pie: Plot the results as a pie chart
q, quiet Quiet results - no answer set or messages returned from the function
r, array Return the result set as an array of values
sampledata Create and load the EMPLOYEE and DEPARTMENT tables
t,time Time the SQL statement and return the execution count per second
grid Display the results in a scrollable grid
"""
helpSQL = helpSQL.format(**locals())
if (_environment["jupyter"] == True):
pdisplay(pHTML(helpSQL))
else:
print(helpSQL)
###Output
_____no_output_____
###Markdown
Connection HelpThe calling format of this routine is:```connected_help()```This code displays help related to the CONNECT command. This code is displayed when you issue a %sql CONNECT command with no arguments or you are running a SQL statement and there isn't any connection to a database yet.
###Code
def connected_help():
sd = '<td style="text-align:left;">'
ed = '</td>'
sh = '<th style="text-align:left;">'
eh = '</th>'
sr = '<tr>'
er = '</tr>'
if (_environment['jupyter'] == True):
helpConnect = """
<h3>Connecting to Db2</h3>
<p>The CONNECT command has the following format:
<p>
<pre>
%sql CONNECT TO <database> USER <userid> USING <password|?> HOST <ip address> PORT <port number> <SSL>
%sql CONNECT CREDENTIALS <varname>
%sql CONNECT CLOSE
%sql CONNECT RESET
%sql CONNECT PROMPT - use this to be prompted for values
</pre>
<p>
If you use a "?" for the password field, the system will prompt you for a password. This avoids typing the
password as clear text on the screen. If a connection is not successful, the system will print the error
message associated with the connect request.
<p>
The <b>CREDENTIALS</b> option allows you to use credentials that are supplied by Db2 on Cloud instances.
The credentials can be supplied as a variable and if successful, the variable will be saved to disk
for future use. If you create another notebook and use the identical syntax, if the variable
is not defined, the contents on disk will be used as the credentials. You should assign the
credentials to a variable that represents the database (or schema) that you are communicating with.
Using familiar names makes it easier to remember the credentials when connecting.
<p>
<b>CONNECT CLOSE</b> will close the current connection, but will not reset the database parameters. This means that
if you issue the CONNECT command again, the system should be able to reconnect you to the database.
<p>
<b>CONNECT RESET</b> will close the current connection and remove any information on the connection. You will need
to issue a new CONNECT statement with all of the connection information.
<p>
If the connection is successful, the parameters are saved on your system and will be used the next time you
run an SQL statement, or when you issue the %sql CONNECT command with no parameters.
<p>If you issue CONNECT RESET, all of the current values will be deleted and you will need to
issue a new CONNECT statement.
<p>A CONNECT command without any parameters will attempt to re-connect to the previous database you
        were using. If the connection could not be established, the program will prompt you for
the values. To cancel the connection attempt, enter a blank value for any of the values. The connection
panel will request the following values in order to connect to Db2:
<table>
{sr}
{sh}Setting{eh}
{sh}Description{eh}
{er}
{sr}
{sd}Database{ed}{sd}Database name you want to connect to.{ed}
{er}
{sr}
{sd}Hostname{ed}
        {sd}Use localhost if Db2 is running on your own machine, but this can be an IP address or host name.{ed}
{er}
{sr}
{sd}PORT{ed}
{sd}The port to use for connecting to Db2. This is usually 50000.{ed}
{er}
{sr}
{sd}SSL{ed}
{sd}If you are connecting to a secure port (50001) with SSL then you must include this keyword in the connect string.{ed}
        {er}
        {sr}
{sd}Userid{ed}
{sd}The userid to use when connecting (usually DB2INST1){ed}
{er}
{sr}
{sd}Password{ed}
{sd}No password is provided so you have to enter a value{ed}
{er}
</table>
"""
else:
helpConnect = """\
Connecting to Db2
The CONNECT command has the following format:
%sql CONNECT TO database USER userid USING password | ?
HOST ip address PORT port number SSL
%sql CONNECT CREDENTIALS varname
%sql CONNECT CLOSE
%sql CONNECT RESET
If you use a "?" for the password field, the system will prompt you for a password.
This avoids typing the password as clear text on the screen. If a connection is
not successful, the system will print the error message associated with the connect
request.
The CREDENTIALS option allows you to use credentials that are supplied by Db2 on
Cloud instances. The credentials can be supplied as a variable and if successful,
the variable will be saved to disk for future use. If you create another notebook
and use the identical syntax, if the variable is not defined, the contents on disk
will be used as the credentials. You should assign the credentials to a variable
that represents the database (or schema) that you are communicating with. Using
familiar names makes it easier to remember the credentials when connecting.
CONNECT CLOSE will close the current connection, but will not reset the database
parameters. This means that if you issue the CONNECT command again, the system
should be able to reconnect you to the database.
CONNECT RESET will close the current connection and remove any information on the
connection. You will need to issue a new CONNECT statement with all of the connection
information.
If the connection is successful, the parameters are saved on your system and will be
used the next time you run an SQL statement, or when you issue the %sql CONNECT
command with no parameters. If you issue CONNECT RESET, all of the current values
will be deleted and you will need to issue a new CONNECT statement.
A CONNECT command without any parameters will attempt to re-connect to the previous
        database you were using. If the connection could not be established, the program will
prompt you for the values. To cancel the connection attempt, enter a blank value for
any of the values. The connection panel will request the following values in order
to connect to Db2:
Setting Description
Database Database name you want to connect to
Hostname Use localhost if Db2 is running on your own machine, but this can
be an IP address or host name.
PORT The port to use for connecting to Db2. This is usually 50000.
Userid The userid to use when connecting (usually DB2INST1)
Password No password is provided so you have to enter a value
SSL Include this keyword to indicate you are connecting via SSL (usually port 50001)
"""
helpConnect = helpConnect.format(**locals())
if (_environment['jupyter'] == True):
pdisplay(pHTML(helpConnect))
else:
print(helpConnect)
###Output
_____no_output_____
###Markdown
Prompt for Connection InformationIf you are running an SQL statement and have not yet connected to a database, the %sql command will prompt you for connection information. In order to connect to a database, you must supply:- Database name - Host name (IP address or name)- Port number- Userid- Password- Secure socketThe routine is called without any parameters:```connected_prompt()```
###Code
# Prompt for Connection information
def connected_prompt():
global _settings
_database = ''
_hostname = ''
_port = ''
_uid = ''
_pwd = ''
_ssl = ''
print("Enter the database connection details (Any empty value will cancel the connection)")
_database = input("Enter the database name: ");
if (_database.strip() == ""): return False
_hostname = input("Enter the HOST IP address or symbolic name: ");
if (_hostname.strip() == ""): return False
_port = input("Enter the PORT number: ");
if (_port.strip() == ""): return False
_ssl = input("Is this a secure (SSL) port (y or n)");
if (_ssl.strip() == ""): return False
if (_ssl == "n"):
_ssl = ""
else:
_ssl = "Security=SSL;"
_uid = input("Enter Userid on the DB2 system: ").upper();
if (_uid.strip() == ""): return False
_pwd = getpass.getpass("Password [password]: ");
if (_pwd.strip() == ""): return False
_settings["database"] = _database.strip()
_settings["hostname"] = _hostname.strip()
_settings["port"] = _port.strip()
_settings["uid"] = _uid.strip()
_settings["pwd"] = _pwd.strip()
_settings["ssl"] = _ssl.strip()
_settings["maxrows"] = 10
_settings["maxgrid"] = 5
_settings["runtime"] = 1
return True
# Split port and IP addresses
def split_string(in_port,splitter=":"):
# Split input into an IP address and Port number
global _settings
checkports = in_port.split(splitter)
ip = checkports[0]
if (len(checkports) > 1):
port = checkports[1]
else:
port = None
return ip, port
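# A minimal illustration (not executed here) of split_string(), which separates an
# optional port number from the value supplied on the CONNECT HOST keyword:
#
#   split_string("localhost:50001")    # -> ("localhost", "50001")
#   split_string("192.168.1.10")       # -> ("192.168.1.10", None)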
###Output
_____no_output_____
###Markdown
Connect Syntax ParserThe parseConnect routine is used to parse the CONNECT command that the user issued within the %sql command. The format of the command is:```parseConnect(inSQL)```The inSQL string contains the CONNECT keyword with some additional parameters. The format of the CONNECT command is one of:```CONNECT RESETCONNECT CLOSECONNECT CREDENTIALS CONNECT TO database USER userid USING password HOST hostname PORT portnumber ```If you have credentials available from Db2 on Cloud, place the contents of the credentials into a variable and then use the `CONNECT CREDENTIALS ` syntax to connect to the database.In addition, supplying a question mark (?) for password will result in the program prompting you for the password rather than having it as clear text in your scripts.When all of the information is checked in the command, the db2_doConnect function is called to actually do the connection to the database.
###Code
# Parse the CONNECT statement and execute if possible
def parseConnect(inSQL,local_ns):
global _settings, _connected
_connected = False
cParms = inSQL.split()
cnt = 0
_settings["ssl"] = ""
while cnt < len(cParms):
if cParms[cnt].upper() == 'TO':
if cnt+1 < len(cParms):
_settings["database"] = cParms[cnt+1].upper()
cnt = cnt + 1
else:
errormsg("No database specified in the CONNECT statement")
return
elif cParms[cnt].upper() == "SSL":
_settings["ssl"] = "Security=SSL;"
cnt = cnt + 1
elif cParms[cnt].upper() == 'CREDENTIALS':
if cnt+1 < len(cParms):
credentials = cParms[cnt+1]
tempid = eval(credentials,local_ns)
if (isinstance(tempid,dict) == False):
errormsg("The CREDENTIALS variable (" + credentials + ") does not contain a valid Python dictionary (JSON object)")
return
if (tempid == None):
fname = credentials + ".pickle"
try:
with open(fname,'rb') as f:
_id = pickle.load(f)
except:
errormsg("Unable to find credential variable or file.")
return
else:
_id = tempid
try:
_settings["database"] = _id["db"]
_settings["hostname"] = _id["hostname"]
_settings["port"] = _id["port"]
_settings["uid"] = _id["username"]
_settings["pwd"] = _id["password"]
try:
fname = credentials + ".pickle"
with open(fname,'wb') as f:
pickle.dump(_id,f)
except:
errormsg("Failed trying to write Db2 Credentials.")
return
except:
errormsg("Credentials file is missing information. db/hostname/port/username/password required.")
return
else:
errormsg("No Credentials name supplied")
return
cnt = cnt + 1
elif cParms[cnt].upper() == 'USER':
if cnt+1 < len(cParms):
_settings["uid"] = cParms[cnt+1].upper()
cnt = cnt + 1
else:
errormsg("No userid specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'USING':
if cnt+1 < len(cParms):
_settings["pwd"] = cParms[cnt+1]
if (_settings["pwd"] == '?'):
_settings["pwd"] = getpass.getpass("Password [password]: ") or "password"
cnt = cnt + 1
else:
errormsg("No password specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'HOST':
if cnt+1 < len(cParms):
hostport = cParms[cnt+1].upper()
ip, port = split_string(hostport)
if (port == None): _settings["port"] = "50000"
_settings["hostname"] = ip
cnt = cnt + 1
else:
errormsg("No hostname specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'PORT':
if cnt+1 < len(cParms):
_settings["port"] = cParms[cnt+1].upper()
cnt = cnt + 1
else:
errormsg("No port specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'PROMPT':
if (connected_prompt() == False):
print("Connection canceled.")
return
else:
cnt = cnt + 1
elif cParms[cnt].upper() in ('CLOSE','RESET') :
try:
result = ibm_db.close(_hdbc)
_hdbi.close()
except:
pass
success("Connection closed.")
if cParms[cnt].upper() == 'RESET':
_settings["database"] = ''
return
else:
cnt = cnt + 1
_ = db2_doConnect()
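# A minimal illustration (not executed here) of how a CONNECT statement flows through
# parseConnect(). The database name and connection values shown are hypothetical:
#
#   parseConnect("CONNECT TO SAMPLE USER DB2INST1 USING ? HOST localhost PORT 50000", locals())
#
# The parser fills in _settings["database"], _settings["uid"], _settings["hostname"]
# and _settings["port"], prompts for the password because of the "?", and then calls
# db2_doConnect() to establish the connection.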
###Output
_____no_output_____
###Markdown
Connect to Db2The db2_doConnect routine is called when a connection needs to be established to a Db2 database. The command does not require any parameters since it relies on the settings variable which contains all of the information it needs to connect to a Db2 database.```db2_doConnect()```There are 4 additional variables that are used throughout the routines to stay connected with the Db2 database. These variables are:- hdbc - The connection handle to the database- hstmt - A statement handle used for executing SQL statements- connected - A flag that tells the program whether or not we are currently connected to a database- runtime - Used to tell %sql the length of time (default 1 second) to run a statement when timing itThe only database driver that is used in this program is the IBM DB2 ODBC DRIVER. This driver needs to be loaded on the system that is connecting to Db2. The Jupyter notebook that is built by this system installs the driver for you so you shouldn't have to do anything other than build the container.If the connection is successful, the connected flag is set to True. Any subsequent %sql call will check to see if you are connected and initiate another prompted connection if you do not have a connection to a database.
###Code
def db2_doConnect():
global _hdbc, _hdbi, _connected, _runtime
global _settings
if _connected == False:
if len(_settings["database"]) == 0:
return False
dsn = (
"DRIVER={{IBM DB2 ODBC DRIVER}};"
"DATABASE={0};"
"HOSTNAME={1};"
"PORT={2};"
"PROTOCOL=TCPIP;"
"UID={3};"
"PWD={4};{5}").format(_settings["database"],
_settings["hostname"],
_settings["port"],
_settings["uid"],
_settings["pwd"],
_settings["ssl"])
# Get a database handle (hdbc) and a statement handle (hstmt) for subsequent access to DB2
try:
_hdbc = ibm_db.connect(dsn, "", "")
except Exception as err:
db2_error(False,True) # errormsg(str(err))
_connected = False
_settings["database"] = ''
return False
try:
_hdbi = ibm_db_dbi.Connection(_hdbc)
except Exception as err:
db2_error(False,True) # errormsg(str(err))
_connected = False
_settings["database"] = ''
return False
_connected = True
# Save the values for future use
save_settings()
success("Connection successful.")
return True
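# For reference (the values shown are hypothetical), the DSN string assembled above
# from the _settings dictionary would look similar to the following, with
# "Security=SSL;" appended only when the SSL keyword was used on the CONNECT statement:
#
#   DRIVER={IBM DB2 ODBC DRIVER};DATABASE=SAMPLE;HOSTNAME=localhost;PORT=50000;PROTOCOL=TCPIP;UID=DB2INST1;PWD=...;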
###Output
_____no_output_____
###Markdown
Load/Save SettingsThere are two routines that load and save settings between Jupyter notebooks. These routines are called without any parameters.```load_settings() save_settings()```There is a global structure called settings which contains the following fields:```_settings = { "maxrows" : 10, "maxgrid" : 5, "runtime" : 1, "display" : "TEXT", "database" : "", "hostname" : "localhost", "port" : "50000", "protocol" : "TCPIP", "uid" : "DB2INST1", "pwd" : "password"}```The information in the settings structure is used for re-connecting to a database when you start up a Jupyter notebook. When the session is established for the first time, the load_settings() function is called to get the contents of the pickle file (db2connect.pickle, a Jupyter session file) that will be used for the first connection to the database. Whenever a new connection is made, the file is updated with the save_settings() function.
###Code
def load_settings():
# This routine will load the settings from the previous session if they exist
global _settings
fname = "db2connect.pickle"
try:
with open(fname,'rb') as f:
_settings = pickle.load(f)
# Reset runtime to 1 since it would be unexpected to keep the same value between connections
_settings["runtime"] = 1
_settings["maxgrid"] = 5
except:
pass
return
def save_settings():
# This routine will save the current settings if they exist
global _settings
fname = "db2connect.pickle"
try:
with open(fname,'wb') as f:
pickle.dump(_settings,f)
except:
errormsg("Failed trying to write Db2 Configuration Information.")
return
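# A minimal illustration (not executed here) of the settings round trip: the values
# are pickled to db2connect.pickle in the current working directory.
#
#   save_settings()    # writes the current _settings dictionary to db2connect.pickle
#   load_settings()    # restores it in a new session (runtime and maxgrid are reset)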
###Output
_____no_output_____
###Markdown
Error and Message FunctionsThere are three types of messages that are thrown by the %db2 magic command. The first routine will print out a success message with no special formatting:```success(message)```The second message is used for displaying an error message that is not associated with a SQL error. This type of error message is surrounded with a red box to highlight the problem. Note that the success message has code that has been commented out that could also show a successful return code with a green box. ```errormsg(message)```The final error message is based on an error occurring in the SQL code that was executed. This code will parse the message returned from the ibm_db interface and return only the error message portion (and not all of the wrapper code from the driver).```db2_error(quiet,connect=False)```The quiet flag is passed to the db2_error routine so that messages can be suppressed if the user wishes to ignore them with the -q flag. A good example of this is dropping a table that does not exist. We know that an error will be thrown so we can ignore it. The information that the db2_error routine gets is from the stmt_errormsg() function from within the ibm_db driver. The db2_error function should only be called after a SQL failure, otherwise there will be no diagnostic information returned from stmt_errormsg().If the connect flag is True, the routine will get the SQLSTATE and SQLCODE from the connection error message rather than a statement error message.
###Code
def db2_error(quiet,connect=False):
global sqlerror, sqlcode, sqlstate, _environment
try:
if (connect == False):
errmsg = ibm_db.stmt_errormsg().replace('\r',' ')
errmsg = errmsg[errmsg.rfind("]")+1:].strip()
else:
errmsg = ibm_db.conn_errormsg().replace('\r',' ')
errmsg = errmsg[errmsg.rfind("]")+1:].strip()
sqlerror = errmsg
msg_start = errmsg.find("SQLSTATE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlstate = errmsg[msg_start+9:msg_end]
else:
sqlstate = "0"
msg_start = errmsg.find("SQLCODE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlcode = errmsg[msg_start+8:msg_end]
try:
sqlcode = int(sqlcode)
except:
pass
else:
sqlcode = 0
except:
errmsg = "Unknown error."
sqlcode = -99999
sqlstate = "-99999"
sqlerror = errmsg
return
msg_start = errmsg.find("SQLSTATE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlstate = errmsg[msg_start+9:msg_end]
else:
sqlstate = "0"
msg_start = errmsg.find("SQLCODE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlcode = errmsg[msg_start+8:msg_end]
try:
sqlcode = int(sqlcode)
except:
pass
else:
sqlcode = 0
if quiet == True: return
if (errmsg == ""): return
html = '<p><p style="border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html+errmsg+"</p>"))
else:
print(errmsg)
# Print out an error message
def errormsg(message):
global _environment
if (message != ""):
html = '<p><p style="border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html + message + "</p>"))
else:
print(message)
def success(message):
if (message != ""):
print(message)
return
def debug(message,error=False):
global _environment
if (_environment["jupyter"] == True):
spacer = "<br>" + " "
else:
spacer = "\n "
if (message != ""):
lines = message.split('\n')
msg = ""
indent = 0
for line in lines:
delta = line.count("(") - line.count(")")
if (msg == ""):
msg = line
indent = indent + delta
else:
if (delta < 0): indent = indent + delta
msg = msg + spacer * (indent*2) + line
if (delta > 0): indent = indent + delta
if (indent < 0): indent = 0
if (error == True):
html = '<p><pre style="font-family: monospace; border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
else:
html = '<p><pre style="font-family: monospace; border:2px; border-style:solid; border-color:#008000; background-color:#e6ffe6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html + msg + "</pre></p>"))
else:
print(msg)
return
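# A minimal illustration (not executed here) of how db2_error() extracts the error
# codes. Given a driver message such as the following (hypothetical text):
#
#   '"UNKNOWN.TABLE" is an undefined name.  SQLSTATE=42704 SQLCODE=-204'
#
# the routine scans for the SQLSTATE= and SQLCODE= tokens, leaving sqlstate == "42704",
# sqlcode == -204 and the full message text in sqlerror.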
###Output
_____no_output_____
###Markdown
Macro ProcessorA macro is used to generate SQL to be executed by overriding or creating a new keyword. For instance, the base `%sql` command does not understand the `LIST TABLES` command which is usually used in conjunction with the `CLP` processor. Rather than specifically code this in the base `db2.ipynb` file, we can create a macro that can execute this code for us.There are four routines that deal with macros. - checkMacro is used to find the macro calls in a string. All macros are sent to runMacro for evaluation.- runMacro will evaluate the macro and return the string to the parser.- subvars is used to track the variables used as part of a macro call.- setMacro is used to catalog a macro Set MacroThis code will catalog a macro call.
###Code
def setMacro(inSQL,parms):
global _macros
names = parms.split()
if (len(names) < 2):
errormsg("No command name supplied.")
return None
macroName = names[1].upper()
_macros[macroName] = inSQL
return
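# A minimal illustration (not executed here): setMacro() is driven by the
# %%sql define <name> form, where the cell body becomes the macro text and the
# second word on the command line becomes the (uppercased) macro name.
#
#   setMacro("WITH TYPES(...) SELECT ...", "define LIST")
#   # _macros["LIST"] now holds the macro body used by checkMacro()/runMacro()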
###Output
_____no_output_____
###Markdown
Check MacroThis code will check to see if there is a macro command in the SQL. It will take the SQL that is supplied and strip out three values: the first and second keywords, and the remainder of the parameters.For instance, consider the following statement:```CREATE DATABASE GEORGE options....```The name of the macro that we want to run is called `CREATE`. We know that there is a SQL command called `CREATE` but this code will call the macro first to see if it needs to run any special code. For instance, `CREATE DATABASE` is not part of the `db2.ipynb` syntax, but we can add it in by using a macro.The check macro logic will strip out the subcommand (`DATABASE`) and place the remainder of the string after `DATABASE` in options.
###Code
def checkMacro(in_sql):
global _macros
if (len(in_sql) == 0): return(in_sql) # Nothing to do
tokens = parseArgs(in_sql,None) # Take the string and reduce into tokens
macro_name = tokens[0].upper() # Uppercase the name of the token
if (macro_name not in _macros):
return(in_sql) # No macro by this name so just return the string
result = runMacro(_macros[macro_name],in_sql,tokens) # Execute the macro using the tokens we found
return(result) # Runmacro will either return the original SQL or the new one
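# A minimal illustration (not executed here): once a macro named LIST has been
# registered (the LIST TABLES macro in this notebook), a command such as
#
#   checkMacro("LIST TABLES FOR SCHEMA DB2INST1")
#
# matches on its first token and returns the SQL produced by runMacro(); a command
# whose first token is not a registered macro name is returned unchanged.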
###Output
_____no_output_____
###Markdown
Split AssignmentThis routine will return the name of a variable and its value when the format is x=y. If y is enclosed in quotes, the quotes are removed.
###Code
def splitassign(arg):
var_name = "null"
var_value = "null"
arg = arg.strip()
eq = arg.find("=")
if (eq != -1):
var_name = arg[:eq].strip()
temp_value = arg[eq+1:].strip()
if (temp_value != ""):
ch = temp_value[0]
if (ch in ["'",'"']):
if (temp_value[-1:] == ch):
var_value = temp_value[1:-1]
else:
var_value = temp_value
else:
var_value = temp_value
else:
var_value = arg
return var_name, var_value
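# A minimal illustration (not executed here) of splitassign():
#
#   splitassign('maxrows=10')       # -> ("maxrows", "10")
#   splitassign('name="George"')    # -> ("name", "George")   enclosing quotes removed
#   splitassign('justavalue')       # -> ("null", "justavalue")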
###Output
_____no_output_____
###Markdown
Parse Args The commands that are used in the macros need to be parsed into their separate tokens. The tokens are separated by blanks, and strings that are enclosed in quotes are kept together.
###Code
def parseArgs(argin,_vars):
quoteChar = ""
inQuote = False
inArg = True
args = []
arg = ''
for ch in argin.lstrip():
if (inQuote == True):
if (ch == quoteChar):
inQuote = False
arg = arg + ch #z
else:
arg = arg + ch
elif (ch == "\"" or ch == "\'"): # Do we have a quote
quoteChar = ch
arg = arg + ch #z
inQuote = True
elif (ch == " "):
if (arg != ""):
arg = subvars(arg,_vars)
args.append(arg)
else:
args.append("null")
arg = ""
else:
arg = arg + ch
if (arg != ""):
arg = subvars(arg,_vars)
args.append(arg)
return(args)
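# A minimal illustration (not executed here) of parseArgs(): tokens are split on
# blanks and quoted strings are kept together (the quotes are retained).
#
#   parseArgs('LIST TABLES FOR SCHEMA "MY SCHEMA"', None)
#   # -> ['LIST', 'TABLES', 'FOR', 'SCHEMA', '"MY SCHEMA"']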
###Output
_____no_output_____
###Markdown
Run MacroThis code will execute the body of the macro and return the results for that macro call.
###Code
def runMacro(script,in_sql,tokens):
result = ""
runIT = True
code = script.split("\n")
level = 0
runlevel = [True,False,False,False,False,False,False,False,False,False]
ifcount = 0
_vars = {}
for i in range(0,len(tokens)):
vstr = str(i)
_vars[vstr] = tokens[i]
if (len(tokens) == 0):
_vars["argc"] = "0"
else:
_vars["argc"] = str(len(tokens)-1)
for line in code:
line = line.strip()
if (line == "" or line == "\n"): continue
if (line[0] == "#"): continue # A comment line starts with a # in the first position of the line
args = parseArgs(line,_vars) # Get all of the arguments
if (args[0] == "if"):
ifcount = ifcount + 1
if (runlevel[level] == False): # You can't execute this statement
continue
level = level + 1
if (len(args) < 4):
print("Macro: Incorrect number of arguments for the if clause.")
                return in_sql
arg1 = args[1]
arg2 = args[3]
if (len(arg2) > 2):
ch1 = arg2[0]
ch2 = arg2[-1:]
if (ch1 in ['"',"'"] and ch1 == ch2):
arg2 = arg2[1:-1].strip()
op = args[2]
if (op in ["=","=="]):
if (arg1 == arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<=","=<"]):
if (arg1 <= arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in [">=","=>"]):
if (arg1 >= arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<>","!="]):
if (arg1 != arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<"]):
if (arg1 < arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in [">"]):
if (arg1 > arg2):
runlevel[level] = True
else:
runlevel[level] = False
else:
print("Macro: Unknown comparison operator in the if statement:" + op)
continue
elif (args[0] in ["exit","echo"] and runlevel[level] == True):
msg = ""
for msgline in args[1:]:
if (msg == ""):
msg = subvars(msgline,_vars)
else:
msg = msg + " " + subvars(msgline,_vars)
if (msg != ""):
if (args[0] == "echo"):
debug(msg,error=False)
else:
debug(msg,error=True)
if (args[0] == "exit"): return ''
elif (args[0] == "pass" and runlevel[level] == True):
pass
elif (args[0] == "var" and runlevel[level] == True):
value = ""
for val in args[2:]:
if (value == ""):
value = subvars(val,_vars)
else:
value = value + " " + subvars(val,_vars)
            value = value.strip()
_vars[args[1]] = value
elif (args[0] == 'else'):
if (ifcount == level):
runlevel[level] = not runlevel[level]
elif (args[0] == 'return' and runlevel[level] == True):
return(result)
elif (args[0] == "endif"):
ifcount = ifcount - 1
if (ifcount < level):
level = level - 1
if (level < 0):
print("Macro: Unmatched if/endif pairs.")
return ''
else:
if (runlevel[level] == True):
if (result == ""):
result = subvars(line,_vars)
else:
result = result + "\n" + subvars(line,_vars)
return(result)
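# A minimal illustration (not executed here): inside a macro body the original
# command tokens are available as {0}, {1}, ...; {^n} uppercases a token, {*n}
# expands to token n and everything after it, and {argc} holds the number of
# arguments after the command name. For the DESCRIBE macro defined in this
# notebook, the command "describe table employee" produces the text
#
#   flags -a
#   CALL ADMIN_CMD('describe table employee');
#
# because {argc} is 2 (so the exit is skipped) and {*0} expands to the full command.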
###Output
_____no_output_____
###Markdown
Substitute VarsThis routine is used by the runMacro program to track variables that are used within Macros. These are kept separate from the rest of the code.
###Code
def subvars(script,_vars):
if (_vars == None): return script
remainder = script
result = ""
done = False
while done == False:
bv = remainder.find("{")
if (bv == -1):
done = True
continue
ev = remainder.find("}")
if (ev == -1):
done = True
continue
result = result + remainder[:bv]
vvar = remainder[bv+1:ev]
remainder = remainder[ev+1:]
upper = False
allvars = False
if (vvar[0] == "^"):
upper = True
vvar = vvar[1:]
elif (vvar[0] == "*"):
vvar = vvar[1:]
allvars = True
else:
pass
if (vvar in _vars):
if (upper == True):
items = _vars[vvar].upper()
elif (allvars == True):
try:
iVar = int(vvar)
except:
return(script)
items = ""
sVar = str(iVar)
while sVar in _vars:
if (items == ""):
items = _vars[sVar]
else:
items = items + " " + _vars[sVar]
iVar = iVar + 1
sVar = str(iVar)
else:
items = _vars[vvar]
else:
if (allvars == True):
items = ""
else:
items = "null"
result = result + items
if (remainder != ""):
result = result + remainder
return(result)
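# A minimal illustration (not executed here) of subvars(), using the token
# dictionary that runMacro() would build for the command "list tables for all":
#
#   _vars = {"0": "list", "1": "tables", "2": "for", "3": "all", "argc": "3"}
#   subvars("if {^2}&{^3} == 'FOR&ALL'", _vars)    # -> "if FOR&ALL == 'FOR&ALL'"
#
# Variable names that are not found expand to the string "null".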
###Output
_____no_output_____
###Markdown
SQL TimerThe calling format of this routine is:```count = sqlTimer(hdbc, runtime, inSQL)```This code runs the SQL string multiple times for one second (by default). The accuracy of the clock is not that great when you are running just one statement, so instead this routine will run the code multiple times for a second to give you an execution count. If you need to run the code for more than one second, the runtime value needs to be set to the number of seconds you want the code to run.The return result is always the number of times that the code executed. Note that the program will skip reading the data if it is a SELECT statement so it doesn't include fetch time for the answer set.
###Code
def sqlTimer(hdbc, runtime, inSQL):
count = 0
t_end = time.time() + runtime
while time.time() < t_end:
try:
stmt = ibm_db.exec_immediate(hdbc,inSQL)
if (stmt == False):
db2_error(flag(["-q","-quiet"]))
return(-1)
ibm_db.free_result(stmt)
except Exception as err:
db2_error(False)
return(-1)
count = count + 1
return(count)
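# A minimal illustration (not executed here, requires an active connection): the
# timer repeatedly executes the statement for the requested number of seconds and
# returns the execution count, or -1 if the statement failed.
#
#   count = sqlTimer(_hdbc, 1, "SELECT * FROM SYSIBM.SYSDUMMY1")
#   print(str(count) + " executions in 1 second")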
###Output
_____no_output_____
###Markdown
Split ArgsThis routine takes as an argument a string and then splits the arguments according to the following logic:* If the string starts with a `(` character, it will check the last character in the string and see if it is a `)` and then remove those characters* Every parameter is separated by a comma `,` and commas within quotes are ignored* Each parameter returned will have three values returned - one for the value itself, an indicator which will be either True if it was quoted, or False if not, and True or False if it is numeric.Example:``` "abcdef",abcdef,456,"856"```Four values would be returned:```[abcdef,True,False],[abcdef,False,False],[456,False,True],[856,True,False]```Any quoted string will be False for numeric. The way that the parameters are handled is up to the calling program. However, in the case of Db2, the quoted strings must be in single quotes so any quoted parameter using the double quotes `"` must be wrapped with single quotes. There is always a possibility that a string contains single quotes (i.e. O'Connor) so any substituted text should use `''` so that Db2 can properly interpret the string. This routine does not adjust the strings with quotes, and depends on the variable substitution routine to do that.
###Code
def splitargs(arguments):
import types
    # Strip the string and remove the ( and ) characters if they are at the beginning and end of the string
results = []
step1 = arguments.strip()
if (len(step1) == 0): return(results) # Not much to do here - no args found
if (step1[0] == '('):
if (step1[-1:] == ')'):
step2 = step1[1:-1]
step2 = step2.strip()
else:
step2 = step1
else:
step2 = step1
# Now we have a string without brackets. Start scanning for commas
quoteCH = ""
pos = 0
arg = ""
args = []
while pos < len(step2):
ch = step2[pos]
if (quoteCH == ""): # Are we in a quote?
if (ch in ('"',"'")): # Check to see if we are starting a quote
quoteCH = ch
arg = arg + ch
pos += 1
elif (ch == ","): # Are we at the end of a parameter?
arg = arg.strip()
args.append(arg)
arg = ""
inarg = False
pos += 1
else: # Continue collecting the string
arg = arg + ch
pos += 1
else:
if (ch == quoteCH): # Are we at the end of a quote?
arg = arg + ch # Add the quote to the string
pos += 1 # Increment past the quote
quoteCH = "" # Stop quote checking (maybe!)
else:
pos += 1
arg = arg + ch
if (quoteCH != ""): # So we didn't end our string
arg = arg.strip()
args.append(arg)
elif (arg != ""): # Something left over as an argument
arg = arg.strip()
args.append(arg)
else:
pass
results = []
for arg in args:
result = []
if (len(arg) > 0):
if (arg[0] in ('"',"'")):
value = arg[1:-1]
isString = True
isNumber = False
else:
isString = False
isNumber = False
try:
value = eval(arg)
if (type(value) == int):
isNumber = True
elif (isinstance(value,float) == True):
isNumber = True
else:
value = arg
except:
value = arg
else:
value = ""
isString = False
isNumber = False
result = [value,isString,isNumber]
results.append(result)
return results
###Output
_____no_output_____
###Markdown
SQL ParserThe calling format of this routine is:```sql_cmd, encoded_sql = sqlParser(sql_input, local_ns)```This code will look at the SQL string that has been passed to it and parse it into two values:- sql_cmd: First command in the string (so this may not be the actual SQL command)- encoded_sql: the SQL with any `:variable` references replaced by the contents of those variables from the local namespace
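A small illustrative sketch of the variable substitution (assuming the helper routines in this notebook have already been defined):
```python
# Illustrative only: a local variable referenced as :empno is substituted into the SQL text.
empno = '000010'
cmd, encoded = sqlParser("SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO = :empno", locals())
# cmd     -> 'SELECT'
# encoded -> "SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO = '000010'"
```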
###Code
def sqlParser(sqlin,local_ns):
sql_cmd = ""
encoded_sql = sqlin
    firstCommand = r"(?:^\s*)([a-zA-Z]+)(?:\s+.*|$)"
findFirst = re.match(firstCommand,sqlin)
if (findFirst == None): # We did not find a match so we just return the empty string
return sql_cmd, encoded_sql
cmd = findFirst.group(1)
sql_cmd = cmd.upper()
#
# Scan the input string looking for variables in the format :var. If no : is found just return.
# Var must be alpha+number+_ to be valid
#
if (':' not in sqlin): # A quick check to see if parameters are in here, but not fool-proof!
return sql_cmd, encoded_sql
inVar = False
inQuote = ""
varName = ""
encoded_sql = ""
STRING = 0
NUMBER = 1
LIST = 2
RAW = 3
for ch in sqlin:
if (inVar == True): # We are collecting the name of a variable
if (ch.upper() in "@_ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789[]"):
varName = varName + ch
continue
else:
if (varName == ""):
                    encoded_sql = encoded_sql + ":"
elif (varName[0] in ('[',']')):
encoded_sql = encoded_sql + ":" + varName
else:
if (ch == '.'): # If the variable name is stopped by a period, assume no quotes are used
flag_quotes = False
else:
flag_quotes = True
varValue, varType = getContents(varName,flag_quotes,local_ns)
if (varValue == None):
encoded_sql = encoded_sql + ":" + varName
else:
if (varType == STRING):
encoded_sql = encoded_sql + varValue
elif (varType == NUMBER):
encoded_sql = encoded_sql + str(varValue)
elif (varType == RAW):
encoded_sql = encoded_sql + varValue
elif (varType == LIST):
start = True
for v in varValue:
if (start == False):
encoded_sql = encoded_sql + ","
if (isinstance(v,int) == True): # Integer value
encoded_sql = encoded_sql + str(v)
elif (isinstance(v,float) == True):
encoded_sql = encoded_sql + str(v)
else:
flag_quotes = True
try:
if (v.find('0x') == 0): # Just guessing this is a hex value at beginning
encoded_sql = encoded_sql + v
else:
encoded_sql = encoded_sql + addquotes(v,flag_quotes) # String
except:
encoded_sql = encoded_sql + addquotes(str(v),flag_quotes)
start = False
encoded_sql = encoded_sql + ch
varName = ""
inVar = False
elif (inQuote != ""):
encoded_sql = encoded_sql + ch
if (ch == inQuote): inQuote = ""
elif (ch in ("'",'"')):
encoded_sql = encoded_sql + ch
inQuote = ch
elif (ch == ":"): # This might be a variable
varName = ""
inVar = True
else:
encoded_sql = encoded_sql + ch
if (inVar == True):
varValue, varType = getContents(varName,True,local_ns) # We assume the end of a line is quoted
if (varValue == None):
encoded_sql = encoded_sql + ":" + varName
else:
if (varType == STRING):
encoded_sql = encoded_sql + varValue
elif (varType == NUMBER):
encoded_sql = encoded_sql + str(varValue)
elif (varType == LIST):
flag_quotes = True
start = True
for v in varValue:
if (start == False):
encoded_sql = encoded_sql + ","
if (isinstance(v,int) == True): # Integer value
encoded_sql = encoded_sql + str(v)
elif (isinstance(v,float) == True):
encoded_sql = encoded_sql + str(v)
else:
try:
if (v.find('0x') == 0): # Just guessing this is a hex value
encoded_sql = encoded_sql + v
else:
encoded_sql = encoded_sql + addquotes(v,flag_quotes) # String
except:
encoded_sql = encoded_sql + addquotes(str(v),flag_quotes)
start = False
return sql_cmd, encoded_sql
###Output
_____no_output_____
###Markdown
Variable Contents FunctionThe calling format of this routine is:```value, var_type = getContents(varName, flag_quotes, name_space)```This code will take the name of a variable as input and return the contents of that variable along with a code describing its type (string, number, list, or raw). If the variable is not found then the program will return None, which is the equivalent to empty or null. Note that this function looks at the variable pool for Python so it is possible that the wrong version of a variable is returned if the same name is used in different functions. For this reason, any variables used in SQL statements should use a unique naming convention if possible.The other thing that this function does is replace single quotes with two quotes (via the addquotes routine). The reason for doing this is that Db2 will convert two single quotes into one quote when dealing with strings. This avoids problems when dealing with text that contains multiple quotes within the string. Note that this substitution is done only for single quote characters since the double quote character is used by Db2 for naming columns that are case sensitive or contain special characters.If the flag_quotes value is True, the field will have quotes around it. The name_space contains the variables currently registered in Python.
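An illustrative sketch of the value/type pairs that are returned (the type codes shown are the constants defined inside the function):
```python
# Illustrative only: strings come back quoted for Db2, numbers come back unchanged.
dept = 'A00'
salary = 50000
print(getContents("dept", True, locals()))    # ("'A00'", 0)  where 0 = STRING
print(getContents("salary", True, locals()))  # (50000, 1)    where 1 = NUMBER
```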
###Code
def getContents(varName,flag_quotes,local_ns):
#
# Get the contents of the variable name that is passed to the routine. Only simple
# variables are checked, i.e. arrays and lists are not parsed
#
STRING = 0
NUMBER = 1
LIST = 2
RAW = 3
DICT = 4
try:
value = eval(varName,None,local_ns) # globals()[varName] # eval(varName)
except:
return(None,STRING)
if (isinstance(value,dict) == True): # Check to see if this is JSON dictionary
return(addquotes(value,flag_quotes),STRING)
elif(isinstance(value,list) == True): # List - tricky
return(value,LIST)
elif (isinstance(value,int) == True): # Integer value
return(value,NUMBER)
elif (isinstance(value,float) == True): # Float value
return(value,NUMBER)
else:
try:
# The pattern needs to be in the first position (0 in Python terms)
if (value.find('0x') == 0): # Just guessing this is a hex value
return(value,RAW)
else:
return(addquotes(value,flag_quotes),STRING) # String
except:
return(addquotes(str(value),flag_quotes),RAW)
###Output
_____no_output_____
###Markdown
Add QuotesQuotes are a challenge when dealing with dictionaries and Db2. Db2 wants strings delimited with single quotes, while dictionaries use double quotes. That wouldn't be a problem except that embedded single quotes within these dictionaries will cause things to fail. This routine doubles up the single quotes within the dictionary so that Db2 interprets them correctly.
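For example (illustrative only):
```python
# Single quotes are doubled so Db2 interprets the string correctly.
print(addquotes("O'Connor", True))            # 'O''Connor'
# A dictionary is serialized to JSON first, then wrapped in single quotes.
print(addquotes({"name": "O'Connor"}, True))  # '{"name": "O''Connor"}'
# With flag_quotes=False the value is returned as-is (no quoting or doubling).
print(addquotes("O'Connor", False))           # O'Connor
```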
###Code
def addquotes(inString,flag_quotes):
if (isinstance(inString,dict) == True): # Check to see if this is JSON dictionary
serialized = json.dumps(inString)
else:
serialized = inString
# Replace single quotes with '' (two quotes) and wrap everything in single quotes
if (flag_quotes == False):
return(serialized)
else:
return("'"+serialized.replace("'","''")+"'") # Convert single quotes to two single quotes
###Output
_____no_output_____
###Markdown
Create the SAMPLE Database TablesThe calling format of this routine is:```db2_create_sample(quiet)```There are a lot of examples that depend on the data within the SAMPLE database. If you are running these examples and the connection is not to the SAMPLE database, then this code will create the two (EMPLOYEE, DEPARTMENT) tables that are used by most examples. If the function finds that these tables already exist, then nothing is done. If the tables are missing then they will be created with the same data as in the SAMPLE database.The quiet flag tells the program not to print any messages when the creation of the tables is complete.
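For example, after a connection has been made (and with the `%sql` magic loaded), the tables could be created as follows (illustrative only):
```python
# Create the EMPLOYEE and DEPARTMENT tables if they do not already exist.
# Passing True suppresses the success message; False prints it.
db2_create_sample(False)
# Equivalent shortcut once the magic is registered:
# %sql -sampledata
```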
###Code
def db2_create_sample(quiet):
create_department = """
BEGIN
DECLARE FOUND INTEGER;
SET FOUND = (SELECT COUNT(*) FROM SYSIBM.SYSTABLES WHERE NAME='DEPARTMENT' AND CREATOR=CURRENT USER);
IF FOUND = 0 THEN
EXECUTE IMMEDIATE('CREATE TABLE DEPARTMENT(DEPTNO CHAR(3) NOT NULL, DEPTNAME VARCHAR(36) NOT NULL,
MGRNO CHAR(6),ADMRDEPT CHAR(3) NOT NULL)');
EXECUTE IMMEDIATE('INSERT INTO DEPARTMENT VALUES
(''A00'',''SPIFFY COMPUTER SERVICE DIV.'',''000010'',''A00''),
(''B01'',''PLANNING'',''000020'',''A00''),
(''C01'',''INFORMATION CENTER'',''000030'',''A00''),
(''D01'',''DEVELOPMENT CENTER'',NULL,''A00''),
(''D11'',''MANUFACTURING SYSTEMS'',''000060'',''D01''),
(''D21'',''ADMINISTRATION SYSTEMS'',''000070'',''D01''),
(''E01'',''SUPPORT SERVICES'',''000050'',''A00''),
(''E11'',''OPERATIONS'',''000090'',''E01''),
(''E21'',''SOFTWARE SUPPORT'',''000100'',''E01''),
(''F22'',''BRANCH OFFICE F2'',NULL,''E01''),
(''G22'',''BRANCH OFFICE G2'',NULL,''E01''),
(''H22'',''BRANCH OFFICE H2'',NULL,''E01''),
(''I22'',''BRANCH OFFICE I2'',NULL,''E01''),
(''J22'',''BRANCH OFFICE J2'',NULL,''E01'')');
END IF;
END"""
%sql -d -q {create_department}
create_employee = """
BEGIN
DECLARE FOUND INTEGER;
SET FOUND = (SELECT COUNT(*) FROM SYSIBM.SYSTABLES WHERE NAME='EMPLOYEE' AND CREATOR=CURRENT USER);
IF FOUND = 0 THEN
EXECUTE IMMEDIATE('CREATE TABLE EMPLOYEE(
EMPNO CHAR(6) NOT NULL,
FIRSTNME VARCHAR(12) NOT NULL,
MIDINIT CHAR(1),
LASTNAME VARCHAR(15) NOT NULL,
WORKDEPT CHAR(3),
PHONENO CHAR(4),
HIREDATE DATE,
JOB CHAR(8),
EDLEVEL SMALLINT NOT NULL,
SEX CHAR(1),
BIRTHDATE DATE,
SALARY DECIMAL(9,2),
BONUS DECIMAL(9,2),
COMM DECIMAL(9,2)
)');
EXECUTE IMMEDIATE('INSERT INTO EMPLOYEE VALUES
(''000010'',''CHRISTINE'',''I'',''HAAS'' ,''A00'',''3978'',''1995-01-01'',''PRES '',18,''F'',''1963-08-24'',152750.00,1000.00,4220.00),
(''000020'',''MICHAEL'' ,''L'',''THOMPSON'' ,''B01'',''3476'',''2003-10-10'',''MANAGER '',18,''M'',''1978-02-02'',94250.00,800.00,3300.00),
(''000030'',''SALLY'' ,''A'',''KWAN'' ,''C01'',''4738'',''2005-04-05'',''MANAGER '',20,''F'',''1971-05-11'',98250.00,800.00,3060.00),
(''000050'',''JOHN'' ,''B'',''GEYER'' ,''E01'',''6789'',''1979-08-17'',''MANAGER '',16,''M'',''1955-09-15'',80175.00,800.00,3214.00),
(''000060'',''IRVING'' ,''F'',''STERN'' ,''D11'',''6423'',''2003-09-14'',''MANAGER '',16,''M'',''1975-07-07'',72250.00,500.00,2580.00),
(''000070'',''EVA'' ,''D'',''PULASKI'' ,''D21'',''7831'',''2005-09-30'',''MANAGER '',16,''F'',''2003-05-26'',96170.00,700.00,2893.00),
(''000090'',''EILEEN'' ,''W'',''HENDERSON'' ,''E11'',''5498'',''2000-08-15'',''MANAGER '',16,''F'',''1971-05-15'',89750.00,600.00,2380.00),
(''000100'',''THEODORE'' ,''Q'',''SPENSER'' ,''E21'',''0972'',''2000-06-19'',''MANAGER '',14,''M'',''1980-12-18'',86150.00,500.00,2092.00),
(''000110'',''VINCENZO'' ,''G'',''LUCCHESSI'' ,''A00'',''3490'',''1988-05-16'',''SALESREP'',19,''M'',''1959-11-05'',66500.00,900.00,3720.00),
(''000120'',''SEAN'' ,'' '',''O`CONNELL'' ,''A00'',''2167'',''1993-12-05'',''CLERK '',14,''M'',''1972-10-18'',49250.00,600.00,2340.00),
(''000130'',''DELORES'' ,''M'',''QUINTANA'' ,''C01'',''4578'',''2001-07-28'',''ANALYST '',16,''F'',''1955-09-15'',73800.00,500.00,1904.00),
(''000140'',''HEATHER'' ,''A'',''NICHOLLS'' ,''C01'',''1793'',''2006-12-15'',''ANALYST '',18,''F'',''1976-01-19'',68420.00,600.00,2274.00),
(''000150'',''BRUCE'' ,'' '',''ADAMSON'' ,''D11'',''4510'',''2002-02-12'',''DESIGNER'',16,''M'',''1977-05-17'',55280.00,500.00,2022.00),
(''000160'',''ELIZABETH'',''R'',''PIANKA'' ,''D11'',''3782'',''2006-10-11'',''DESIGNER'',17,''F'',''1980-04-12'',62250.00,400.00,1780.00),
(''000170'',''MASATOSHI'',''J'',''YOSHIMURA'' ,''D11'',''2890'',''1999-09-15'',''DESIGNER'',16,''M'',''1981-01-05'',44680.00,500.00,1974.00),
(''000180'',''MARILYN'' ,''S'',''SCOUTTEN'' ,''D11'',''1682'',''2003-07-07'',''DESIGNER'',17,''F'',''1979-02-21'',51340.00,500.00,1707.00),
(''000190'',''JAMES'' ,''H'',''WALKER'' ,''D11'',''2986'',''2004-07-26'',''DESIGNER'',16,''M'',''1982-06-25'',50450.00,400.00,1636.00),
(''000200'',''DAVID'' ,'' '',''BROWN'' ,''D11'',''4501'',''2002-03-03'',''DESIGNER'',16,''M'',''1971-05-29'',57740.00,600.00,2217.00),
(''000210'',''WILLIAM'' ,''T'',''JONES'' ,''D11'',''0942'',''1998-04-11'',''DESIGNER'',17,''M'',''2003-02-23'',68270.00,400.00,1462.00),
(''000220'',''JENNIFER'' ,''K'',''LUTZ'' ,''D11'',''0672'',''1998-08-29'',''DESIGNER'',18,''F'',''1978-03-19'',49840.00,600.00,2387.00),
(''000230'',''JAMES'' ,''J'',''JEFFERSON'' ,''D21'',''2094'',''1996-11-21'',''CLERK '',14,''M'',''1980-05-30'',42180.00,400.00,1774.00),
(''000240'',''SALVATORE'',''M'',''MARINO'' ,''D21'',''3780'',''2004-12-05'',''CLERK '',17,''M'',''2002-03-31'',48760.00,600.00,2301.00),
(''000250'',''DANIEL'' ,''S'',''SMITH'' ,''D21'',''0961'',''1999-10-30'',''CLERK '',15,''M'',''1969-11-12'',49180.00,400.00,1534.00),
(''000260'',''SYBIL'' ,''P'',''JOHNSON'' ,''D21'',''8953'',''2005-09-11'',''CLERK '',16,''F'',''1976-10-05'',47250.00,300.00,1380.00),
(''000270'',''MARIA'' ,''L'',''PEREZ'' ,''D21'',''9001'',''2006-09-30'',''CLERK '',15,''F'',''2003-05-26'',37380.00,500.00,2190.00),
(''000280'',''ETHEL'' ,''R'',''SCHNEIDER'' ,''E11'',''8997'',''1997-03-24'',''OPERATOR'',17,''F'',''1976-03-28'',36250.00,500.00,2100.00),
(''000290'',''JOHN'' ,''R'',''PARKER'' ,''E11'',''4502'',''2006-05-30'',''OPERATOR'',12,''M'',''1985-07-09'',35340.00,300.00,1227.00),
(''000300'',''PHILIP'' ,''X'',''SMITH'' ,''E11'',''2095'',''2002-06-19'',''OPERATOR'',14,''M'',''1976-10-27'',37750.00,400.00,1420.00),
(''000310'',''MAUDE'' ,''F'',''SETRIGHT'' ,''E11'',''3332'',''1994-09-12'',''OPERATOR'',12,''F'',''1961-04-21'',35900.00,300.00,1272.00),
(''000320'',''RAMLAL'' ,''V'',''MEHTA'' ,''E21'',''9990'',''1995-07-07'',''FIELDREP'',16,''M'',''1962-08-11'',39950.00,400.00,1596.00),
(''000330'',''WING'' ,'' '',''LEE'' ,''E21'',''2103'',''2006-02-23'',''FIELDREP'',14,''M'',''1971-07-18'',45370.00,500.00,2030.00),
(''000340'',''JASON'' ,''R'',''GOUNOT'' ,''E21'',''5698'',''1977-05-05'',''FIELDREP'',16,''M'',''1956-05-17'',43840.00,500.00,1907.00),
(''200010'',''DIAN'' ,''J'',''HEMMINGER'' ,''A00'',''3978'',''1995-01-01'',''SALESREP'',18,''F'',''1973-08-14'',46500.00,1000.00,4220.00),
(''200120'',''GREG'' ,'' '',''ORLANDO'' ,''A00'',''2167'',''2002-05-05'',''CLERK '',14,''M'',''1972-10-18'',39250.00,600.00,2340.00),
(''200140'',''KIM'' ,''N'',''NATZ'' ,''C01'',''1793'',''2006-12-15'',''ANALYST '',18,''F'',''1976-01-19'',68420.00,600.00,2274.00),
(''200170'',''KIYOSHI'' ,'' '',''YAMAMOTO'' ,''D11'',''2890'',''2005-09-15'',''DESIGNER'',16,''M'',''1981-01-05'',64680.00,500.00,1974.00),
(''200220'',''REBA'' ,''K'',''JOHN'' ,''D11'',''0672'',''2005-08-29'',''DESIGNER'',18,''F'',''1978-03-19'',69840.00,600.00,2387.00),
(''200240'',''ROBERT'' ,''M'',''MONTEVERDE'',''D21'',''3780'',''2004-12-05'',''CLERK '',17,''M'',''1984-03-31'',37760.00,600.00,2301.00),
(''200280'',''EILEEN'' ,''R'',''SCHWARTZ'' ,''E11'',''8997'',''1997-03-24'',''OPERATOR'',17,''F'',''1966-03-28'',46250.00,500.00,2100.00),
(''200310'',''MICHELLE'' ,''F'',''SPRINGER'' ,''E11'',''3332'',''1994-09-12'',''OPERATOR'',12,''F'',''1961-04-21'',35900.00,300.00,1272.00),
(''200330'',''HELENA'' ,'' '',''WONG'' ,''E21'',''2103'',''2006-02-23'',''FIELDREP'',14,''F'',''1971-07-18'',35370.00,500.00,2030.00),
(''200340'',''ROY'' ,''R'',''ALONZO'' ,''E21'',''5698'',''1997-07-05'',''FIELDREP'',16,''M'',''1956-05-17'',31840.00,500.00,1907.00)');
END IF;
END"""
%sql -d -q {create_employee}
if (quiet == False): success("Sample tables [EMPLOYEE, DEPARTMENT] created.")
###Output
_____no_output_____
###Markdown
Check optionThis function will return the original string with the option removed, and a flag of True or False indicating whether the option was found.```args, flag = checkOption(option_string, option, false_value, true_value)```Options are specified with a -x where x is the character that we are searching for. It may actually be more than one character long like -pb/-pi/etc... The false and true values are optional. By default these are the boolean values of T/F but for some options it could be a character string like ';' versus '@' for delimiters.
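A short illustrative sketch of the behaviour:
```python
# The option is removed from the string and a True/False flag is returned.
args, found = checkOption("-q SELECT * FROM EMPLOYEE", "-q")
# args  -> 'SELECT * FROM EMPLOYEE', found -> True
# Custom false/true values can be supplied, e.g. to pick a statement delimiter:
args, delim = checkOption("-d SELECT 1 FROM SYSIBM.SYSDUMMY1", "-d", ';', '@')
# delim -> '@'
```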
###Code
def checkOption(args_in, option, vFalse=False, vTrue=True):
args_out = args_in.strip()
found = vFalse
if (args_out != ""):
if (args_out.find(option) >= 0):
args_out = args_out.replace(option," ")
args_out = args_out.strip()
found = vTrue
return args_out, found
###Output
_____no_output_____
###Markdown
Plot DataThis function will plot the data that is returned from the answer set. The type of chart is determined by the flag supplied on the %sql command: `-pb/-bar` for a bar chart, `-pp/-pie` for a pie chart, and `-pl/-line` for a line chart.```plotData(hdbi, sql)```The hdbi is the ibm_db_sa handle that is used by pandas dataframes to run the sql.
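Within a notebook the chart type is selected by the flag on the `%sql` command, for example (illustrative only, assuming a connection and the sample tables):
```
%sql -pb SELECT WORKDEPT, COUNT(*) FROM EMPLOYEE GROUP BY WORKDEPT
```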
###Code
def plotData(hdbi, sql):
try:
df = pandas.read_sql(sql,hdbi)
except Exception as err:
db2_error(False)
return
if df.empty:
errormsg("No results returned")
return
col_count = len(df.columns)
if flag(["-pb","-bar"]): # Plot 1 = bar chart
if (col_count in (1,2,3)):
if (col_count == 1):
df.index = df.index + 1
_ = df.plot(kind='bar');
_ = plt.plot();
elif (col_count == 2):
xlabel = df.columns.values[0]
ylabel = df.columns.values[1]
df.plot(kind='bar',x=xlabel,y=ylabel);
_ = plt.plot();
else:
values = df.columns.values[2]
columns = df.columns.values[0]
index = df.columns.values[1]
pivoted = pandas.pivot_table(df, values=values, columns=columns, index=index)
_ = pivoted.plot.bar();
else:
errormsg("Can't determine what columns to plot")
return
elif flag(["-pp","-pie"]): # Plot 2 = pie chart
if (col_count in (1,2)):
if (col_count == 1):
df.index = df.index + 1
yname = df.columns.values[0]
_ = df.plot(kind='pie',y=yname);
else:
xlabel = df.columns.values[0]
xname = df[xlabel].tolist()
yname = df.columns.values[1]
_ = df.plot(kind='pie',y=yname,labels=xname);
plt.show();
else:
errormsg("Can't determine what columns to plot")
return
elif flag(["-pl","-line"]): # Plot 3 = line chart
if (col_count in (1,2,3)):
if (col_count == 1):
df.index = df.index + 1
_ = df.plot(kind='line');
elif (col_count == 2):
xlabel = df.columns.values[0]
ylabel = df.columns.values[1]
_ = df.plot(kind='line',x=xlabel,y=ylabel) ;
else:
values = df.columns.values[2]
columns = df.columns.values[0]
index = df.columns.values[1]
pivoted = pandas.pivot_table(df, values=values, columns=columns, index=index)
_ = pivoted.plot();
plt.show();
else:
errormsg("Can't determine what columns to plot")
return
else:
return
###Output
_____no_output_____
###Markdown
Find a ProcedureThis routine will check to see if a procedure exists with the SCHEMA/NAME (or just NAME if no schema is supplied) and returns the number of answer sets returned. Possible values are 0, 1 (or greater) or None. If None is returned then we can't find the procedure anywhere.
###Code
def findProc(procname):
global _hdbc, _hdbi, _connected, _runtime
# Split the procedure name into schema.procname if appropriate
upper_procname = procname.upper()
schema, proc = split_string(upper_procname,".") # Expect schema.procname
if (proc == None):
proc = schema
# Call ibm_db.procedures to see if the procedure does exist
schema = "%"
try:
stmt = ibm_db.procedures(_hdbc, None, schema, proc)
if (stmt == False): # Error executing the code
errormsg("Procedure " + procname + " not found in the system catalog.")
return None
result = ibm_db.fetch_tuple(stmt)
resultsets = result[5]
if (resultsets >= 1): resultsets = 1
return resultsets
except Exception as err:
errormsg("Procedure " + procname + " not found in the system catalog.")
return None
###Output
_____no_output_____
###Markdown
Parse Call ArgumentsThis code will parse a SQL call name(parm1,...) and return the name and the parameters in the call.
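For example (illustrative only):
```python
# The procedure name and its arguments are returned separately; quoted values have
# their quotes removed and :host variables are kept as-is for later substitution.
name, parms = parseCallArgs("PROC1(123,'abc',:empno)")
# name  -> 'PROC1'
# parms -> ['123', 'abc', ':empno']
```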
###Code
def parseCallArgs(macro):
quoteChar = ""
inQuote = False
inParm = False
ignore = False
name = ""
parms = []
parm = ''
sqlin = macro.replace("\n","")
    sqlin = sqlin.lstrip()
for ch in sqlin:
if (inParm == False):
# We hit a blank in the name, so ignore everything after the procedure name until a ( is found
if (ch == " "):
                ignore = True
elif (ch == "("): # Now we have parameters to send to the stored procedure
inParm = True
else:
if (ignore == False): name = name + ch # The name of the procedure (and no blanks)
else:
if (inQuote == True):
if (ch == quoteChar):
inQuote = False
else:
parm = parm + ch
elif (ch in ("\"","\'","[")): # Do we have a quote
if (ch == "["):
quoteChar = "]"
else:
quoteChar = ch
inQuote = True
elif (ch == ")"):
if (parm != ""):
parms.append(parm)
parm = ""
break
elif (ch == ","):
if (parm != ""):
parms.append(parm)
else:
parms.append("null")
parm = ""
else:
parm = parm + ch
if (inParm == True):
if (parm != ""):
            parms.append(parm)
return(name,parms)
###Output
_____no_output_____
###Markdown
Get ColumnsGiven a statement handle, determine what the column names and the data types are.
###Code
def getColumns(stmt):
columns = []
types = []
colcount = 0
try:
colname = ibm_db.field_name(stmt,colcount)
coltype = ibm_db.field_type(stmt,colcount)
while (colname != False):
columns.append(colname)
types.append(coltype)
colcount += 1
colname = ibm_db.field_name(stmt,colcount)
coltype = ibm_db.field_type(stmt,colcount)
return columns,types
except Exception as err:
db2_error(False)
return None
###Output
_____no_output_____
###Markdown
Call a ProcedureThe CALL statement is used for execution of a stored procedure. The format of the CALL statement is:```CALL PROC_NAME(x,y,z,...)```Procedures allow for the return of answer sets (cursors) as well as changing the contents of the parameters being passed to the procedure. In this implementation, the CALL function is limited to returning one answer set (or nothing). If you want to use more complex stored procedures then you will have to use the native python libraries.
###Code
def parseCall(hdbc, inSQL, local_ns):
global _hdbc, _hdbi, _connected, _runtime, _environment
# Check to see if we are connected first
if (_connected == False): # Check if you are connected
db2_doConnect()
if _connected == False: return None
remainder = inSQL.strip()
procName, procArgs = parseCallArgs(remainder[5:]) # Assume that CALL ... is the format
resultsets = findProc(procName)
if (resultsets == None): return None
argvalues = []
if (len(procArgs) > 0): # We have arguments to consider
for arg in procArgs:
varname = arg
if (len(varname) > 0):
if (varname[0] == ":"):
checkvar = varname[1:]
varvalue = getContents(checkvar,True,local_ns)
if (varvalue == None):
errormsg("Variable " + checkvar + " is not defined.")
return None
argvalues.append(varvalue)
else:
if (varname.upper() == "NULL"):
argvalues.append(None)
else:
argvalues.append(varname)
else:
argvalues.append(None)
try:
if (len(procArgs) > 0):
argtuple = tuple(argvalues)
result = ibm_db.callproc(_hdbc,procName,argtuple)
stmt = result[0]
else:
result = ibm_db.callproc(_hdbc,procName)
stmt = result
if (resultsets != 0 and stmt != None):
columns, types = getColumns(stmt)
if (columns == None): return None
rows = []
try:
rowlist = ibm_db.fetch_tuple(stmt)
except:
rowlist = None
while ( rowlist ) :
row = []
colcount = 0
for col in rowlist:
try:
if (types[colcount] in ["int","bigint"]):
row.append(int(col))
elif (types[colcount] in ["decimal","real"]):
row.append(float(col))
elif (types[colcount] in ["date","time","timestamp"]):
row.append(str(col))
else:
row.append(col)
except:
row.append(col)
colcount += 1
rows.append(row)
try:
rowlist = ibm_db.fetch_tuple(stmt)
except:
rowlist = None
if flag(["-r","-array"]):
rows.insert(0,columns)
if len(procArgs) > 0:
allresults = []
allresults.append(rows)
for x in result[1:]:
allresults.append(x)
return allresults # rows,returned_results
else:
return rows
else:
df = pandas.DataFrame.from_records(rows,columns=columns)
if flag("-grid") or _settings['display'] == 'GRID':
if (_environment['qgrid'] == False):
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
pdisplay(df)
else:
try:
pdisplay(qgrid.show_grid(df))
except:
errormsg("Grid cannot be used to display data with duplicate column names. Use option -a or %sql OPTION DISPLAY PANDAS instead.")
return
else:
if flag(["-a","-all"]) or _settings["maxrows"] == -1 : # All of the rows
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
pdisplay(df)
else:
return df
else:
if len(procArgs) > 0:
allresults = []
for x in result[1:]:
allresults.append(x)
return allresults # rows,returned_results
else:
return None
except Exception as err:
db2_error(False)
return None
###Output
_____no_output_____
###Markdown
Parse Prepare/ExecuteThe PREPARE statement is used for repeated execution of a SQL statement. The PREPARE statement has the format:```stmt = PREPARE SELECT EMPNO FROM EMPLOYEE WHERE WORKDEPT=? AND SALARY<?```The SQL statement that you want executed is placed after the PREPARE statement with the location of variables marked with ? (parameter) markers. The variable stmt contains the prepared statement that needs to be passed to the EXECUTE statement. The EXECUTE statement has the format:```EXECUTE :x USING z, y, s ```The first variable (:x) is the name of the variable to which you assigned the results of the prepare statement. The values after the USING clause are substituted into the prepare statement where the ? markers are found. If the values in the USING clause are variable names (z, y, s), a **link** is created to these variables as part of the execute statement. If you use the variable substitution form of variable name (:z, :y, :s), the **contents** of the variable are placed into the USING clause. Normally this would not make much of a difference except when you are dealing with binary strings or JSON strings where the quote characters may cause some problems when substituted into the statement.
###Code
def parsePExec(hdbc, inSQL):
import ibm_db
global _stmt, _stmtID, _stmtSQL, sqlcode
cParms = inSQL.split()
parmCount = len(cParms)
if (parmCount == 0): return(None) # Nothing to do but this shouldn't happen
keyword = cParms[0].upper() # Upper case the keyword
if (keyword == "PREPARE"): # Prepare the following SQL
uSQL = inSQL.upper()
found = uSQL.find("PREPARE")
sql = inSQL[found+7:].strip()
try:
            pattern = r"\?\*[0-9]+"
findparm = re.search(pattern,sql)
while findparm != None:
found = findparm.group(0)
count = int(found[2:])
markers = ('?,' * count)[:-1]
sql = sql.replace(found,markers)
findparm = re.search(pattern,sql)
stmt = ibm_db.prepare(hdbc,sql) # Check error code here
if (stmt == False):
db2_error(False)
return(False)
stmttext = str(stmt).strip()
stmtID = stmttext[33:48].strip()
if (stmtID in _stmtID) == False:
_stmt.append(stmt) # Prepare and return STMT to caller
_stmtID.append(stmtID)
else:
stmtIX = _stmtID.index(stmtID)
                _stmt[stmtIX] = stmt
return(stmtID)
except Exception as err:
print(err)
db2_error(False)
return(False)
if (keyword == "EXECUTE"): # Execute the prepare statement
if (parmCount < 2): return(False) # No stmtID available
stmtID = cParms[1].strip()
if (stmtID in _stmtID) == False:
errormsg("Prepared statement not found or invalid.")
return(False)
stmtIX = _stmtID.index(stmtID)
stmt = _stmt[stmtIX]
try:
if (parmCount == 2): # Only the statement handle available
result = ibm_db.execute(stmt) # Run it
elif (parmCount == 3): # Not quite enough arguments
errormsg("Missing or invalid USING clause on EXECUTE statement.")
sqlcode = -99999
return(False)
else:
using = cParms[2].upper()
if (using != "USING"): # Bad syntax again
errormsg("Missing USING clause on EXECUTE statement.")
sqlcode = -99999
return(False)
uSQL = inSQL.upper()
found = uSQL.find("USING")
parmString = inSQL[found+5:].strip()
parmset = splitargs(parmString)
if (len(parmset) == 0):
errormsg("Missing parameters after the USING clause.")
sqlcode = -99999
return(False)
parms = []
parm_count = 0
CONSTANT = 0
VARIABLE = 1
const = [0]
const_cnt = 0
for v in parmset:
parm_count = parm_count + 1
if (v[1] == True or v[2] == True): # v[1] true if string, v[2] true if num
parm_type = CONSTANT
const_cnt = const_cnt + 1
if (v[2] == True):
if (isinstance(v[0],int) == True): # Integer value
sql_type = ibm_db.SQL_INTEGER
elif (isinstance(v[0],float) == True): # Float value
sql_type = ibm_db.SQL_DOUBLE
else:
sql_type = ibm_db.SQL_INTEGER
else:
sql_type = ibm_db.SQL_CHAR
const.append(v[0])
else:
parm_type = VARIABLE
# See if the variable has a type associated with it varname@type
varset = v[0].split("@")
parm_name = varset[0]
parm_datatype = "char"
# Does the variable exist?
if (parm_name not in globals()):
errormsg("SQL Execute parameter " + parm_name + " not found")
sqlcode = -99999
                            return(False)
if (len(varset) > 1): # Type provided
parm_datatype = varset[1]
if (parm_datatype == "dec" or parm_datatype == "decimal"):
sql_type = ibm_db.SQL_DOUBLE
elif (parm_datatype == "bin" or parm_datatype == "binary"):
sql_type = ibm_db.SQL_BINARY
elif (parm_datatype == "int" or parm_datatype == "integer"):
sql_type = ibm_db.SQL_INTEGER
else:
sql_type = ibm_db.SQL_CHAR
try:
if (parm_type == VARIABLE):
result = ibm_db.bind_param(stmt, parm_count, globals()[parm_name], ibm_db.SQL_PARAM_INPUT, sql_type)
else:
result = ibm_db.bind_param(stmt, parm_count, const[const_cnt], ibm_db.SQL_PARAM_INPUT, sql_type)
except:
result = False
if (result == False):
errormsg("SQL Bind on variable " + parm_name + " failed.")
sqlcode = -99999
                        return(False)
result = ibm_db.execute(stmt) # ,tuple(parms))
if (result == False):
errormsg("SQL Execute failed.")
return(False)
if (ibm_db.num_fields(stmt) == 0): return(True) # Command successfully completed
return(fetchResults(stmt))
except Exception as err:
db2_error(False)
return(False)
return(False)
return(False)
###Output
_____no_output_____
###Markdown
Fetch Result SetThis code will take the stmt handle and then produce a result set of rows as either an array (`-r`,`-array`) or as an array of json records (`-json`).
###Code
def fetchResults(stmt):
global sqlcode
rows = []
columns, types = getColumns(stmt)
# By default we assume that the data will be an array
is_array = True
# Check what type of data we want returned - array or json
if (flag(["-r","-array"]) == False):
# See if we want it in JSON format, if not it remains as an array
if (flag("-json") == True):
is_array = False
# Set column names to lowercase for JSON records
if (is_array == False):
        columns = [col.lower() for col in columns] # Convert to lowercase for ease of access
# First row of an array has the column names in it
if (is_array == True):
rows.append(columns)
result = ibm_db.fetch_tuple(stmt)
rowcount = 0
while (result):
rowcount += 1
if (is_array == True):
row = []
else:
row = {}
colcount = 0
for col in result:
try:
if (types[colcount] in ["int","bigint"]):
if (is_array == True):
row.append(int(col))
else:
row[columns[colcount]] = int(col)
elif (types[colcount] in ["decimal","real"]):
if (is_array == True):
row.append(float(col))
else:
row[columns[colcount]] = float(col)
elif (types[colcount] in ["date","time","timestamp"]):
if (is_array == True):
row.append(str(col))
else:
row[columns[colcount]] = str(col)
else:
if (is_array == True):
row.append(col)
else:
row[columns[colcount]] = col
except:
if (is_array == True):
row.append(col)
else:
row[columns[colcount]] = col
colcount += 1
rows.append(row)
result = ibm_db.fetch_tuple(stmt)
if (rowcount == 0):
sqlcode = 100
else:
sqlcode = 0
return rows
###Output
_____no_output_____
###Markdown
Parse CommitThere are three possible COMMIT verbs that can be used:- COMMIT [WORK] - Commit the work in progress - The WORK keyword is not checked for- ROLLBACK - Roll back the unit of work- AUTOCOMMIT ON/OFF - Are statements committed on or off?The statement is passed to this routine and then checked.
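A typical pattern in a notebook might look like this (illustrative only, assuming a connection and the sample tables):
```
%sql AUTOCOMMIT OFF
%sql UPDATE EMPLOYEE SET SALARY = SALARY * 1.03 WHERE WORKDEPT = 'E21'
%sql ROLLBACK
%sql AUTOCOMMIT ON
```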
###Code
def parseCommit(sql):
global _hdbc, _hdbi, _connected, _runtime, _stmt, _stmtID, _stmtSQL
if (_connected == False): return # Nothing to do if we are not connected
cParms = sql.split()
if (len(cParms) == 0): return # Nothing to do but this shouldn't happen
keyword = cParms[0].upper() # Upper case the keyword
if (keyword == "COMMIT"): # Commit the work that was done
try:
result = ibm_db.commit (_hdbc) # Commit the connection
if (len(cParms) > 1):
keyword = cParms[1].upper()
if (keyword == "HOLD"):
return
del _stmt[:]
del _stmtID[:]
except Exception as err:
db2_error(False)
return
if (keyword == "ROLLBACK"): # Rollback the work that was done
try:
result = ibm_db.rollback(_hdbc) # Rollback the connection
del _stmt[:]
del _stmtID[:]
except Exception as err:
db2_error(False)
return
if (keyword == "AUTOCOMMIT"): # Is autocommit on or off
if (len(cParms) > 1):
op = cParms[1].upper() # Need ON or OFF value
else:
return
try:
if (op == "OFF"):
ibm_db.autocommit(_hdbc, False)
elif (op == "ON"):
ibm_db.autocommit (_hdbc, True)
return
except Exception as err:
db2_error(False)
return
return
###Output
_____no_output_____
###Markdown
Set FlagsThis code will take the input SQL block and update the global flag list. The global flag list is just a list of options that are set at the beginning of a code block. The absence of a flag means it is false. If it exists it is true.
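For example (illustrative only):
```python
# Leading options are collected into the global _flags list and the
# remaining text is returned as the SQL to execute.
remaining = setFlags("-q -d SELECT * FROM EMPLOYEE")
# _flags    -> ['-q', '-d']
# remaining -> 'SELECT * FROM EMPLOYEE'
```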
###Code
def setFlags(inSQL):
global _flags
_flags = [] # Delete all of the current flag settings
pos = 0
end = len(inSQL)-1
inFlag = False
ignore = False
outSQL = ""
flag = ""
while (pos <= end):
ch = inSQL[pos]
if (ignore == True):
outSQL = outSQL + ch
else:
if (inFlag == True):
if (ch != " "):
flag = flag + ch
else:
_flags.append(flag)
inFlag = False
else:
if (ch == "-"):
flag = "-"
inFlag = True
elif (ch == ' '):
outSQL = outSQL + ch
else:
outSQL = outSQL + ch
ignore = True
pos += 1
if (inFlag == True):
_flags.append(flag)
return outSQL
###Output
_____no_output_____
###Markdown
Check to see if flag ExistsThis function determines whether or not a flag exists in the global flag array. Absence of a value means it is false. The parameter can be a single value, or an array of values.
###Code
def flag(inflag):
global _flags
if isinstance(inflag,list):
for x in inflag:
if (x in _flags):
return True
return False
else:
if (inflag in _flags):
return True
else:
return False
###Output
_____no_output_____
###Markdown
Generate a list of SQL lines based on a delimiterNote that this function will make sure that quotes are properly maintained so that delimiters inside of quoted strings do not cause errors.
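For example (illustrative only):
```python
# The semi-colon inside the quoted string is ignored; only the delimiter
# outside of quotes splits the input into separate statements.
stmts = splitSQL("SELECT 'a;b' FROM SYSIBM.SYSDUMMY1; VALUES 2", ";")
# stmts -> ["SELECT 'a;b' FROM SYSIBM.SYSDUMMY1", " VALUES 2"]
```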
###Code
def splitSQL(inputString, delimiter):
pos = 0
arg = ""
results = []
quoteCH = ""
inSQL = inputString.strip()
if (len(inSQL) == 0): return(results) # Not much to do here - no args found
while pos < len(inSQL):
ch = inSQL[pos]
pos += 1
if (ch in ('"',"'")): # Is this a quote characters?
arg = arg + ch # Keep appending the characters to the current arg
if (ch == quoteCH): # Is this quote character we are in
quoteCH = ""
elif (quoteCH == ""): # Create the quote
quoteCH = ch
else:
                pass
elif (quoteCH != ""): # Still in a quote
arg = arg + ch
elif (ch == delimiter): # Is there a delimiter?
results.append(arg)
arg = ""
else:
arg = arg + ch
if (arg != ""):
results.append(arg)
return(results)
###Output
_____no_output_____
###Markdown
Main %sql Magic DefinitionThe main %sql Magic logic is found in this section of code. This code will register the Magic command and allow Jupyter notebooks to interact with Db2 by using this extension.
###Code
@magics_class
class DB2(Magics):
@needs_local_scope
@line_cell_magic
def sql(self, line, cell=None, local_ns=None):
        # Before we even get started, check to see if you have connected yet. Without a connection we
# can't do anything. You may have a connection request in the code, so if that is true, we run those,
# otherwise we connect immediately
# If your statement is not a connect, and you haven't connected, we need to do it for you
global _settings, _environment
global _hdbc, _hdbi, _connected, _runtime, sqlstate, sqlerror, sqlcode, sqlelapsed
# If you use %sql (line) we just run the SQL. If you use %%SQL the entire cell is run.
flag_cell = False
flag_output = False
sqlstate = "0"
sqlerror = ""
sqlcode = 0
sqlelapsed = 0
start_time = time.time()
end_time = time.time()
# Macros gets expanded before anything is done
SQL1 = setFlags(line.strip())
SQL1 = checkMacro(SQL1) # Update the SQL if any macros are in there
SQL2 = cell
if flag("-sampledata"): # Check if you only want sample data loaded
if (_connected == False):
if (db2_doConnect() == False):
errormsg('A CONNECT statement must be issued before issuing SQL statements.')
return
db2_create_sample(flag(["-q","-quiet"]))
return
if SQL1 == "?" or flag(["-h","-help"]): # Are you asking for help
sqlhelp()
return
if len(SQL1) == 0 and SQL2 == None: return # Nothing to do here
# Check for help
if SQL1.upper() == "? CONNECT": # Are you asking for help on CONNECT
connected_help()
return
sqlType,remainder = sqlParser(SQL1,local_ns) # What type of command do you have?
if (sqlType == "CONNECT"): # A connect request
parseConnect(SQL1,local_ns)
return
elif (sqlType == "DEFINE"): # Create a macro from the body
result = setMacro(SQL2,remainder)
return
elif (sqlType == "OPTION"):
setOptions(SQL1)
return
elif (sqlType == 'COMMIT' or sqlType == 'ROLLBACK' or sqlType == 'AUTOCOMMIT'):
parseCommit(remainder)
return
elif (sqlType == "PREPARE"):
pstmt = parsePExec(_hdbc, remainder)
return(pstmt)
elif (sqlType == "EXECUTE"):
result = parsePExec(_hdbc, remainder)
return(result)
elif (sqlType == "CALL"):
result = parseCall(_hdbc, remainder, local_ns)
return(result)
else:
pass
sql = SQL1
if (sql == ""): sql = SQL2
if (sql == ""): return # Nothing to do here
if (_connected == False):
if (db2_doConnect() == False):
errormsg('A CONNECT statement must be issued before issuing SQL statements.')
return
if _settings["maxrows"] == -1: # Set the return result size
pandas.reset_option('display.max_rows')
else:
pandas.options.display.max_rows = _settings["maxrows"]
runSQL = re.sub('.*?--.*$',"",sql,flags=re.M)
remainder = runSQL.replace("\n"," ")
if flag(["-d","-delim"]):
sqlLines = splitSQL(remainder,"@")
else:
sqlLines = splitSQL(remainder,";")
flag_cell = True
# For each line figure out if you run it as a command (db2) or select (sql)
for sqlin in sqlLines: # Run each command
sqlin = checkMacro(sqlin) # Update based on any macros
sqlType, sql = sqlParser(sqlin,local_ns) # Parse the SQL
if (sql.strip() == ""): continue
if flag(["-e","-echo"]): debug(sql,False)
if flag("-t"):
cnt = sqlTimer(_hdbc, _settings["runtime"], sql) # Given the sql and parameters, clock the time
if (cnt >= 0): print("Total iterations in %s second(s): %s" % (_settings["runtime"],cnt))
return(cnt)
elif flag(["-pb","-bar","-pp","-pie","-pl","-line"]): # We are plotting some results
plotData(_hdbi, sql) # Plot the data and return
return
else:
try: # See if we have an answer set
stmt = ibm_db.prepare(_hdbc,sql)
if (ibm_db.num_fields(stmt) == 0): # No, so we just execute the code
result = ibm_db.execute(stmt) # Run it
if (result == False): # Error executing the code
db2_error(flag(["-q","-quiet"]))
continue
rowcount = ibm_db.num_rows(stmt)
if (rowcount == 0 and flag(["-q","-quiet"]) == False):
errormsg("No rows found.")
continue # Continue running
elif flag(["-r","-array","-j","-json"]): # raw, json, format json
row_count = 0
resultSet = []
try:
result = ibm_db.execute(stmt) # Run it
if (result == False): # Error executing the code
db2_error(flag(["-q","-quiet"]))
return
if flag("-j"): # JSON single output
row_count = 0
json_results = []
while( ibm_db.fetch_row(stmt) ):
row_count = row_count + 1
jsonVal = ibm_db.result(stmt,0)
jsonDict = json.loads(jsonVal)
json_results.append(jsonDict)
flag_output = True
if (row_count == 0): sqlcode = 100
return(json_results)
else:
return(fetchResults(stmt))
except Exception as err:
db2_error(flag(["-q","-quiet"]))
return
else:
try:
df = pandas.read_sql(sql,_hdbi)
except Exception as err:
db2_error(False)
return
if (len(df) == 0):
sqlcode = 100
if (flag(["-q","-quiet"]) == False):
errormsg("No rows found")
continue
flag_output = True
if flag("-grid") or _settings['display'] == 'GRID': # Check to see if we can display the results
if (_environment['qgrid'] == False):
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
print(df.to_string())
else:
try:
pdisplay(qgrid.show_grid(df))
except:
errormsg("Grid cannot be used to display data with duplicate column names. Use option -a or %sql OPTION DISPLAY PANDAS instead.")
return
else:
if flag(["-a","-all"]) or _settings["maxrows"] == -1 : # All of the rows
pandas.options.display.max_rows = None
pandas.options.display.max_columns = None
return df # print(df.to_string())
else:
pandas.options.display.max_rows = _settings["maxrows"]
pandas.options.display.max_columns = None
return df # pdisplay(df) # print(df.to_string())
except:
db2_error(flag(["-q","-quiet"]))
continue # return
end_time = time.time()
sqlelapsed = end_time - start_time
if (flag_output == False and flag(["-q","-quiet"]) == False): print("Command completed.")
# Register the Magic extension in Jupyter
ip = get_ipython()
ip.register_magics(DB2)
load_settings()
success("Db2 Extensions Loaded.")
###Output
_____no_output_____
###Markdown
Pre-defined MacrosThese macros are used to simulate the LIST TABLES and DESCRIBE commands that are available from within the Db2 command line.
###Code
%%sql define LIST
#
# The LIST macro is used to list all of the tables in the current schema or for all schemas
#
var syntax Syntax: LIST TABLES [FOR ALL | FOR SCHEMA name]
#
# Only LIST TABLES is supported by this macro
#
if {^1} <> 'TABLES'
exit {syntax}
endif
#
# This SQL is a temporary table that contains the description of the different table types
#
WITH TYPES(TYPE,DESCRIPTION) AS (
VALUES
('A','Alias'),
('G','Created temporary table'),
('H','Hierarchy table'),
('L','Detached table'),
('N','Nickname'),
('S','Materialized query table'),
('T','Table'),
('U','Typed table'),
('V','View'),
('W','Typed view')
)
SELECT TABNAME, TABSCHEMA, T.DESCRIPTION FROM SYSCAT.TABLES S, TYPES T
WHERE T.TYPE = S.TYPE
#
# Case 1: No arguments - LIST TABLES
#
if {argc} == 1
AND OWNER = CURRENT USER
ORDER BY TABNAME, TABSCHEMA
return
endif
#
# Case 2: Need 3 arguments - LIST TABLES FOR ALL
#
if {argc} == 3
if {^2}&{^3} == 'FOR&ALL'
ORDER BY TABNAME, TABSCHEMA
return
endif
exit {syntax}
endif
#
# Case 3: Need FOR SCHEMA something here
#
if {argc} == 4
if {^2}&{^3} == 'FOR&SCHEMA'
AND TABSCHEMA = '{^4}'
ORDER BY TABNAME, TABSCHEMA
return
else
exit {syntax}
endif
endif
#
# Nothing matched - Error
#
exit {syntax}
%%sql define describe
#
# The DESCRIBE command can either use the syntax DESCRIBE TABLE <name> or DESCRIBE TABLE SELECT ...
#
var syntax Syntax: DESCRIBE [TABLE name | SELECT statement]
#
# Check to see what count of variables is... Must be at least 2 items DESCRIBE TABLE x or SELECT x
#
if {argc} < 2
exit {syntax}
endif
CALL ADMIN_CMD('{*0}');
###Output
_____no_output_____
###Markdown
Set the table formatting to left align a table in a cell. By default, tables are centered in a cell. Remove this cell if you don't want to change Jupyter notebook formatting for tables. In addition, we skip this code if you are running in a shell environment rather than a Jupyter notebook
###Code
#%%html
#<style>
# table {margin-left: 0 !important; text-align: left;}
#</style>
###Output
_____no_output_____
###Markdown
DB2 Jupyter Notebook ExtensionsVersion: 2020-07-15 This code is imported as a Jupyter notebook extension in any notebooks you create with DB2 code in it. Place the following line of code in any notebook that you want to use these commands with:%run db2.ipynbThis code defines a Jupyter/Python magic command called `%sql` which allows you to execute DB2 specific calls to the database. There are other packages available for manipulating databases, but this one has been specifically designed for demonstrating a number of the SQL features available in DB2.There are two ways of executing the `%sql` command. A single line SQL statement would use the line format of the magic command:%sql SELECT * FROM EMPLOYEEIf you have a large block of sql then you would place the %%sql command at the beginning of the block and then place the SQL statements into the remainder of the block. Using this form of the `%%sql` statement means that the notebook cell can only contain SQL and no other statements.%%sqlSELECT * FROM EMPLOYEEORDER BY LASTNAMEYou can have multiple lines in the SQL block (`%%sql`). The default SQL delimiter is the semi-colon (`;`).If you have scripts (triggers, procedures, functions) that use the semi-colon as part of the script, you will need to use the `-d` option to change the delimiter to an at "`@`" sign. %%sql -dSELECT * FROM EMPLOYEE@CREATE PROCEDURE ...@The `%sql` command allows most DB2 commands to execute and has a special version of the CONNECT statement. A CONNECT by itself will attempt to reconnect to the database using previously used settings. If it cannot connect, it will prompt the user for additional information. The CONNECT command has the following format:%sql CONNECT TO <database> USER <userid> USING <password | ?> HOST <ip address> PORT <port number>If you use a "?" for the password field, the system will prompt you for a password. This avoids typing the password as clear text on the screen. If a connection is not successful, the system will print the error message associated with the connect request.If the connection is successful, the parameters are saved on your system and will be used the next time you run a SQL statement, or when you issue the %sql CONNECT command with no parameters. In addition to the -d option, there are a number of different options that you can specify at the beginning of the SQL: - `-d, -delim` - Change SQL delimiter to "`@`" from "`;`" - `-q, -quiet` - Quiet results - no messages returned from the function - `-r, -array` - Return the result set as an array of values instead of a dataframe - `-t, -time` - Time the following SQL statement and return the number of times it executes in 1 second - `-j` - Format the first character column of the result set as a JSON record - `-json` - Return result set as an array of json records - `-a, -all` - Return all rows in answer set and do not limit display - `-grid` - Display the results in a scrollable grid - `-pb, -bar` - Plot the results as a bar chart - `-pl, -line` - Plot the results as a line chart - `-pp, -pie` - Plot the results as a pie chart - `-e, -echo` - Any macro expansions are displayed in an output box - `-sampledata` - Create and load the EMPLOYEE and DEPARTMENT tablesYou can pass python variables to the `%sql` command by using the `{}` braces with the name of the variable in between. Note that you will need to place proper punctuation around the variable in the event the SQL command requires it.
For instance, the following example will find employee '000010' in the EMPLOYEE table.empno = '000010'%sql SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO='{empno}'The other option is to use parameter markers. What you would need to do is use the name of the variable with a colon in front of it and the program will prepare the statement and then pass the variable to Db2 when the statement is executed. This allows you to create complex strings that might contain quote characters and other special characters and not have to worry about enclosing the string with the correct quotes. Note that you do not place the quotes around the variable even though it is a string.empno = '000020'%sql SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO=:empno Development SQLThe previous set of `%sql` and `%%sql` commands deals with SQL statements and commands that are run in an interactive manner. There is a class of SQL commands that are more suited to a development environment where code is iterated or requires changing input. The commands that are associated with this form of SQL are:- AUTOCOMMIT- COMMIT/ROLLBACK- PREPARE - EXECUTEAutocommit is the default manner in which SQL statements are executed. At the end of the successful completion of a statement, the results are committed to the database. There is no concept of a transaction where multiple DML/DDL statements are considered one transaction. The `AUTOCOMMIT` command allows you to turn autocommit `OFF` or `ON`. This means that the SQL commands run after the `AUTOCOMMIT OFF` command are not committed to the database until a `COMMIT` or `ROLLBACK` command is issued.`COMMIT` (`WORK`) will finalize all of the transactions (`COMMIT`) to the database and `ROLLBACK` will undo all of the changes. If you issue a `SELECT` statement during the execution of your block, the results will reflect all of your changes. If you `ROLLBACK` the transaction, the changes will be lost.`PREPARE` is typically used in a situation where you want to repeatedly execute a SQL statement with different variables without incurring the SQL compilation overhead. For instance:```x = %sql PREPARE SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO=?for y in ['000010','000020','000030']: %sql execute :x using :y````EXECUTE` is used to execute a previously compiled statement. To retrieve the error codes that might be associated with any SQL call, the following variables are updated after every call:* SQLCODE* SQLSTATE* SQLERROR - Full error message retrieved from Db2 Install Db2 Python DriverIf the ibm_db driver is not installed on your system, the subsequent Db2 commands will fail. In order to install the Db2 driver, issue the following command from a Jupyter notebook cell:```!pip install --user ibm_db``` Db2 Jupyter ExtensionsThis section of code has the import statements and global variables defined for the remainder of the functions.
###Code
#
# Set up Jupyter MAGIC commands "sql".
# %sql will return results from a DB2 select statement or execute a DB2 command
#
# IBM 2019: George Baklarz
# Version 2019-10-03
#
from __future__ import print_function
from IPython.display import HTML as pHTML, Image as pImage, display as pdisplay, Javascript as Javascript
from IPython.core.magic import (Magics, magics_class, line_magic,
cell_magic, line_cell_magic, needs_local_scope)
import ibm_db
import pandas
import ibm_db_dbi
import json
import matplotlib
import matplotlib.pyplot as plt
import getpass
import os
import pickle
import time
import sys
import re
import warnings
warnings.filterwarnings("ignore")
# Python Hack for Input between 2 and 3
try:
input = raw_input
except NameError:
pass
_settings = {
"maxrows" : 10,
"maxgrid" : 5,
"runtime" : 1,
"display" : "PANDAS",
"database" : "",
"hostname" : "localhost",
"port" : "50000",
"protocol" : "TCPIP",
"uid" : "DB2INST1",
"pwd" : "password",
"ssl" : ""
}
_environment = {
"jupyter" : True,
"qgrid" : True
}
_display = {
'fullWidthRows': True,
'syncColumnCellResize': True,
'forceFitColumns': False,
'defaultColumnWidth': 150,
'rowHeight': 28,
'enableColumnReorder': False,
'enableTextSelectionOnCells': True,
'editable': False,
'autoEdit': False,
'explicitInitialization': True,
'maxVisibleRows': 5,
'minVisibleRows': 5,
'sortable': True,
'filterable': False,
'highlightSelectedCell': False,
'highlightSelectedRow': True
}
# Connection settings for statements
_connected = False
_hdbc = None
_hdbi = None
_stmt = []
_stmtID = []
_stmtSQL = []
_vars = {}
_macros = {}
_flags = []
_debug = False
# Db2 Error Messages and Codes
sqlcode = 0
sqlstate = "0"
sqlerror = ""
sqlelapsed = 0
# Check to see if QGrid is installed
try:
import qgrid
qgrid.set_defaults(grid_options=_display)
except:
_environment['qgrid'] = False
# Check if we are running in iPython or Jupyter
try:
if (get_ipython().config == {}):
_environment['jupyter'] = False
_environment['qgrid'] = False
else:
_environment['jupyter'] = True
except:
_environment['jupyter'] = False
_environment['qgrid'] = False
###Output
_____no_output_____
###Markdown
OptionsThere are four options that can be set with the **`%sql`** command. These options are shown below with the default value shown in parenthesis.- **`MAXROWS n (10)`** - The maximum number of rows that will be displayed before summary information is shown. If the answer set is less than this number of rows, it will be completely shown on the screen. If the answer set is larger than this amount, only the first 5 rows and last 5 rows of the answer set will be displayed. If you want to display a very large answer set, you may want to consider using the grid option `-grid` to display the results in a scrollable table. If you really want to show all results then setting MAXROWS to -1 will return all output.- **`MAXGRID n (5)`** - The maximum size of a grid display. When displaying a result set in a grid `-grid`, the default size of the display window is 5 rows. You can set this to a larger size so that more rows are shown on the screen. Note that the minimum size always remains at 5 which means that if the system is unable to display your maximum row size it will reduce the table display until it fits.- **`DISPLAY PANDAS | GRID (PANDAS)`** - Display the results as a PANDAS dataframe (default) or as a scrollable GRID- **`RUNTIME n (1)`** - When using the timer option on a SQL statement, the statement will execute for **`n`** number of seconds. The result that is returned is the number of times the SQL statement executed rather than the execution time of the statement. The default value for runtime is one second, so if the SQL is very complex you will need to increase the run time.- **`LIST`** - Display the current settingsTo set an option use the following syntax:```%sql option option_name value option_name value ....```The following example sets all options:```%sql option maxrows 100 runtime 2 display grid maxgrid 10```The values will **not** be saved between Jupyter notebook sessions. If you need to retrieve the current options values, use the LIST command as the only argument:```%sql option list```
###Code
def setOptions(inSQL):
global _settings, _display
cParms = inSQL.split()
cnt = 0
while cnt < len(cParms):
if cParms[cnt].upper() == 'MAXROWS':
if cnt+1 < len(cParms):
try:
_settings["maxrows"] = int(cParms[cnt+1])
except Exception as err:
errormsg("Invalid MAXROWS value provided.")
pass
cnt = cnt + 1
else:
errormsg("No maximum rows specified for the MAXROWS option.")
return
elif cParms[cnt].upper() == 'MAXGRID':
if cnt+1 < len(cParms):
try:
maxgrid = int(cParms[cnt+1])
                    if (maxgrid <= 5):                       # Minimum window size is 5
                        maxgrid = 5
                    _settings["maxgrid"] = maxgrid
                    _display["maxVisibleRows"] = maxgrid
try:
import qgrid
qgrid.set_defaults(grid_options=_display)
except:
_environment['qgrid'] = False
except Exception as err:
errormsg("Invalid MAXGRID value provided.")
pass
cnt = cnt + 1
else:
errormsg("No maximum rows specified for the MAXROWS option.")
return
elif cParms[cnt].upper() == 'RUNTIME':
if cnt+1 < len(cParms):
try:
_settings["runtime"] = int(cParms[cnt+1])
except Exception as err:
errormsg("Invalid RUNTIME value provided.")
pass
cnt = cnt + 1
else:
errormsg("No value provided for the RUNTIME option.")
return
elif cParms[cnt].upper() == 'DISPLAY':
if cnt+1 < len(cParms):
if (cParms[cnt+1].upper() == 'GRID'):
_settings["display"] = 'GRID'
elif (cParms[cnt+1].upper() == 'PANDAS'):
_settings["display"] = 'PANDAS'
else:
errormsg("Invalid DISPLAY value provided.")
cnt = cnt + 1
else:
errormsg("No value provided for the DISPLAY option.")
return
elif (cParms[cnt].upper() == 'LIST'):
print("(MAXROWS) Maximum number of rows displayed: " + str(_settings["maxrows"]))
print("(MAXGRID) Maximum grid display size: " + str(_settings["maxgrid"]))
print("(RUNTIME) How many seconds to a run a statement for performance testing: " + str(_settings["runtime"]))
print("(DISPLAY) Use PANDAS or GRID display format for output: " + _settings["display"])
return
else:
cnt = cnt + 1
save_settings()
###Output
_____no_output_____
###Markdown
SQL HelpThe calling format of this routine is:```sqlhelp()```This code displays help related to the %sql magic command. This help is displayed when you issue a %sql or %%sql command by itself, or use the %sql -h flag.
###Code
def sqlhelp():
global _environment
if (_environment["jupyter"] == True):
sd = '<td style="text-align:left;">'
ed1 = '</td>'
ed2 = '</td>'
sh = '<th style="text-align:left;">'
eh1 = '</th>'
eh2 = '</th>'
sr = '<tr>'
er = '</tr>'
helpSQL = """
<h3>SQL Options</h3>
<p>The following options are available as part of a SQL statement. The options are always preceded with a
minus sign (i.e. -q).
<table>
{sr}
{sh}Option{eh1}{sh}Description{eh2}
{er}
{sr}
{sd}a, all{ed1}{sd}Return all rows in answer set and do not limit display{ed2}
{er}
{sr}
{sd}d{ed1}{sd}Change SQL delimiter to "@" from ";"{ed2}
{er}
{sr}
        {sd}e, echo{ed1}{sd}Echo the SQL command that was generated after macro and variable substitution.{ed2}
{er}
{sr}
{sd}h, help{ed1}{sd}Display %sql help information.{ed2}
{er}
{sr}
{sd}j{ed1}{sd}Create a pretty JSON representation. Only the first column is formatted{ed2}
{er}
{sr}
{sd}json{ed1}{sd}Retrieve the result set as a JSON record{ed2}
{er}
{sr}
{sd}pb, bar{ed1}{sd}Plot the results as a bar chart{ed2}
{er}
{sr}
{sd}pl, line{ed1}{sd}Plot the results as a line chart{ed2}
{er}
{sr}
{sd}pp, pie{ed1}{sd}Plot Pie: Plot the results as a pie chart{ed2}
{er}
{sr}
{sd}q, quiet{ed1}{sd}Quiet results - no answer set or messages returned from the function{ed2}
{er}
{sr}
{sd}r, array{ed1}{sd}Return the result set as an array of values{ed2}
{er}
{sr}
{sd}sampledata{ed1}{sd}Create and load the EMPLOYEE and DEPARTMENT tables{ed2}
{er}
{sr}
{sd}t,time{ed1}{sd}Time the following SQL statement and return the number of times it executes in 1 second{ed2}
{er}
{sr}
{sd}grid{ed1}{sd}Display the results in a scrollable grid{ed2}
{er}
</table>
"""
else:
helpSQL = """
SQL Options
The following options are available as part of a SQL statement. Options are always
preceded with a minus sign (i.e. -q).
Option Description
a, all Return all rows in answer set and do not limit display
d Change SQL delimiter to "@" from ";"
e, echo Echo the SQL command that was generated after substitution
h, help Display %sql help information
j Create a pretty JSON representation. Only the first column is formatted
json Retrieve the result set as a JSON record
pb, bar Plot the results as a bar chart
pl, line Plot the results as a line chart
pp, pie Plot Pie: Plot the results as a pie chart
q, quiet Quiet results - no answer set or messages returned from the function
r, array Return the result set as an array of values
sampledata Create and load the EMPLOYEE and DEPARTMENT tables
t,time Time the SQL statement and return the execution count per second
grid Display the results in a scrollable grid
"""
helpSQL = helpSQL.format(**locals())
if (_environment["jupyter"] == True):
pdisplay(pHTML(helpSQL))
else:
print(helpSQL)
###Output
_____no_output_____
###Markdown
Connection HelpThe calling format of this routine is:```connected_help()```This code displays help related to the CONNECT command. This help is displayed when you issue a %sql CONNECT command with no arguments, or when you run a SQL statement and there isn't any connection to a database yet.
###Code
def connected_help():
sd = '<td style="text-align:left;">'
ed = '</td>'
sh = '<th style="text-align:left;">'
eh = '</th>'
sr = '<tr>'
er = '</tr>'
if (_environment['jupyter'] == True):
helpConnect = """
<h3>Connecting to Db2</h3>
<p>The CONNECT command has the following format:
<p>
<pre>
%sql CONNECT TO <database> USER <userid> USING <password|?> HOST <ip address> PORT <port number> <SSL>
%sql CONNECT CREDENTIALS <varname>
%sql CONNECT CLOSE
%sql CONNECT RESET
%sql CONNECT PROMPT - use this to be prompted for values
</pre>
<p>
If you use a "?" for the password field, the system will prompt you for a password. This avoids typing the
password as clear text on the screen. If a connection is not successful, the system will print the error
message associated with the connect request.
<p>
The <b>CREDENTIALS</b> option allows you to use credentials that are supplied by Db2 on Cloud instances.
The credentials can be supplied as a variable and if successful, the variable will be saved to disk
for future use. If you create another notebook and use the identical syntax, if the variable
is not defined, the contents on disk will be used as the credentials. You should assign the
credentials to a variable that represents the database (or schema) that you are communicating with.
Using familiar names makes it easier to remember the credentials when connecting.
<p>
<b>CONNECT CLOSE</b> will close the current connection, but will not reset the database parameters. This means that
if you issue the CONNECT command again, the system should be able to reconnect you to the database.
<p>
<b>CONNECT RESET</b> will close the current connection and remove any information on the connection. You will need
to issue a new CONNECT statement with all of the connection information.
<p>
If the connection is successful, the parameters are saved on your system and will be used the next time you
run an SQL statement, or when you issue the %sql CONNECT command with no parameters.
<p>If you issue CONNECT RESET, all of the current values will be deleted and you will need to
issue a new CONNECT statement.
<p>A CONNECT command without any parameters will attempt to re-connect to the previous database you
        were using. If the connection could not be established, the program will prompt you for
the values. To cancel the connection attempt, enter a blank value for any of the values. The connection
panel will request the following values in order to connect to Db2:
<table>
{sr}
{sh}Setting{eh}
{sh}Description{eh}
{er}
{sr}
{sd}Database{ed}{sd}Database name you want to connect to.{ed}
{er}
{sr}
{sd}Hostname{ed}
        {sd}Use localhost if Db2 is running on your own machine, but this can be an IP address or host name.{ed}
{er}
{sr}
{sd}PORT{ed}
{sd}The port to use for connecting to Db2. This is usually 50000.{ed}
{er}
{sr}
{sd}SSL{ed}
{sd}If you are connecting to a secure port (50001) with SSL then you must include this keyword in the connect string.{ed}
        {er}
        {sr}
{sd}Userid{ed}
{sd}The userid to use when connecting (usually DB2INST1){ed}
{er}
{sr}
{sd}Password{ed}
{sd}No password is provided so you have to enter a value{ed}
{er}
</table>
"""
else:
helpConnect = """\
Connecting to Db2
The CONNECT command has the following format:
%sql CONNECT TO database USER userid USING password | ?
HOST ip address PORT port number SSL
%sql CONNECT CREDENTIALS varname
%sql CONNECT CLOSE
%sql CONNECT RESET
If you use a "?" for the password field, the system will prompt you for a password.
This avoids typing the password as clear text on the screen. If a connection is
not successful, the system will print the error message associated with the connect
request.
The CREDENTIALS option allows you to use credentials that are supplied by Db2 on
Cloud instances. The credentials can be supplied as a variable and if successful,
the variable will be saved to disk for future use. If you create another notebook
and use the identical syntax, if the variable is not defined, the contents on disk
will be used as the credentials. You should assign the credentials to a variable
that represents the database (or schema) that you are communicating with. Using
familiar names makes it easier to remember the credentials when connecting.
CONNECT CLOSE will close the current connection, but will not reset the database
parameters. This means that if you issue the CONNECT command again, the system
should be able to reconnect you to the database.
CONNECT RESET will close the current connection and remove any information on the
connection. You will need to issue a new CONNECT statement with all of the connection
information.
If the connection is successful, the parameters are saved on your system and will be
used the next time you run an SQL statement, or when you issue the %sql CONNECT
command with no parameters. If you issue CONNECT RESET, all of the current values
will be deleted and you will need to issue a new CONNECT statement.
A CONNECT command without any parameters will attempt to re-connect to the previous
    database you were using. If the connection could not be established, the program will
prompt you for the values. To cancel the connection attempt, enter a blank value for
any of the values. The connection panel will request the following values in order
to connect to Db2:
Setting Description
Database Database name you want to connect to
Hostname Use localhost if Db2 is running on your own machine, but this can
be an IP address or host name.
PORT The port to use for connecting to Db2. This is usually 50000.
Userid The userid to use when connecting (usually DB2INST1)
Password No password is provided so you have to enter a value
SSL Include this keyword to indicate you are connecting via SSL (usually port 50001)
"""
helpConnect = helpConnect.format(**locals())
if (_environment['jupyter'] == True):
pdisplay(pHTML(helpConnect))
else:
print(helpConnect)
###Output
_____no_output_____
###Markdown
Prompt for Connection InformationIf you are running an SQL statement and have not yet connected to a database, the %sql command will prompt you for connection information. In order to connect to a database, you must supply:- Database name - Host name (IP address or name)- Port number- Userid- Password- Secure socketThe routine is called without any parameters:```connected_prompt()```
###Code
# Prompt for Connection information
def connected_prompt():
global _settings
_database = ''
_hostname = ''
_port = ''
_uid = ''
_pwd = ''
_ssl = ''
print("Enter the database connection details (Any empty value will cancel the connection)")
_database = input("Enter the database name: ");
if (_database.strip() == ""): return False
_hostname = input("Enter the HOST IP address or symbolic name: ");
if (_hostname.strip() == ""): return False
_port = input("Enter the PORT number: ");
if (_port.strip() == ""): return False
_ssl = input("Is this a secure (SSL) port (y or n)");
if (_ssl.strip() == ""): return False
if (_ssl == "n"):
_ssl = ""
else:
_ssl = "Security=SSL;"
_uid = input("Enter Userid on the DB2 system: ").upper();
if (_uid.strip() == ""): return False
_pwd = getpass.getpass("Password [password]: ");
if (_pwd.strip() == ""): return False
_settings["database"] = _database.strip()
_settings["hostname"] = _hostname.strip()
_settings["port"] = _port.strip()
_settings["uid"] = _uid.strip()
_settings["pwd"] = _pwd.strip()
_settings["ssl"] = _ssl.strip()
_settings["maxrows"] = 10
_settings["maxgrid"] = 5
_settings["runtime"] = 1
return True
# Split port and IP addresses
def split_string(in_port,splitter=":"):
# Split input into an IP address and Port number
global _settings
checkports = in_port.split(splitter)
ip = checkports[0]
if (len(checkports) > 1):
port = checkports[1]
else:
port = None
return ip, port
###Output
_____no_output_____
###Markdown
Connect Syntax ParserThe parseConnect routine is used to parse the CONNECT command that the user issued within the %sql command. The format of the command is:```parseConnect(inSQL)```The inSQL string contains the CONNECT keyword with some additional parameters. The format of the CONNECT command is one of:```CONNECT RESETCONNECT CLOSECONNECT CREDENTIALS CONNECT TO database USER userid USING password HOST hostname PORT portnumber ```If you have credentials available from Db2 on Cloud, place the contents of the credentials into a variable and then use the `CONNECT CREDENTIALS ` syntax to connect to the database.In addition, supplying a question mark (?) for password will result in the program prompting you for the password rather than having it as clear text in your scripts.When all of the information is checked in the command, the db2_doConnect function is called to actually do the connection to the database.
###Code
# Parse the CONNECT statement and execute if possible
def parseConnect(inSQL,local_ns):
global _settings, _connected
_connected = False
cParms = inSQL.split()
cnt = 0
_settings["ssl"] = ""
while cnt < len(cParms):
if cParms[cnt].upper() == 'TO':
if cnt+1 < len(cParms):
_settings["database"] = cParms[cnt+1].upper()
cnt = cnt + 1
else:
errormsg("No database specified in the CONNECT statement")
return
elif cParms[cnt].upper() == "SSL":
_settings["ssl"] = "Security=SSL;"
cnt = cnt + 1
elif cParms[cnt].upper() == 'CREDENTIALS':
if cnt+1 < len(cParms):
                credentials = cParms[cnt+1]
                try:
                    tempid = eval(credentials,local_ns)
                except:
                    tempid = None                # Variable is not defined - fall back to the saved credentials file
                if (tempid == None):
                    fname = credentials + ".pickle"
                    try:
                        with open(fname,'rb') as f:
                            _id = pickle.load(f)
                    except:
                        errormsg("Unable to find credential variable or file.")
                        return
                elif (isinstance(tempid,dict) == False):
                    errormsg("The CREDENTIALS variable (" + credentials + ") does not contain a valid Python dictionary (JSON object)")
                    return
                else:
                    _id = tempid
try:
_settings["database"] = _id["db"]
_settings["hostname"] = _id["hostname"]
_settings["port"] = _id["port"]
_settings["uid"] = _id["username"]
_settings["pwd"] = _id["password"]
try:
fname = credentials + ".pickle"
with open(fname,'wb') as f:
pickle.dump(_id,f)
except:
errormsg("Failed trying to write Db2 Credentials.")
return
except:
errormsg("Credentials file is missing information. db/hostname/port/username/password required.")
return
else:
errormsg("No Credentials name supplied")
return
cnt = cnt + 1
elif cParms[cnt].upper() == 'USER':
if cnt+1 < len(cParms):
_settings["uid"] = cParms[cnt+1].upper()
cnt = cnt + 1
else:
errormsg("No userid specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'USING':
if cnt+1 < len(cParms):
_settings["pwd"] = cParms[cnt+1]
if (_settings["pwd"] == '?'):
_settings["pwd"] = getpass.getpass("Password [password]: ") or "password"
cnt = cnt + 1
else:
errormsg("No password specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'HOST':
if cnt+1 < len(cParms):
hostport = cParms[cnt+1].upper()
ip, port = split_string(hostport)
                if (port == None):
                    _settings["port"] = "50000"
                else:
                    _settings["port"] = port
                _settings["hostname"] = ip
cnt = cnt + 1
else:
errormsg("No hostname specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'PORT':
if cnt+1 < len(cParms):
_settings["port"] = cParms[cnt+1].upper()
cnt = cnt + 1
else:
errormsg("No port specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'PROMPT':
if (connected_prompt() == False):
print("Connection canceled.")
return
else:
cnt = cnt + 1
elif cParms[cnt].upper() in ('CLOSE','RESET') :
try:
result = ibm_db.close(_hdbc)
_hdbi.close()
except:
pass
success("Connection closed.")
if cParms[cnt].upper() == 'RESET':
_settings["database"] = ''
return
else:
cnt = cnt + 1
_ = db2_doConnect()
###Output
_____no_output_____
###Markdown
Connect to Db2The db2_doConnect routine is called when a connection needs to be established to a Db2 database. The command does not require any parameters since it relies on the settings variable which contains all of the information it needs to connect to a Db2 database.```db2_doConnect()```There are 4 additional variables that are used throughout the routines to stay connected with the Db2 database. These variables are:- hdbc - The connection handle to the database- hstmt - A statement handle used for executing SQL statements- connected - A flag that tells the program whether or not we are currently connected to a database- runtime - Used to tell %sql the length of time (default 1 second) to run a statement when timing itThe only database driver that is used in this program is the IBM DB2 ODBC DRIVER. This driver needs to be loaded on the system that is connecting to Db2. The Jupyter notebook that is built by this system installs the driver for you so you shouldn't have to do anything other than build the container.If the connection is successful, the connected flag is set to True. Any subsequent %sql call will check to see if you are connected and initiate another prompted connection if you do not have a connection to a database.
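As a rough illustration (not executed as part of this notebook), the DSN keyword string that db2_doConnect() builds from the settings and then passes to ibm_db.connect() has the following shape. The database, host, port and credentials below are placeholders only:
```python
# Illustrative sketch only - placeholder values, not real credentials
dsn = (
    "DRIVER={IBM DB2 ODBC DRIVER};"
    "DATABASE=SAMPLE;"
    "HOSTNAME=localhost;"
    "PORT=50000;"
    "PROTOCOL=TCPIP;"
    "UID=db2inst1;"
    "PWD=password;"
)
# hdbc = ibm_db.connect(dsn, "", "")   # returns a connection handle, or raises on failure
```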
###Code
def db2_doConnect():
global _hdbc, _hdbi, _connected, _runtime
global _settings
if _connected == False:
if len(_settings["database"]) == 0:
return False
dsn = (
"DRIVER={{IBM DB2 ODBC DRIVER}};"
"DATABASE={0};"
"HOSTNAME={1};"
"PORT={2};"
"PROTOCOL=TCPIP;"
"UID={3};"
"PWD={4};{5}").format(_settings["database"],
_settings["hostname"],
_settings["port"],
_settings["uid"],
_settings["pwd"],
_settings["ssl"])
# Get a database handle (hdbc) and a statement handle (hstmt) for subsequent access to DB2
try:
_hdbc = ibm_db.connect(dsn, "", "")
except Exception as err:
db2_error(False,True) # errormsg(str(err))
_connected = False
_settings["database"] = ''
return False
try:
_hdbi = ibm_db_dbi.Connection(_hdbc)
except Exception as err:
db2_error(False,True) # errormsg(str(err))
_connected = False
_settings["database"] = ''
return False
_connected = True
# Save the values for future use
save_settings()
success("Connection successful.")
return True
###Output
_____no_output_____
###Markdown
Load/Save SettingsThere are two routines that load and save settings between Jupyter notebooks. These routines are called without any parameters.```load_settings() save_settings()```There is a global structure called settings which contains the following fields:```_settings = { "maxrows" : 10, "maxgrid" : 5, "runtime" : 1, "display" : "TEXT", "database" : "", "hostname" : "localhost", "port" : "50000", "protocol" : "TCPIP", "uid" : "DB2INST1", "pwd" : "password"}```The information in the settings structure is used for re-connecting to a database when you start up a Jupyter notebook. When the session is established for the first time, the load_settings() function is called to get the contents of the pickle file (db2connect.pickle, a Jupyter session file) that will be used for the first connection to the database. Whenever a new connection is made, the file is updated with the save_settings() function.
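The following minimal sketch shows the pickle round trip that these two routines rely on; the dictionary contents are just placeholder values:
```python
import pickle

sample = {"database": "SAMPLE", "hostname": "localhost", "port": "50000"}   # placeholder settings
with open("db2connect.pickle", "wb") as f:      # save_settings() writes the dictionary
    pickle.dump(sample, f)
with open("db2connect.pickle", "rb") as f:      # load_settings() reads it back on the next session
    restored = pickle.load(f)
```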
###Code
def load_settings():
# This routine will load the settings from the previous session if they exist
global _settings
fname = "db2connect.pickle"
try:
with open(fname,'rb') as f:
_settings = pickle.load(f)
# Reset runtime to 1 since it would be unexpected to keep the same value between connections
_settings["runtime"] = 1
_settings["maxgrid"] = 5
except:
pass
return
def save_settings():
# This routine will save the current settings if they exist
global _settings
fname = "db2connect.pickle"
try:
with open(fname,'wb') as f:
pickle.dump(_settings,f)
except:
errormsg("Failed trying to write Db2 Configuration Information.")
return
###Output
_____no_output_____
###Markdown
Error and Message FunctionsThere are three types of messages that are thrown by the %db2 magic command. The first routine will print out a success message with no special formatting:```success(message)```The second message is used for displaying an error message that is not associated with a SQL error. This type of error message is surrounded with a red box to highlight the problem. Note that the success message has code that has been commented out that could also show a successful return code with a green box. ```errormsg(message)```The final error message is based on an error occurring in the SQL code that was executed. This code will parse the message returned from the ibm_db interface and return only the error message portion (and not all of the wrapper code from the driver).```db2_error(quiet,connect=False)```The quiet flag is passed to the db2_error routine so that messages can be suppressed if the user wishes to ignore them with the -q flag. A good example of this is dropping a table that does not exist. We know that an error will be thrown so we can ignore it. The information that the db2_error routine gets is from the stmt_errormsg() function from within the ibm_db driver. The db2_error function should only be called after a SQL failure, otherwise there will be no diagnostic information returned from stmt_errormsg().If the connect flag is True, the routine will get the SQLSTATE and SQLCODE from the connection error message rather than a statement error message.
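As a hedged sketch of the parsing that db2_error() performs, the SQLSTATE and SQLCODE values are pulled out of the driver message text by searching for the keyword markers. The message below is only an example of the general shape of such a message:
```python
# Example message text (illustrative only)
errmsg = 'SQL0204N  "DB2INST1.FOO" is an undefined name.  SQLSTATE=42704 SQLCODE=-204'

start = errmsg.find("SQLSTATE=")
end = errmsg.find(" ", start)
if end == -1: end = len(errmsg)
sqlstate = errmsg[start+9:end]          # '42704'

start = errmsg.find("SQLCODE=")
end = errmsg.find(" ", start)
if end == -1: end = len(errmsg)
sqlcode = int(errmsg[start+8:end])      # -204
```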
###Code
def db2_error(quiet,connect=False):
global sqlerror, sqlcode, sqlstate, _environment
try:
if (connect == False):
errmsg = ibm_db.stmt_errormsg().replace('\r',' ')
errmsg = errmsg[errmsg.rfind("]")+1:].strip()
else:
errmsg = ibm_db.conn_errormsg().replace('\r',' ')
errmsg = errmsg[errmsg.rfind("]")+1:].strip()
sqlerror = errmsg
msg_start = errmsg.find("SQLSTATE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlstate = errmsg[msg_start+9:msg_end]
else:
sqlstate = "0"
msg_start = errmsg.find("SQLCODE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlcode = errmsg[msg_start+8:msg_end]
try:
sqlcode = int(sqlcode)
except:
pass
else:
sqlcode = 0
except:
errmsg = "Unknown error."
sqlcode = -99999
sqlstate = "-99999"
sqlerror = errmsg
return
msg_start = errmsg.find("SQLSTATE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlstate = errmsg[msg_start+9:msg_end]
else:
sqlstate = "0"
msg_start = errmsg.find("SQLCODE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlcode = errmsg[msg_start+8:msg_end]
try:
sqlcode = int(sqlcode)
except:
pass
else:
sqlcode = 0
if quiet == True: return
if (errmsg == ""): return
html = '<p><p style="border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html+errmsg+"</p>"))
else:
print(errmsg)
# Print out an error message
def errormsg(message):
global _environment
if (message != ""):
html = '<p><p style="border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html + message + "</p>"))
else:
print(message)
def success(message):
if (message != ""):
print(message)
return
def debug(message,error=False):
global _environment
if (_environment["jupyter"] == True):
spacer = "<br>" + " "
else:
spacer = "\n "
if (message != ""):
lines = message.split('\n')
msg = ""
indent = 0
for line in lines:
delta = line.count("(") - line.count(")")
if (msg == ""):
msg = line
indent = indent + delta
else:
if (delta < 0): indent = indent + delta
msg = msg + spacer * (indent*2) + line
if (delta > 0): indent = indent + delta
if (indent < 0): indent = 0
if (error == True):
html = '<p><pre style="font-family: monospace; border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
else:
html = '<p><pre style="font-family: monospace; border:2px; border-style:solid; border-color:#008000; background-color:#e6ffe6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html + msg + "</pre></p>"))
else:
print(msg)
return
###Output
_____no_output_____
###Markdown
Macro ProcessorA macro is used to generate SQL to be executed by overriding or creating a new keyword. For instance, the base `%sql` command does not understand the `LIST TABLES` command which is usually used in conjunction with the `CLP` processor. Rather than specifically code this in the base `db2.ipynb` file, we can create a macro that can execute this code for us.There are three routines that deal with macros. - checkMacro is used to find the macro calls in a string. All macros are sent to parseMacro for checking.- runMacro will evaluate the macro and return the string to the parse- subvars is used to track the variables used as part of a macro call.- setMacro is used to catalog a macro Set MacroThis code will catalog a macro call.
###Code
def setMacro(inSQL,parms):
global _macros
names = parms.split()
if (len(names) < 2):
errormsg("No command name supplied.")
return None
macroName = names[1].upper()
_macros[macroName] = inSQL
return
###Output
_____no_output_____
###Markdown
Check MacroThis code will check to see if there is a macro command in the SQL. It will take the SQL that is supplied and strip out three values: the first and second keywords, and the remainder of the parameters.For instance, consider the following statement:```CREATE DATABASE GEORGE options....```The name of the macro that we want to run is called `CREATE`. We know that there is a SQL command called `CREATE` but this code will call the macro first to see if needs to run any special code. For instance, `CREATE DATABASE` is not part of the `db2.ipynb` syntax, but we can add it in by using a macro.The check macro logic will strip out the subcommand (`DATABASE`) and place the remainder of the string after `DATABASE` in options.
###Code
def checkMacro(in_sql):
global _macros
if (len(in_sql) == 0): return(in_sql) # Nothing to do
tokens = parseArgs(in_sql,None) # Take the string and reduce into tokens
macro_name = tokens[0].upper() # Uppercase the name of the token
if (macro_name not in _macros):
return(in_sql) # No macro by this name so just return the string
result = runMacro(_macros[macro_name],in_sql,tokens) # Execute the macro using the tokens we found
return(result) # Runmacro will either return the original SQL or the new one
###Output
_____no_output_____
###Markdown
Split AssignmentThis routine will return the name of a variable and its value when the format is x=y. If y is enclosed in quotes, the quotes are removed.
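A few illustrative calls (not run here) show the expected behaviour:
```python
splitassign("name='George'")    # -> ("name", "George")      quotes stripped from the value
splitassign("count=10")         # -> ("count", "10")
splitassign("standalone")       # -> ("null", "standalone")  no "=" so the name defaults to null
```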
###Code
def splitassign(arg):
var_name = "null"
var_value = "null"
arg = arg.strip()
eq = arg.find("=")
if (eq != -1):
var_name = arg[:eq].strip()
temp_value = arg[eq+1:].strip()
if (temp_value != ""):
ch = temp_value[0]
if (ch in ["'",'"']):
if (temp_value[-1:] == ch):
var_value = temp_value[1:-1]
else:
var_value = temp_value
else:
var_value = temp_value
else:
var_value = arg
return var_name, var_value
###Output
_____no_output_____
###Markdown
Parse Args The commands that are used in the macros need to be parsed into their separate tokens. The tokens are separated by blanks, and strings that are enclosed in quotes are kept together.
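For example, a call such as the one below (illustrative only) keeps the quoted string together as a single token, with its quote characters preserved:
```python
parseArgs('CREATE DATABASE "MY DB" restrictive', None)
# -> ['CREATE', 'DATABASE', '"MY DB"', 'restrictive']
```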
###Code
def parseArgs(argin,_vars):
quoteChar = ""
inQuote = False
inArg = True
args = []
arg = ''
for ch in argin.lstrip():
if (inQuote == True):
if (ch == quoteChar):
inQuote = False
                arg = arg + ch          # Keep the closing quote character as part of the argument
else:
arg = arg + ch
elif (ch == "\"" or ch == "\'"): # Do we have a quote
quoteChar = ch
            arg = arg + ch              # Keep the opening quote character as part of the argument
inQuote = True
elif (ch == " "):
if (arg != ""):
arg = subvars(arg,_vars)
args.append(arg)
else:
args.append("null")
arg = ""
else:
arg = arg + ch
if (arg != ""):
arg = subvars(arg,_vars)
args.append(arg)
return(args)
###Output
_____no_output_____
###Markdown
Run MacroThis code will execute the body of the macro and return the results for that macro call.
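As a rough, hypothetical example of a macro body that this interpreter can process (the SQL text and the token numbers are placeholders, not part of the real macro catalog), the if/endif, echo and exit keywords are used as follows:
```python
# Hypothetical macro body (it would be catalogued with setMacro);
# {1}, {2}, ... are the tokens from the macro call and {argc} is the argument count
macro_body = """
if {argc} = 0
   exit No schema name supplied
endif
SELECT TABNAME FROM SYSCAT.TABLES WHERE TABSCHEMA = '{^1}'
"""
# With no arguments the macro exits with an error message;
# otherwise runMacro() returns the SELECT with {^1} replaced by the uppercased first argument.
```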
###Code
def runMacro(script,in_sql,tokens):
result = ""
runIT = True
code = script.split("\n")
level = 0
runlevel = [True,False,False,False,False,False,False,False,False,False]
ifcount = 0
_vars = {}
for i in range(0,len(tokens)):
vstr = str(i)
_vars[vstr] = tokens[i]
if (len(tokens) == 0):
_vars["argc"] = "0"
else:
_vars["argc"] = str(len(tokens)-1)
for line in code:
line = line.strip()
if (line == "" or line == "\n"): continue
if (line[0] == "#"): continue # A comment line starts with a # in the first position of the line
args = parseArgs(line,_vars) # Get all of the arguments
if (args[0] == "if"):
ifcount = ifcount + 1
if (runlevel[level] == False): # You can't execute this statement
continue
level = level + 1
if (len(args) < 4):
print("Macro: Incorrect number of arguments for the if clause.")
                return in_sql
arg1 = args[1]
arg2 = args[3]
if (len(arg2) > 2):
ch1 = arg2[0]
ch2 = arg2[-1:]
if (ch1 in ['"',"'"] and ch1 == ch2):
arg2 = arg2[1:-1].strip()
op = args[2]
if (op in ["=","=="]):
if (arg1 == arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<=","=<"]):
if (arg1 <= arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in [">=","=>"]):
if (arg1 >= arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<>","!="]):
if (arg1 != arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<"]):
if (arg1 < arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in [">"]):
if (arg1 > arg2):
runlevel[level] = True
else:
runlevel[level] = False
else:
print("Macro: Unknown comparison operator in the if statement:" + op)
continue
elif (args[0] in ["exit","echo"] and runlevel[level] == True):
msg = ""
for msgline in args[1:]:
if (msg == ""):
msg = subvars(msgline,_vars)
else:
msg = msg + " " + subvars(msgline,_vars)
if (msg != ""):
if (args[0] == "echo"):
debug(msg,error=False)
else:
debug(msg,error=True)
if (args[0] == "exit"): return ''
elif (args[0] == "pass" and runlevel[level] == True):
pass
elif (args[0] == "var" and runlevel[level] == True):
value = ""
for val in args[2:]:
if (value == ""):
value = subvars(val,_vars)
else:
value = value + " " + subvars(val,_vars)
            value = value.strip()
_vars[args[1]] = value
elif (args[0] == 'else'):
if (ifcount == level):
runlevel[level] = not runlevel[level]
elif (args[0] == 'return' and runlevel[level] == True):
return(result)
elif (args[0] == "endif"):
ifcount = ifcount - 1
if (ifcount < level):
level = level - 1
if (level < 0):
print("Macro: Unmatched if/endif pairs.")
return ''
else:
if (runlevel[level] == True):
if (result == ""):
result = subvars(line,_vars)
else:
result = result + "\n" + subvars(line,_vars)
return(result)
###Output
_____no_output_____
###Markdown
Substitute VarsThis routine is used by the runMacro program to track variables that are used within Macros. These are kept separate from the rest of the code.
###Code
def subvars(script,_vars):
if (_vars == None): return script
remainder = script
result = ""
done = False
while done == False:
bv = remainder.find("{")
if (bv == -1):
done = True
continue
ev = remainder.find("}")
if (ev == -1):
done = True
continue
result = result + remainder[:bv]
vvar = remainder[bv+1:ev]
remainder = remainder[ev+1:]
upper = False
allvars = False
if (vvar[0] == "^"):
upper = True
vvar = vvar[1:]
elif (vvar[0] == "*"):
vvar = vvar[1:]
allvars = True
else:
pass
if (vvar in _vars):
if (upper == True):
items = _vars[vvar].upper()
elif (allvars == True):
try:
iVar = int(vvar)
except:
return(script)
items = ""
sVar = str(iVar)
while sVar in _vars:
if (items == ""):
items = _vars[sVar]
else:
items = items + " " + _vars[sVar]
iVar = iVar + 1
sVar = str(iVar)
else:
items = _vars[vvar]
else:
if (allvars == True):
items = ""
else:
items = "null"
result = result + items
if (remainder != ""):
result = result + remainder
return(result)
###Output
_____no_output_____
###Markdown
SQL TimerThe calling format of this routine is:```count = sqlTimer(hdbc, runtime, inSQL)```This code runs the SQL string multiple times for one second (by default). The accuracy of the clock is not that great when you are running just one statement, so instead this routine will run the code multiple times for a second to give you an execution count. If you need to run the code for more than one second, the runtime value needs to be set to the number of seconds you want the code to run.The return result is always the number of times that the code executed. Note that the program will skip reading the data if it is a SELECT statement, so the count doesn't include fetch time for the answer set.
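A minimal usage sketch, assuming an open connection handle in _hdbc and the EMPLOYEE sample table:
```python
count = sqlTimer(_hdbc, 1, "SELECT COUNT(*) FROM EMPLOYEE")
if count > 0:
    print("Executed {0} times in 1 second".format(count))
```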
###Code
def sqlTimer(hdbc, runtime, inSQL):
count = 0
t_end = time.time() + runtime
while time.time() < t_end:
try:
stmt = ibm_db.exec_immediate(hdbc,inSQL)
if (stmt == False):
db2_error(flag(["-q","-quiet"]))
return(-1)
ibm_db.free_result(stmt)
except Exception as err:
db2_error(False)
return(-1)
count = count + 1
return(count)
###Output
_____no_output_____
###Markdown
Split ArgsThis routine takes as an argument a string and then splits the arguments according to the following logic:* If the string starts with a `(` character, it will check the last character in the string and see if it is a `)` and then remove those characters* Every parameter is separated by a comma `,` and commas within quotes are ignored* Each parameter returned will have three values - the value itself, an indicator which will be True if it was quoted or False if not, and True or False depending on whether it is numeric.Example:``` "abcdef",abcdef,456,"856"```Four values would be returned:```[abcdef,True,False],[abcdef,False,False],[456,False,True],[856,True,False]```Any quoted string will be False for numeric. The way that the parameters are handled is up to the calling program. However, in the case of Db2, the quoted strings must be in single quotes so any quoted parameter using the double quotes `"` must be wrapped with single quotes. There is always a possibility that a string contains single quotes (i.e. O'Connor) so any substituted text should use `''` so that Db2 can properly interpret the string. This routine does not adjust the strings with quotes, and depends on the variable substitution routine to do that.
###Code
def splitargs(arguments):
import types
    # Strip the string and remove the ( and ) characters if they are at the beginning and end of the string
results = []
step1 = arguments.strip()
if (len(step1) == 0): return(results) # Not much to do here - no args found
if (step1[0] == '('):
if (step1[-1:] == ')'):
step2 = step1[1:-1]
step2 = step2.strip()
else:
step2 = step1
else:
step2 = step1
# Now we have a string without brackets. Start scanning for commas
quoteCH = ""
pos = 0
arg = ""
args = []
while pos < len(step2):
ch = step2[pos]
if (quoteCH == ""): # Are we in a quote?
if (ch in ('"',"'")): # Check to see if we are starting a quote
quoteCH = ch
arg = arg + ch
pos += 1
elif (ch == ","): # Are we at the end of a parameter?
arg = arg.strip()
args.append(arg)
arg = ""
inarg = False
pos += 1
else: # Continue collecting the string
arg = arg + ch
pos += 1
else:
if (ch == quoteCH): # Are we at the end of a quote?
arg = arg + ch # Add the quote to the string
pos += 1 # Increment past the quote
quoteCH = "" # Stop quote checking (maybe!)
else:
pos += 1
arg = arg + ch
if (quoteCH != ""): # So we didn't end our string
arg = arg.strip()
args.append(arg)
elif (arg != ""): # Something left over as an argument
arg = arg.strip()
args.append(arg)
else:
pass
results = []
for arg in args:
result = []
if (len(arg) > 0):
if (arg[0] in ('"',"'")):
value = arg[1:-1]
isString = True
isNumber = False
else:
isString = False
isNumber = False
try:
value = eval(arg)
if (type(value) == int):
isNumber = True
elif (isinstance(value,float) == True):
isNumber = True
else:
value = arg
except:
value = arg
else:
value = ""
isString = False
isNumber = False
result = [value,isString,isNumber]
results.append(result)
return results
###Output
_____no_output_____
###Markdown
SQL ParserThe calling format of this routine is:```sql_cmd, encoded_sql = sqlParser(sql_input, local_ns)```This code will look at the SQL string that has been passed to it and parse it into two values:- sql_cmd: First command in the list (so this may not be the actual SQL command)- encoded_sql: the SQL with any :var references replaced with the contents of the corresponding Python variables
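A hedged example of the variable substitution, assuming a Python variable named empno exists in the calling scope:
```python
empno = '000010'
sql_cmd, encoded_sql = sqlParser("SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO = :empno", locals())
# sql_cmd     -> 'SELECT'
# encoded_sql -> "SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO = '000010'"
```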
###Code
def sqlParser(sqlin,local_ns):
sql_cmd = ""
encoded_sql = sqlin
firstCommand = "(?:^\s*)([a-zA-Z]+)(?:\s+.*|$)"
findFirst = re.match(firstCommand,sqlin)
if (findFirst == None): # We did not find a match so we just return the empty string
return sql_cmd, encoded_sql
cmd = findFirst.group(1)
sql_cmd = cmd.upper()
#
# Scan the input string looking for variables in the format :var. If no : is found just return.
# Var must be alpha+number+_ to be valid
#
if (':' not in sqlin): # A quick check to see if parameters are in here, but not fool-proof!
return sql_cmd, encoded_sql
inVar = False
inQuote = ""
varName = ""
encoded_sql = ""
STRING = 0
NUMBER = 1
LIST = 2
RAW = 3
for ch in sqlin:
if (inVar == True): # We are collecting the name of a variable
if (ch.upper() in "@_ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789[]"):
varName = varName + ch
continue
else:
if (varName == ""):
                    encoded_sql = encoded_sql + ":"
elif (varName[0] in ('[',']')):
encoded_sql = encoded_sql + ":" + varName
else:
if (ch == '.'): # If the variable name is stopped by a period, assume no quotes are used
flag_quotes = False
else:
flag_quotes = True
varValue, varType = getContents(varName,flag_quotes,local_ns)
if (varValue == None):
encoded_sql = encoded_sql + ":" + varName
else:
if (varType == STRING):
encoded_sql = encoded_sql + varValue
elif (varType == NUMBER):
encoded_sql = encoded_sql + str(varValue)
elif (varType == RAW):
encoded_sql = encoded_sql + varValue
elif (varType == LIST):
start = True
for v in varValue:
if (start == False):
encoded_sql = encoded_sql + ","
if (isinstance(v,int) == True): # Integer value
encoded_sql = encoded_sql + str(v)
elif (isinstance(v,float) == True):
encoded_sql = encoded_sql + str(v)
else:
flag_quotes = True
try:
if (v.find('0x') == 0): # Just guessing this is a hex value at beginning
encoded_sql = encoded_sql + v
else:
encoded_sql = encoded_sql + addquotes(v,flag_quotes) # String
except:
encoded_sql = encoded_sql + addquotes(str(v),flag_quotes)
start = False
encoded_sql = encoded_sql + ch
varName = ""
inVar = False
elif (inQuote != ""):
encoded_sql = encoded_sql + ch
if (ch == inQuote): inQuote = ""
elif (ch in ("'",'"')):
encoded_sql = encoded_sql + ch
inQuote = ch
elif (ch == ":"): # This might be a variable
varName = ""
inVar = True
else:
encoded_sql = encoded_sql + ch
if (inVar == True):
varValue, varType = getContents(varName,True,local_ns) # We assume the end of a line is quoted
if (varValue == None):
encoded_sql = encoded_sql + ":" + varName
else:
if (varType == STRING):
encoded_sql = encoded_sql + varValue
elif (varType == NUMBER):
encoded_sql = encoded_sql + str(varValue)
elif (varType == LIST):
flag_quotes = True
start = True
for v in varValue:
if (start == False):
encoded_sql = encoded_sql + ","
if (isinstance(v,int) == True): # Integer value
encoded_sql = encoded_sql + str(v)
elif (isinstance(v,float) == True):
encoded_sql = encoded_sql + str(v)
else:
try:
if (v.find('0x') == 0): # Just guessing this is a hex value
encoded_sql = encoded_sql + v
else:
encoded_sql = encoded_sql + addquotes(v,flag_quotes) # String
except:
encoded_sql = encoded_sql + addquotes(str(v),flag_quotes)
start = False
return sql_cmd, encoded_sql
###Output
_____no_output_____
###Markdown
Variable Contents FunctionThe calling format of this routine is:```value, var_type = getContents(varName,flag_quotes,local_ns)```This code will take the name of a variable as input and return the contents of that variable. If the variable is not found then the program will return None which is the equivalent of empty or null. Note that this function looks at the global variable pool for Python so it is possible that the wrong version of variable is returned if it is used in different functions. For this reason, any variables used in SQL statements should use a unique naming convention if possible.The other thing that this function does is replace single quotes with two quotes. The reason for doing this is that Db2 will convert two single quotes into one quote when dealing with strings. This avoids problems when dealing with text that contains multiple quotes within the string. Note that this substitution is done only for single quote characters since the double quote character is used by Db2 for naming columns that are case sensitive or contain special characters.If the flag_quotes value is True, the field will have quotes around it. The local_ns argument contains the variables that are currently registered in Python.
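An illustrative call showing the quote doubling for a string that contains a single quote (the variable name is just an example):
```python
lastname = "O'Connor"
value, value_type = getContents("lastname", True, locals())
# value -> "'O''Connor'"   (wrapped in single quotes, with the embedded quote doubled)
```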
###Code
def getContents(varName,flag_quotes,local_ns):
#
# Get the contents of the variable name that is passed to the routine. Only simple
# variables are checked, i.e. arrays and lists are not parsed
#
STRING = 0
NUMBER = 1
LIST = 2
RAW = 3
DICT = 4
try:
value = eval(varName,None,local_ns) # globals()[varName] # eval(varName)
except:
return(None,STRING)
if (isinstance(value,dict) == True): # Check to see if this is JSON dictionary
return(addquotes(value,flag_quotes),STRING)
elif(isinstance(value,list) == True): # List - tricky
return(value,LIST)
elif (isinstance(value,int) == True): # Integer value
return(value,NUMBER)
elif (isinstance(value,float) == True): # Float value
return(value,NUMBER)
else:
try:
# The pattern needs to be in the first position (0 in Python terms)
if (value.find('0x') == 0): # Just guessing this is a hex value
return(value,RAW)
else:
return(addquotes(value,flag_quotes),STRING) # String
except:
return(addquotes(str(value),flag_quotes),RAW)
###Output
_____no_output_____
###Markdown
Add QuotesQuotes are a challenge when dealing with dictionaries and Db2. Db2 wants strings delimited with single quotes, while dictionaries use double quotes. That wouldn't be a problem except that embedded single quotes within these dictionaries will cause things to fail. This routine doubles up the single quotes within the dictionary.
###Code
def addquotes(inString,flag_quotes):
if (isinstance(inString,dict) == True): # Check to see if this is JSON dictionary
serialized = json.dumps(inString)
else:
serialized = inString
# Replace single quotes with '' (two quotes) and wrap everything in single quotes
if (flag_quotes == False):
return(serialized)
else:
return("'"+serialized.replace("'","''")+"'") # Convert single quotes to two single quotes
###Output
_____no_output_____
###Markdown
Create the SAMPLE Database TablesThe calling format of this routine is:```db2_create_sample(quiet)```There are a lot of examples that depend on the data within the SAMPLE database. If you are running these examples and the connection is not to the SAMPLE database, then this code will create the two (EMPLOYEE, DEPARTMENT) tables that are used by most examples. If the function finds that these tables already exist, then nothing is done. If the tables are missing then they will be created with the same data as in the SAMPLE database.The quiet flag tells the program not to print any messages when the creation of the tables is complete.
###Code
def db2_create_sample(quiet):
create_department = """
BEGIN
DECLARE FOUND INTEGER;
SET FOUND = (SELECT COUNT(*) FROM SYSIBM.SYSTABLES WHERE NAME='DEPARTMENT' AND CREATOR=CURRENT USER);
IF FOUND = 0 THEN
EXECUTE IMMEDIATE('CREATE TABLE DEPARTMENT(DEPTNO CHAR(3) NOT NULL, DEPTNAME VARCHAR(36) NOT NULL,
MGRNO CHAR(6),ADMRDEPT CHAR(3) NOT NULL)');
EXECUTE IMMEDIATE('INSERT INTO DEPARTMENT VALUES
(''A00'',''SPIFFY COMPUTER SERVICE DIV.'',''000010'',''A00''),
(''B01'',''PLANNING'',''000020'',''A00''),
(''C01'',''INFORMATION CENTER'',''000030'',''A00''),
(''D01'',''DEVELOPMENT CENTER'',NULL,''A00''),
(''D11'',''MANUFACTURING SYSTEMS'',''000060'',''D01''),
(''D21'',''ADMINISTRATION SYSTEMS'',''000070'',''D01''),
(''E01'',''SUPPORT SERVICES'',''000050'',''A00''),
(''E11'',''OPERATIONS'',''000090'',''E01''),
(''E21'',''SOFTWARE SUPPORT'',''000100'',''E01''),
(''F22'',''BRANCH OFFICE F2'',NULL,''E01''),
(''G22'',''BRANCH OFFICE G2'',NULL,''E01''),
(''H22'',''BRANCH OFFICE H2'',NULL,''E01''),
(''I22'',''BRANCH OFFICE I2'',NULL,''E01''),
(''J22'',''BRANCH OFFICE J2'',NULL,''E01'')');
END IF;
END"""
%sql -d -q {create_department}
create_employee = """
BEGIN
DECLARE FOUND INTEGER;
SET FOUND = (SELECT COUNT(*) FROM SYSIBM.SYSTABLES WHERE NAME='EMPLOYEE' AND CREATOR=CURRENT USER);
IF FOUND = 0 THEN
EXECUTE IMMEDIATE('CREATE TABLE EMPLOYEE(
EMPNO CHAR(6) NOT NULL,
FIRSTNME VARCHAR(12) NOT NULL,
MIDINIT CHAR(1),
LASTNAME VARCHAR(15) NOT NULL,
WORKDEPT CHAR(3),
PHONENO CHAR(4),
HIREDATE DATE,
JOB CHAR(8),
EDLEVEL SMALLINT NOT NULL,
SEX CHAR(1),
BIRTHDATE DATE,
SALARY DECIMAL(9,2),
BONUS DECIMAL(9,2),
COMM DECIMAL(9,2)
)');
EXECUTE IMMEDIATE('INSERT INTO EMPLOYEE VALUES
(''000010'',''CHRISTINE'',''I'',''HAAS'' ,''A00'',''3978'',''1995-01-01'',''PRES '',18,''F'',''1963-08-24'',152750.00,1000.00,4220.00),
(''000020'',''MICHAEL'' ,''L'',''THOMPSON'' ,''B01'',''3476'',''2003-10-10'',''MANAGER '',18,''M'',''1978-02-02'',94250.00,800.00,3300.00),
(''000030'',''SALLY'' ,''A'',''KWAN'' ,''C01'',''4738'',''2005-04-05'',''MANAGER '',20,''F'',''1971-05-11'',98250.00,800.00,3060.00),
(''000050'',''JOHN'' ,''B'',''GEYER'' ,''E01'',''6789'',''1979-08-17'',''MANAGER '',16,''M'',''1955-09-15'',80175.00,800.00,3214.00),
(''000060'',''IRVING'' ,''F'',''STERN'' ,''D11'',''6423'',''2003-09-14'',''MANAGER '',16,''M'',''1975-07-07'',72250.00,500.00,2580.00),
(''000070'',''EVA'' ,''D'',''PULASKI'' ,''D21'',''7831'',''2005-09-30'',''MANAGER '',16,''F'',''2003-05-26'',96170.00,700.00,2893.00),
(''000090'',''EILEEN'' ,''W'',''HENDERSON'' ,''E11'',''5498'',''2000-08-15'',''MANAGER '',16,''F'',''1971-05-15'',89750.00,600.00,2380.00),
(''000100'',''THEODORE'' ,''Q'',''SPENSER'' ,''E21'',''0972'',''2000-06-19'',''MANAGER '',14,''M'',''1980-12-18'',86150.00,500.00,2092.00),
(''000110'',''VINCENZO'' ,''G'',''LUCCHESSI'' ,''A00'',''3490'',''1988-05-16'',''SALESREP'',19,''M'',''1959-11-05'',66500.00,900.00,3720.00),
(''000120'',''SEAN'' ,'' '',''O`CONNELL'' ,''A00'',''2167'',''1993-12-05'',''CLERK '',14,''M'',''1972-10-18'',49250.00,600.00,2340.00),
(''000130'',''DELORES'' ,''M'',''QUINTANA'' ,''C01'',''4578'',''2001-07-28'',''ANALYST '',16,''F'',''1955-09-15'',73800.00,500.00,1904.00),
(''000140'',''HEATHER'' ,''A'',''NICHOLLS'' ,''C01'',''1793'',''2006-12-15'',''ANALYST '',18,''F'',''1976-01-19'',68420.00,600.00,2274.00),
(''000150'',''BRUCE'' ,'' '',''ADAMSON'' ,''D11'',''4510'',''2002-02-12'',''DESIGNER'',16,''M'',''1977-05-17'',55280.00,500.00,2022.00),
(''000160'',''ELIZABETH'',''R'',''PIANKA'' ,''D11'',''3782'',''2006-10-11'',''DESIGNER'',17,''F'',''1980-04-12'',62250.00,400.00,1780.00),
(''000170'',''MASATOSHI'',''J'',''YOSHIMURA'' ,''D11'',''2890'',''1999-09-15'',''DESIGNER'',16,''M'',''1981-01-05'',44680.00,500.00,1974.00),
(''000180'',''MARILYN'' ,''S'',''SCOUTTEN'' ,''D11'',''1682'',''2003-07-07'',''DESIGNER'',17,''F'',''1979-02-21'',51340.00,500.00,1707.00),
(''000190'',''JAMES'' ,''H'',''WALKER'' ,''D11'',''2986'',''2004-07-26'',''DESIGNER'',16,''M'',''1982-06-25'',50450.00,400.00,1636.00),
(''000200'',''DAVID'' ,'' '',''BROWN'' ,''D11'',''4501'',''2002-03-03'',''DESIGNER'',16,''M'',''1971-05-29'',57740.00,600.00,2217.00),
(''000210'',''WILLIAM'' ,''T'',''JONES'' ,''D11'',''0942'',''1998-04-11'',''DESIGNER'',17,''M'',''2003-02-23'',68270.00,400.00,1462.00),
(''000220'',''JENNIFER'' ,''K'',''LUTZ'' ,''D11'',''0672'',''1998-08-29'',''DESIGNER'',18,''F'',''1978-03-19'',49840.00,600.00,2387.00),
(''000230'',''JAMES'' ,''J'',''JEFFERSON'' ,''D21'',''2094'',''1996-11-21'',''CLERK '',14,''M'',''1980-05-30'',42180.00,400.00,1774.00),
(''000240'',''SALVATORE'',''M'',''MARINO'' ,''D21'',''3780'',''2004-12-05'',''CLERK '',17,''M'',''2002-03-31'',48760.00,600.00,2301.00),
(''000250'',''DANIEL'' ,''S'',''SMITH'' ,''D21'',''0961'',''1999-10-30'',''CLERK '',15,''M'',''1969-11-12'',49180.00,400.00,1534.00),
(''000260'',''SYBIL'' ,''P'',''JOHNSON'' ,''D21'',''8953'',''2005-09-11'',''CLERK '',16,''F'',''1976-10-05'',47250.00,300.00,1380.00),
(''000270'',''MARIA'' ,''L'',''PEREZ'' ,''D21'',''9001'',''2006-09-30'',''CLERK '',15,''F'',''2003-05-26'',37380.00,500.00,2190.00),
(''000280'',''ETHEL'' ,''R'',''SCHNEIDER'' ,''E11'',''8997'',''1997-03-24'',''OPERATOR'',17,''F'',''1976-03-28'',36250.00,500.00,2100.00),
(''000290'',''JOHN'' ,''R'',''PARKER'' ,''E11'',''4502'',''2006-05-30'',''OPERATOR'',12,''M'',''1985-07-09'',35340.00,300.00,1227.00),
(''000300'',''PHILIP'' ,''X'',''SMITH'' ,''E11'',''2095'',''2002-06-19'',''OPERATOR'',14,''M'',''1976-10-27'',37750.00,400.00,1420.00),
(''000310'',''MAUDE'' ,''F'',''SETRIGHT'' ,''E11'',''3332'',''1994-09-12'',''OPERATOR'',12,''F'',''1961-04-21'',35900.00,300.00,1272.00),
(''000320'',''RAMLAL'' ,''V'',''MEHTA'' ,''E21'',''9990'',''1995-07-07'',''FIELDREP'',16,''M'',''1962-08-11'',39950.00,400.00,1596.00),
(''000330'',''WING'' ,'' '',''LEE'' ,''E21'',''2103'',''2006-02-23'',''FIELDREP'',14,''M'',''1971-07-18'',45370.00,500.00,2030.00),
(''000340'',''JASON'' ,''R'',''GOUNOT'' ,''E21'',''5698'',''1977-05-05'',''FIELDREP'',16,''M'',''1956-05-17'',43840.00,500.00,1907.00),
(''200010'',''DIAN'' ,''J'',''HEMMINGER'' ,''A00'',''3978'',''1995-01-01'',''SALESREP'',18,''F'',''1973-08-14'',46500.00,1000.00,4220.00),
(''200120'',''GREG'' ,'' '',''ORLANDO'' ,''A00'',''2167'',''2002-05-05'',''CLERK '',14,''M'',''1972-10-18'',39250.00,600.00,2340.00),
(''200140'',''KIM'' ,''N'',''NATZ'' ,''C01'',''1793'',''2006-12-15'',''ANALYST '',18,''F'',''1976-01-19'',68420.00,600.00,2274.00),
(''200170'',''KIYOSHI'' ,'' '',''YAMAMOTO'' ,''D11'',''2890'',''2005-09-15'',''DESIGNER'',16,''M'',''1981-01-05'',64680.00,500.00,1974.00),
(''200220'',''REBA'' ,''K'',''JOHN'' ,''D11'',''0672'',''2005-08-29'',''DESIGNER'',18,''F'',''1978-03-19'',69840.00,600.00,2387.00),
(''200240'',''ROBERT'' ,''M'',''MONTEVERDE'',''D21'',''3780'',''2004-12-05'',''CLERK '',17,''M'',''1984-03-31'',37760.00,600.00,2301.00),
(''200280'',''EILEEN'' ,''R'',''SCHWARTZ'' ,''E11'',''8997'',''1997-03-24'',''OPERATOR'',17,''F'',''1966-03-28'',46250.00,500.00,2100.00),
(''200310'',''MICHELLE'' ,''F'',''SPRINGER'' ,''E11'',''3332'',''1994-09-12'',''OPERATOR'',12,''F'',''1961-04-21'',35900.00,300.00,1272.00),
(''200330'',''HELENA'' ,'' '',''WONG'' ,''E21'',''2103'',''2006-02-23'',''FIELDREP'',14,''F'',''1971-07-18'',35370.00,500.00,2030.00),
(''200340'',''ROY'' ,''R'',''ALONZO'' ,''E21'',''5698'',''1997-07-05'',''FIELDREP'',16,''M'',''1956-05-17'',31840.00,500.00,1907.00)');
END IF;
END"""
%sql -d -q {create_employee}
if (quiet == False): success("Sample tables [EMPLOYEE, DEPARTMENT] created.")
###Output
_____no_output_____
###Markdown
Check optionThis function will return the original string with the option removed, and a flag that is set to the true value if the option was found or the false value if it was not.```args, flag = checkOption(option_string, option, false_value, true_value)```Options are specified with a -x where x is the character that we are searching for. It may actually be more than one character long like -pb/-pi/etc... The false and true values are optional. By default these are the boolean values of T/F but for some options it could be a character string like ';' versus '@' for delimiters.
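A short usage sketch:
```python
args, quiet = checkOption("-q DROP TABLE TEST_TABLE", "-q")
# args  -> "DROP TABLE TEST_TABLE"
# quiet -> True
```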
###Code
def checkOption(args_in, option, vFalse=False, vTrue=True):
args_out = args_in.strip()
found = vFalse
if (args_out != ""):
if (args_out.find(option) >= 0):
args_out = args_out.replace(option," ")
args_out = args_out.strip()
found = vTrue
return args_out, found
###Output
_____no_output_____
###Markdown
Plot DataThis function will plot the data that is returned from the answer set. The type of chart is determined by the plot flag supplied on the %sql command: -pb/-bar (bar chart), -pp/-pie (pie chart), or -pl/-line (line chart).```plotData(hdbi, sql)```The hdbi is the ibm_db_dbi connection handle that pandas uses to run the SQL and build the dataframe that gets plotted.
###Code
def plotData(hdbi, sql):
try:
df = pandas.read_sql(sql,hdbi)
except Exception as err:
db2_error(False)
return
if df.empty:
errormsg("No results returned")
return
col_count = len(df.columns)
if flag(["-pb","-bar"]): # Plot 1 = bar chart
if (col_count in (1,2,3)):
if (col_count == 1):
df.index = df.index + 1
_ = df.plot(kind='bar');
_ = plt.plot();
elif (col_count == 2):
xlabel = df.columns.values[0]
ylabel = df.columns.values[1]
df.plot(kind='bar',x=xlabel,y=ylabel);
_ = plt.plot();
else:
values = df.columns.values[2]
columns = df.columns.values[0]
index = df.columns.values[1]
pivoted = pandas.pivot_table(df, values=values, columns=columns, index=index)
_ = pivoted.plot.bar();
else:
errormsg("Can't determine what columns to plot")
return
elif flag(["-pp","-pie"]): # Plot 2 = pie chart
if (col_count in (1,2)):
if (col_count == 1):
df.index = df.index + 1
yname = df.columns.values[0]
_ = df.plot(kind='pie',y=yname);
else:
xlabel = df.columns.values[0]
xname = df[xlabel].tolist()
yname = df.columns.values[1]
_ = df.plot(kind='pie',y=yname,labels=xname);
plt.show();
else:
errormsg("Can't determine what columns to plot")
return
elif flag(["-pl","-line"]): # Plot 3 = line chart
if (col_count in (1,2,3)):
if (col_count == 1):
df.index = df.index + 1
_ = df.plot(kind='line');
elif (col_count == 2):
xlabel = df.columns.values[0]
ylabel = df.columns.values[1]
_ = df.plot(kind='line',x=xlabel,y=ylabel) ;
else:
values = df.columns.values[2]
columns = df.columns.values[0]
index = df.columns.values[1]
pivoted = pandas.pivot_table(df, values=values, columns=columns, index=index)
_ = pivoted.plot();
plt.show();
else:
errormsg("Can't determine what columns to plot")
return
else:
return
###Output
_____no_output_____
###Markdown
Find a ProcedureThis routine will check to see if a procedure exists with the SCHEMA/NAME (or just NAME if no schema is supplied) and returns the number of answer sets returned. Possible values are 0, 1 (or greater) or None. If None is returned then we can't find the procedure anywhere.
###Code
def findProc(procname):
global _hdbc, _hdbi, _connected, _runtime
# Split the procedure name into schema.procname if appropriate
upper_procname = procname.upper()
schema, proc = split_string(upper_procname,".") # Expect schema.procname
if (proc == None):
proc = schema
# Call ibm_db.procedures to see if the procedure does exist
schema = "%"
try:
stmt = ibm_db.procedures(_hdbc, None, schema, proc)
if (stmt == False): # Error executing the code
errormsg("Procedure " + procname + " not found in the system catalog.")
return None
result = ibm_db.fetch_tuple(stmt)
resultsets = result[5]
if (resultsets >= 1): resultsets = 1
return resultsets
except Exception as err:
errormsg("Procedure " + procname + " not found in the system catalog.")
return None
###Output
_____no_output_____
###Markdown
Parse Call ArgumentsThis code will parse a SQL call name(parm1,...) and return the name and the parameters in the call.
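An illustrative call (the procedure name here is hypothetical):
```python
name, parms = parseCallArgs("SHOWEMP('000010',:dept)")
# name  -> 'SHOWEMP'
# parms -> ['000010', ':dept']
```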
###Code
def parseCallArgs(macro):
quoteChar = ""
inQuote = False
inParm = False
ignore = False
name = ""
parms = []
parm = ''
sqlin = macro.replace("\n","")
    sqlin = sqlin.lstrip()
for ch in sqlin:
if (inParm == False):
# We hit a blank in the name, so ignore everything after the procedure name until a ( is found
if (ch == " "):
                ignore = True
elif (ch == "("): # Now we have parameters to send to the stored procedure
inParm = True
else:
if (ignore == False): name = name + ch # The name of the procedure (and no blanks)
else:
if (inQuote == True):
if (ch == quoteChar):
inQuote = False
else:
parm = parm + ch
elif (ch in ("\"","\'","[")): # Do we have a quote
if (ch == "["):
quoteChar = "]"
else:
quoteChar = ch
inQuote = True
elif (ch == ")"):
if (parm != ""):
parms.append(parm)
parm = ""
break
elif (ch == ","):
if (parm != ""):
parms.append(parm)
else:
parms.append("null")
parm = ""
else:
parm = parm + ch
if (inParm == True):
if (parm != ""):
            parms.append(parm)
return(name,parms)
###Output
_____no_output_____
###Markdown
Get ColumnsGiven a statement handle, determine the column names and their data types.
###Code
def getColumns(stmt):
columns = []
types = []
colcount = 0
try:
colname = ibm_db.field_name(stmt,colcount)
coltype = ibm_db.field_type(stmt,colcount)
while (colname != False):
columns.append(colname)
types.append(coltype)
colcount += 1
colname = ibm_db.field_name(stmt,colcount)
coltype = ibm_db.field_type(stmt,colcount)
return columns,types
except Exception as err:
db2_error(False)
return None
###Output
_____no_output_____
###Markdown
Call a ProcedureThe CALL statement is used for execution of a stored procedure. The format of the CALL statement is:```CALL PROC_NAME(x,y,z,...)```Procedures allow for the return of answer sets (cursors) as well as changing the contents of the parameters being passed to the procedure. In this implementation, the CALL function is limited to returning one answer set (or nothing). If you want to use more complex stored procedures then you will have to use the native python libraries.
###Code
def parseCall(hdbc, inSQL, local_ns):
global _hdbc, _hdbi, _connected, _runtime, _environment
# Check to see if we are connected first
if (_connected == False): # Check if you are connected
db2_doConnect()
if _connected == False: return None
remainder = inSQL.strip()
procName, procArgs = parseCallArgs(remainder[5:]) # Assume that CALL ... is the format
resultsets = findProc(procName)
if (resultsets == None): return None
argvalues = []
if (len(procArgs) > 0): # We have arguments to consider
for arg in procArgs:
varname = arg
if (len(varname) > 0):
if (varname[0] == ":"):
checkvar = varname[1:]
                    varvalue, vartype = getContents(checkvar,False,local_ns)   # Unpack the (value, type) tuple; no SQL quoting needed for callproc parameters
if (varvalue == None):
errormsg("Variable " + checkvar + " is not defined.")
return None
argvalues.append(varvalue)
else:
if (varname.upper() == "NULL"):
argvalues.append(None)
else:
argvalues.append(varname)
else:
argvalues.append(None)
try:
if (len(procArgs) > 0):
argtuple = tuple(argvalues)
result = ibm_db.callproc(_hdbc,procName,argtuple)
stmt = result[0]
else:
result = ibm_db.callproc(_hdbc,procName)
stmt = result
if (resultsets != 0 and stmt != None):
columns, types = getColumns(stmt)
if (columns == None): return None
rows = []
try:
rowlist = ibm_db.fetch_tuple(stmt)
except:
rowlist = None
while ( rowlist ) :
row = []
colcount = 0
for col in rowlist:
try:
if (types[colcount] in ["int","bigint"]):
row.append(int(col))
elif (types[colcount] in ["decimal","real"]):
row.append(float(col))
elif (types[colcount] in ["date","time","timestamp"]):
row.append(str(col))
else:
row.append(col)
except:
row.append(col)
colcount += 1
rows.append(row)
try:
rowlist = ibm_db.fetch_tuple(stmt)
except:
rowlist = None
if flag(["-r","-array"]):
rows.insert(0,columns)
if len(procArgs) > 0:
allresults = []
allresults.append(rows)
for x in result[1:]:
allresults.append(x)
return allresults # rows,returned_results
else:
return rows
else:
df = pandas.DataFrame.from_records(rows,columns=columns)
if flag("-grid") or _settings['display'] == 'GRID':
if (_environment['qgrid'] == False):
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
pdisplay(df)
else:
try:
pdisplay(qgrid.show_grid(df))
except:
errormsg("Grid cannot be used to display data with duplicate column names. Use option -a or %sql OPTION DISPLAY PANDAS instead.")
return
else:
if flag(["-a","-all"]) or _settings["maxrows"] == -1 : # All of the rows
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
pdisplay(df)
else:
return df
else:
if len(procArgs) > 0:
allresults = []
for x in result[1:]:
allresults.append(x)
return allresults # rows,returned_results
else:
return None
except Exception as err:
db2_error(False)
return None
###Output
_____no_output_____
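###Markdown
A usage sketch of the CALL support (the stored procedure SHOWEMP and its parameter are hypothetical): a Python variable can be passed with the :var syntax, and the single answer set comes back as a dataframe, or as an array when the -r flag is used.
```
empno = '000010'
# Hypothetical procedure that accepts one argument and returns one answer set
result = %sql -r CALL SHOWEMP(:empno)
```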
###Markdown
Parse Prepare/ExecuteThe PREPARE statement is used for repeated execution of a SQL statement. The PREPARE statement has the format:```stmt = PREPARE SELECT EMPNO FROM EMPLOYEE WHERE WORKDEPT=? AND SALARY<?```The SQL statement that you want executed is placed after the PREPARE statement with the location of variables marked with ? (parameter) markers. The variable stmt contains the prepared statement that needs to be passed to the EXECUTE statement. The EXECUTE statement has the format:```EXECUTE :x USING z, y, s ```The first variable (:x) is the name of the variable to which you assigned the results of the prepare statement. The values after the USING clause are substituted into the prepare statement where the ? markers are found. If the values in the USING clause are variable names (z, y, s), a **link** is created to these variables as part of the execute statement. If you use the variable substitution form of variable name (:z, :y, :s), the **contents** of the variable are placed into the USING clause. Normally this would not make much of a difference except when you are dealing with binary strings or JSON strings where the quote characters may cause some problems when substituted into the statement.
###Code
def parsePExec(hdbc, inSQL):
import ibm_db
global _stmt, _stmtID, _stmtSQL, sqlcode
cParms = inSQL.split()
parmCount = len(cParms)
if (parmCount == 0): return(None) # Nothing to do but this shouldn't happen
keyword = cParms[0].upper() # Upper case the keyword
if (keyword == "PREPARE"): # Prepare the following SQL
uSQL = inSQL.upper()
found = uSQL.find("PREPARE")
sql = inSQL[found+7:].strip()
try:
pattern = "\?\*[0-9]+"
findparm = re.search(pattern,sql)
while findparm != None:
found = findparm.group(0)
count = int(found[2:])
markers = ('?,' * count)[:-1]
sql = sql.replace(found,markers)
findparm = re.search(pattern,sql)
stmt = ibm_db.prepare(hdbc,sql) # Check error code here
if (stmt == False):
db2_error(False)
return(False)
stmttext = str(stmt).strip()
stmtID = stmttext[33:48].strip()
if (stmtID in _stmtID) == False:
_stmt.append(stmt) # Prepare and return STMT to caller
_stmtID.append(stmtID)
else:
stmtIX = _stmtID.index(stmtID)
_stmt[stmtIX] = stmt
return(stmtID)
except Exception as err:
print(err)
db2_error(False)
return(False)
if (keyword == "EXECUTE"): # Execute the prepare statement
if (parmCount < 2): return(False) # No stmtID available
stmtID = cParms[1].strip()
if (stmtID in _stmtID) == False:
errormsg("Prepared statement not found or invalid.")
return(False)
stmtIX = _stmtID.index(stmtID)
stmt = _stmt[stmtIX]
try:
if (parmCount == 2): # Only the statement handle available
result = ibm_db.execute(stmt) # Run it
elif (parmCount == 3): # Not quite enough arguments
errormsg("Missing or invalid USING clause on EXECUTE statement.")
sqlcode = -99999
return(False)
else:
using = cParms[2].upper()
if (using != "USING"): # Bad syntax again
errormsg("Missing USING clause on EXECUTE statement.")
sqlcode = -99999
return(False)
uSQL = inSQL.upper()
found = uSQL.find("USING")
parmString = inSQL[found+5:].strip()
parmset = splitargs(parmString)
if (len(parmset) == 0):
errormsg("Missing parameters after the USING clause.")
sqlcode = -99999
return(False)
parms = []
parm_count = 0
CONSTANT = 0
VARIABLE = 1
const = [0]
const_cnt = 0
for v in parmset:
parm_count = parm_count + 1
if (v[1] == True or v[2] == True): # v[1] true if string, v[2] true if num
parm_type = CONSTANT
const_cnt = const_cnt + 1
if (v[2] == True):
if (isinstance(v[0],int) == True): # Integer value
sql_type = ibm_db.SQL_INTEGER
elif (isinstance(v[0],float) == True): # Float value
sql_type = ibm_db.SQL_DOUBLE
else:
sql_type = ibm_db.SQL_INTEGER
else:
sql_type = ibm_db.SQL_CHAR
const.append(v[0])
else:
parm_type = VARIABLE
# See if the variable has a type associated with it varname@type
varset = v[0].split("@")
parm_name = varset[0]
parm_datatype = "char"
# Does the variable exist?
if (parm_name not in globals()):
errormsg("SQL Execute parameter " + parm_name + " not found")
sqlcode = -99999
return(False)
if (len(varset) > 1): # Type provided
parm_datatype = varset[1]
if (parm_datatype == "dec" or parm_datatype == "decimal"):
sql_type = ibm_db.SQL_DOUBLE
elif (parm_datatype == "bin" or parm_datatype == "binary"):
sql_type = ibm_db.SQL_BINARY
elif (parm_datatype == "int" or parm_datatype == "integer"):
sql_type = ibm_db.SQL_INTEGER
else:
sql_type = ibm_db.SQL_CHAR
try:
if (parm_type == VARIABLE):
result = ibm_db.bind_param(stmt, parm_count, globals()[parm_name], ibm_db.SQL_PARAM_INPUT, sql_type)
else:
result = ibm_db.bind_param(stmt, parm_count, const[const_cnt], ibm_db.SQL_PARAM_INPUT, sql_type)
except:
result = False
if (result == False):
errormsg("SQL Bind on variable " + parm_name + " failed.")
sqlcode = -99999
return(False)
result = ibm_db.execute(stmt) # ,tuple(parms))
if (result == False):
errormsg("SQL Execute failed.")
return(False)
if (ibm_db.num_fields(stmt) == 0): return(True) # Command successfully completed
return(fetchResults(stmt))
except Exception as err:
db2_error(False)
return(False)
return(False)
return(False)
###Output
_____no_output_____
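###Markdown
One detail worth noting: before preparing the statement, the code expands a shorthand marker of the form ?*n into n comma-separated parameter markers. A standalone sketch of that expansion (the table name is hypothetical):
```
import re

sql = "INSERT INTO MYTABLE VALUES (?*3)"
pattern = r"\?\*[0-9]+"
findparm = re.search(pattern, sql)
while findparm != None:
    found = findparm.group(0)        # e.g. "?*3"
    count = int(found[2:])           # number of markers requested
    markers = ('?,' * count)[:-1]    # "?,?,?"
    sql = sql.replace(found, markers)
    findparm = re.search(pattern, sql)
print(sql)   # INSERT INTO MYTABLE VALUES (?,?,?)
```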
###Markdown
Fetch Result SetThis code will take the stmt handle and then produce a result set of rows as either an array (`-r`,`-array`) or as an array of json records (`-json`).
###Code
def fetchResults(stmt):
global sqlcode
rows = []
columns, types = getColumns(stmt)
# By default we assume that the data will be an array
is_array = True
# Check what type of data we want returned - array or json
if (flag(["-r","-array"]) == False):
# See if we want it in JSON format, if not it remains as an array
if (flag("-json") == True):
is_array = False
# Set column names to lowercase for JSON records
if (is_array == False):
columns = [col.lower() for col in columns] # Convert to lowercase for ease of access
# First row of an array has the column names in it
if (is_array == True):
rows.append(columns)
result = ibm_db.fetch_tuple(stmt)
rowcount = 0
while (result):
rowcount += 1
if (is_array == True):
row = []
else:
row = {}
colcount = 0
for col in result:
try:
if (types[colcount] in ["int","bigint"]):
if (is_array == True):
row.append(int(col))
else:
row[columns[colcount]] = int(col)
elif (types[colcount] in ["decimal","real"]):
if (is_array == True):
row.append(float(col))
else:
row[columns[colcount]] = float(col)
elif (types[colcount] in ["date","time","timestamp"]):
if (is_array == True):
row.append(str(col))
else:
row[columns[colcount]] = str(col)
else:
if (is_array == True):
row.append(col)
else:
row[columns[colcount]] = col
except:
if (is_array == True):
row.append(col)
else:
row[columns[colcount]] = col
colcount += 1
rows.append(row)
result = ibm_db.fetch_tuple(stmt)
if (rowcount == 0):
sqlcode = 100
else:
sqlcode = 0
return rows
###Output
_____no_output_____
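###Markdown
For reference, the shape of the returned data depends on the flags; the rows shown in the comments below are illustrative values only, assuming a two-column query against the sample EMPLOYEE table.
```
rows = fetchResults(stmt)   # stmt is assumed to come from a previously executed query
# With -r/-array the first row holds the column names:
#   [['EMPNO', 'LASTNAME'], ['000010', 'HAAS'], ['000020', 'THOMPSON']]
# With -json each row is a dictionary keyed by the lowercased column names:
#   [{'empno': '000010', 'lastname': 'HAAS'}, {'empno': '000020', 'lastname': 'THOMPSON'}]
```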
###Markdown
Parse CommitThere are three possible COMMIT verbs that can be used:- COMMIT [WORK] - Commit the work in progress - The WORK keyword is not checked for- ROLLBACK - Roll back the unit of work- AUTOCOMMIT ON/OFF - Are statements committed on or off?The statement is passed to this routine and then checked.
###Code
def parseCommit(sql):
global _hdbc, _hdbi, _connected, _runtime, _stmt, _stmtID, _stmtSQL
if (_connected == False): return # Nothing to do if we are not connected
cParms = sql.split()
if (len(cParms) == 0): return # Nothing to do but this shouldn't happen
keyword = cParms[0].upper() # Upper case the keyword
if (keyword == "COMMIT"): # Commit the work that was done
try:
result = ibm_db.commit (_hdbc) # Commit the connection
if (len(cParms) > 1):
keyword = cParms[1].upper()
if (keyword == "HOLD"):
return
del _stmt[:]
del _stmtID[:]
except Exception as err:
db2_error(False)
return
if (keyword == "ROLLBACK"): # Rollback the work that was done
try:
result = ibm_db.rollback(_hdbc) # Rollback the connection
del _stmt[:]
del _stmtID[:]
except Exception as err:
db2_error(False)
return
if (keyword == "AUTOCOMMIT"): # Is autocommit on or off
if (len(cParms) > 1):
op = cParms[1].upper() # Need ON or OFF value
else:
return
try:
if (op == "OFF"):
ibm_db.autocommit(_hdbc, False)
elif (op == "ON"):
ibm_db.autocommit (_hdbc, True)
return
except Exception as err:
db2_error(False)
return
return
###Output
_____no_output_____
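###Markdown
A short usage sketch of the commit controls (the table STUFF is hypothetical): turn autocommit off, make a change, then discard it with ROLLBACK.
```
%sql AUTOCOMMIT OFF
%sql INSERT INTO STUFF VALUES (1)
%sql ROLLBACK
%sql AUTOCOMMIT ON
```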
###Markdown
Set FlagsThis code will take the input SQL block and update the global flag list. The global flag list is just a list of options that are set at the beginning of a code block. The absence of a flag means it is false. If it exists it is true.
###Code
def setFlags(inSQL):
global _flags
_flags = [] # Delete all of the current flag settings
pos = 0
end = len(inSQL)-1
inFlag = False
ignore = False
outSQL = ""
flag = ""
while (pos <= end):
ch = inSQL[pos]
if (ignore == True):
outSQL = outSQL + ch
else:
if (inFlag == True):
if (ch != " "):
flag = flag + ch
else:
_flags.append(flag)
inFlag = False
else:
if (ch == "-"):
flag = "-"
inFlag = True
elif (ch == ' '):
outSQL = outSQL + ch
else:
outSQL = outSQL + ch
ignore = True
pos += 1
if (inFlag == True):
_flags.append(flag)
return outSQL
###Output
_____no_output_____
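###Markdown
For example (assuming the cell above has been run), the options are peeled off the front of the statement and the remaining SQL is returned; the collected flags can then be tested with the flag() helper defined below.
```
remaining = setFlags("-a -grid SELECT * FROM EMPLOYEE")
print(remaining)   # SELECT * FROM EMPLOYEE
print(_flags)      # ['-a', '-grid']
```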
###Markdown
Check to see if flag ExistsThis function determines whether or not a flag exists in the global flag array. Absence of a value means it is false. The parameter can be a single value, or an array of values.
###Code
def flag(inflag):
global _flags
if isinstance(inflag,list):
for x in inflag:
if (x in _flags):
return True
return False
else:
if (inflag in _flags):
return True
else:
return False
###Output
_____no_output_____
###Markdown
Generate a list of SQL lines based on a delimiterNote that this function will make sure that quotes are properly maintained so that delimiters inside of quoted strings do not cause errors.
###Code
def splitSQL(inputString, delimiter):
pos = 0
arg = ""
results = []
quoteCH = ""
inSQL = inputString.strip()
if (len(inSQL) == 0): return(results) # Not much to do here - no args found
while pos < len(inSQL):
ch = inSQL[pos]
pos += 1
if (ch in ('"',"'")): # Is this a quote characters?
arg = arg + ch # Keep appending the characters to the current arg
if (ch == quoteCH): # Is this quote character we are in
quoteCH = ""
elif (quoteCH == ""): # Create the quote
quoteCH = ch
else:
None
elif (quoteCH != ""): # Still in a quote
arg = arg + ch
elif (ch == delimiter): # Is there a delimiter?
results.append(arg)
arg = ""
else:
arg = arg + ch
if (arg != ""):
results.append(arg)
return(results)
###Output
_____no_output_____
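###Markdown
A small illustration (assuming the cell above has been run): the delimiter inside a quoted literal is preserved while the statements themselves are split apart. The table names are hypothetical.
```
statements = splitSQL("SELECT 'a;b' FROM T1; SELECT * FROM T2", ";")
print(statements)   # ["SELECT 'a;b' FROM T1", ' SELECT * FROM T2']
```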
###Markdown
Main %sql Magic DefinitionThe main %sql Magic logic is found in this section of code. This code will register the Magic command and allow Jupyter notebooks to interact with Db2 by using this extension.
###Code
@magics_class
class DB2(Magics):
@needs_local_scope
@line_cell_magic
def sql(self, line, cell=None, local_ns=None):
# Before we even get started, check to see if you have connected yet. Without a connection we
# can't do anything. You may have a connection request in the code, so if that is true, we run those,
# otherwise we connect immediately
# If your statement is not a connect, and you haven't connected, we need to do it for you
global _settings, _environment
global _hdbc, _hdbi, _connected, _runtime, sqlstate, sqlerror, sqlcode, sqlelapsed
# If you use %sql (line) we just run the SQL. If you use %%SQL the entire cell is run.
flag_cell = False
flag_output = False
sqlstate = "0"
sqlerror = ""
sqlcode = 0
sqlelapsed = 0
start_time = time.time()
end_time = time.time()
# Macros gets expanded before anything is done
SQL1 = setFlags(line.strip())
SQL1 = checkMacro(SQL1) # Update the SQL if any macros are in there
SQL2 = cell
if flag("-sampledata"): # Check if you only want sample data loaded
if (_connected == False):
if (db2_doConnect() == False):
errormsg('A CONNECT statement must be issued before issuing SQL statements.')
return
db2_create_sample(flag(["-q","-quiet"]))
return
if SQL1 == "?" or flag(["-h","-help"]): # Are you asking for help
sqlhelp()
return
if len(SQL1) == 0 and SQL2 == None: return # Nothing to do here
# Check for help
if SQL1.upper() == "? CONNECT": # Are you asking for help on CONNECT
connected_help()
return
sqlType,remainder = sqlParser(SQL1,local_ns) # What type of command do you have?
if (sqlType == "CONNECT"): # A connect request
parseConnect(SQL1,local_ns)
return
elif (sqlType == "DEFINE"): # Create a macro from the body
result = setMacro(SQL2,remainder)
return
elif (sqlType == "OPTION"):
setOptions(SQL1)
return
elif (sqlType == 'COMMIT' or sqlType == 'ROLLBACK' or sqlType == 'AUTOCOMMIT'):
parseCommit(remainder)
return
elif (sqlType == "PREPARE"):
pstmt = parsePExec(_hdbc, remainder)
return(pstmt)
elif (sqlType == "EXECUTE"):
result = parsePExec(_hdbc, remainder)
return(result)
elif (sqlType == "CALL"):
result = parseCall(_hdbc, remainder, local_ns)
return(result)
else:
pass
sql = SQL1
if (sql == ""): sql = SQL2
if (sql == ""): return # Nothing to do here
if (_connected == False):
if (db2_doConnect() == False):
errormsg('A CONNECT statement must be issued before issuing SQL statements.')
return
if _settings["maxrows"] == -1: # Set the return result size
pandas.reset_option('display.max_rows')
else:
pandas.options.display.max_rows = _settings["maxrows"]
runSQL = re.sub('.*?--.*$',"",sql,flags=re.M)
remainder = runSQL.replace("\n"," ")
if flag(["-d","-delim"]):
sqlLines = splitSQL(remainder,"@")
else:
sqlLines = splitSQL(remainder,";")
flag_cell = True
# For each line figure out if you run it as a command (db2) or select (sql)
for sqlin in sqlLines: # Run each command
sqlin = checkMacro(sqlin) # Update based on any macros
sqlType, sql = sqlParser(sqlin,local_ns) # Parse the SQL
if (sql.strip() == ""): continue
if flag(["-e","-echo"]): debug(sql,False)
if flag("-t"):
cnt = sqlTimer(_hdbc, _settings["runtime"], sql) # Given the sql and parameters, clock the time
if (cnt >= 0): print("Total iterations in %s second(s): %s" % (_settings["runtime"],cnt))
return(cnt)
elif flag(["-pb","-bar","-pp","-pie","-pl","-line"]): # We are plotting some results
plotData(_hdbi, sql) # Plot the data and return
return
else:
try: # See if we have an answer set
stmt = ibm_db.prepare(_hdbc,sql)
if (ibm_db.num_fields(stmt) == 0): # No, so we just execute the code
result = ibm_db.execute(stmt) # Run it
if (result == False): # Error executing the code
db2_error(flag(["-q","-quiet"]))
continue
rowcount = ibm_db.num_rows(stmt)
if (rowcount == 0 and flag(["-q","-quiet"]) == False):
errormsg("No rows found.")
continue # Continue running
elif flag(["-r","-array","-j","-json"]): # raw, json, format json
row_count = 0
resultSet = []
try:
result = ibm_db.execute(stmt) # Run it
if (result == False): # Error executing the code
db2_error(flag(["-q","-quiet"]))
return
if flag("-j"): # JSON single output
row_count = 0
json_results = []
while( ibm_db.fetch_row(stmt) ):
row_count = row_count + 1
jsonVal = ibm_db.result(stmt,0)
jsonDict = json.loads(jsonVal)
json_results.append(jsonDict)
flag_output = True
if (row_count == 0): sqlcode = 100
return(json_results)
else:
return(fetchResults(stmt))
except Exception as err:
db2_error(flag(["-q","-quiet"]))
return
else:
try:
df = pandas.read_sql(sql,_hdbi)
except Exception as err:
db2_error(False)
return
if (len(df) == 0):
sqlcode = 100
if (flag(["-q","-quiet"]) == False):
errormsg("No rows found")
continue
flag_output = True
if flag("-grid") or _settings['display'] == 'GRID': # Check to see if we can display the results
if (_environment['qgrid'] == False):
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
print(df.to_string())
else:
try:
pdisplay(qgrid.show_grid(df))
except:
errormsg("Grid cannot be used to display data with duplicate column names. Use option -a or %sql OPTION DISPLAY PANDAS instead.")
return
else:
if flag(["-a","-all"]) or _settings["maxrows"] == -1 : # All of the rows
pandas.options.display.max_rows = None
pandas.options.display.max_columns = None
return df # print(df.to_string())
else:
pandas.options.display.max_rows = _settings["maxrows"]
pandas.options.display.max_columns = None
return df # pdisplay(df) # print(df.to_string())
except:
db2_error(flag(["-q","-quiet"]))
continue # return
end_time = time.time()
sqlelapsed = end_time - start_time
if (flag_output == False and flag(["-q","-quiet"]) == False): print("Command completed.")
# Register the Magic extension in Jupyter
ip = get_ipython()
ip.register_magics(DB2)
load_settings()
success("Db2 Extensions Loaded.")
###Output
_____no_output_____
###Markdown
Pre-defined MacrosThese macros are used to simulate the LIST TABLES and DESCRIBE commands that are available from within the Db2 command line.
###Code
%%sql define LIST
#
# The LIST macro is used to list all of the tables in the current schema or for all schemas
#
var syntax Syntax: LIST TABLES [FOR ALL | FOR SCHEMA name]
#
# Only LIST TABLES is supported by this macro
#
if {^1} <> 'TABLES'
exit {syntax}
endif
#
# This SQL is a temporary table that contains the description of the different table types
#
WITH TYPES(TYPE,DESCRIPTION) AS (
VALUES
('A','Alias'),
('G','Created temporary table'),
('H','Hierarchy table'),
('L','Detached table'),
('N','Nickname'),
('S','Materialized query table'),
('T','Table'),
('U','Typed table'),
('V','View'),
('W','Typed view')
)
SELECT TABNAME, TABSCHEMA, T.DESCRIPTION FROM SYSCAT.TABLES S, TYPES T
WHERE T.TYPE = S.TYPE
#
# Case 1: No arguments - LIST TABLES
#
if {argc} == 1
AND OWNER = CURRENT USER
ORDER BY TABNAME, TABSCHEMA
return
endif
#
# Case 2: Need 3 arguments - LIST TABLES FOR ALL
#
if {argc} == 3
if {^2}&{^3} == 'FOR&ALL'
ORDER BY TABNAME, TABSCHEMA
return
endif
exit {syntax}
endif
#
# Case 3: Need FOR SCHEMA something here
#
if {argc} == 4
if {^2}&{^3} == 'FOR&SCHEMA'
AND TABSCHEMA = '{^4}'
ORDER BY TABNAME, TABSCHEMA
return
else
exit {syntax}
endif
endif
#
# Nothing matched - Error
#
exit {syntax}
%%sql define describe
#
# The DESCRIBE command can either use the syntax DESCRIBE TABLE <name> or DESCRIBE TABLE SELECT ...
#
var syntax Syntax: DESCRIBE [TABLE name | SELECT statement]
#
# Check to see what count of variables is... Must be at least 2 items DESCRIBE TABLE x or SELECT x
#
if {argc} < 2
exit {syntax}
endif
CALL ADMIN_CMD('{*0}');
###Output
_____no_output_____
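###Markdown
Once these macros are defined, they can be invoked like any other %sql command; the schema and table names below are only examples.
```
%sql LIST TABLES
%sql LIST TABLES FOR SCHEMA DB2INST1
%sql DESCRIBE TABLE EMPLOYEE
```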
###Markdown
Set the table formatting to left align a table in a cell. By default, tables are centered in a cell. Remove this cell if you don't want to change Jupyter notebook formatting for tables. In addition, we skip this code if you are running in a shell environment rather than a Jupyter notebook
###Code
#%%html
#<style>
# table {margin-left: 0 !important; text-align: left;}
#</style>
###Output
_____no_output_____
###Markdown
DB2 Jupyter Notebook Extensions Version: 2020-04-27 This code is imported as a Jupyter notebook extension in any notebooks you create with DB2 code in it. Place the following line of code in any notebook that you want to use these commands with: %run db2.ipynb This code defines a Jupyter/Python magic command called `%sql` which allows you to execute DB2 specific calls to the database. There are other packages available for manipulating databases, but this one has been specifically designed for demonstrating a number of the SQL features available in DB2.There are two ways of executing the `%sql` command. A single line SQL statement would use the line format of the magic command:%sql SELECT * FROM EMPLOYEEIf you have a large block of sql then you would place the %%sql command at the beginning of the block and then place the SQL statements into the remainder of the block. Using this form of the `%%sql` statement means that the notebook cell can only contain SQL and no other statements.%%sqlSELECT * FROM EMPLOYEEORDER BY LASTNAMEYou can have multiple lines in the SQL block (`%%sql`). The default SQL delimiter is the semi-colon (`;`).If you have scripts (triggers, procedures, functions) that use the semi-colon as part of the script, you will need to use the `-d` option to change the delimiter to an at "`@`" sign. %%sql -dSELECT * FROM EMPLOYEE@CREATE PROCEDURE ...@The `%sql` command allows most DB2 commands to execute and has a special version of the CONNECT statement. A CONNECT by itself will attempt to reconnect to the database using previously used settings. If it cannot connect, it will prompt the user for additional information. The CONNECT command has the following format:%sql CONNECT TO <database> USER <userid> USING <password | ?> HOST <ip address> PORT <port number>If you use a "`?`" for the password field, the system will prompt you for a password. This avoids typing the password as clear text on the screen. If a connection is not successful, the system will print the error message associated with the connect request.If the connection is successful, the parameters are saved on your system and will be used the next time you run a SQL statement, or when you issue the %sql CONNECT command with no parameters. In addition to the -d option, there are a number of different options that you can specify at the beginning of the SQL: - `-d, -delim` - Change SQL delimiter to "`@`" from "`;`" - `-q, -quiet` - Quiet results - no messages returned from the function - `-r, -array` - Return the result set as an array of values instead of a dataframe - `-t, -time` - Time the following SQL statement and return the number of times it executes in 1 second - `-j` - Format the first character column of the result set as a JSON record - `-json` - Return result set as an array of json records - `-a, -all` - Return all rows in answer set and do not limit display - `-grid` - Display the results in a scrollable grid - `-pb, -bar` - Plot the results as a bar chart - `-pl, -line` - Plot the results as a line chart - `-pp, -pie` - Plot the results as a pie chart - `-e, -echo` - Any macro expansions are displayed in an output box - `-sampledata` - Create and load the EMPLOYEE and DEPARTMENT tablesYou can pass python variables to the `%sql` command by using the `{}` braces with the name of the variable in between. Note that you will need to place proper punctuation around the variable in the event the SQL command requires it.
For instance, the following example will find employee '000010' in the EMPLOYEE table.empno = '000010'%sql SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO='{empno}'The other option is to use parameter markers. What you would need to do is use the name of the variable with a colon in front of it and the program will prepare the statement and then pass the variable to Db2 when the statement is executed. This allows you to create complex strings that might contain quote characters and other special characters and not have to worry about enclosing the string with the correct quotes. Note that you do not place the quotes around the variable even though it is a string.empno = '000020'%sql SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO=:empno Development SQLThe previous set of `%sql` and `%%sql` commands deals with SQL statements and commands that are run in an interactive manner. There is a class of SQL commands that are more suited to a development environment where code is iterated or requires changing input. The commands that are associated with this form of SQL are:- AUTOCOMMIT- COMMIT/ROLLBACK- PREPARE - EXECUTEAutocommit is the default manner in which SQL statements are executed. At the end of the successful completion of a statement, the results are committed to the database. There is no concept of a transaction where multiple DML/DDL statements are considered one transaction. The `AUTOCOMMIT` command allows you to turn autocommit `OFF` or `ON`. This means that the set of SQL commands run after the `AUTOCOMMIT OFF` command are not committed to the database until a `COMMIT` or `ROLLBACK` command is issued.`COMMIT` (`WORK`) will finalize all of the transactions (`COMMIT`) to the database and `ROLLBACK` will undo all of the changes. If you issue a `SELECT` statement during the execution of your block, the results will reflect all of your changes. If you `ROLLBACK` the transaction, the changes will be lost.`PREPARE` is typically used in a situation where you want to repeatedly execute a SQL statement with different variables without incurring the SQL compilation overhead. For instance:```x = %sql PREPARE SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO=?for y in ['000010','000020','000030']: %sql execute :x using :y````EXECUTE` is used to execute a previously compiled statement. To retrieve the error codes that might be associated with any SQL call, the following variables are updated after every call:* SQLCODE* SQLSTATE* SQLERROR - Full error message retrieved from Db2 Install Db2 Python DriverIf the ibm_db driver is not installed on your system, the subsequent Db2 commands will fail. In order to install the Db2 driver, issue the following command from a Jupyter notebook cell:```!pip install --user ibm_db``` Db2 Jupyter ExtensionsThis section of code has the import statements and global variables defined for the remainder of the functions.
###Code
#
# Set up Jupyter MAGIC commands "sql".
# %sql will return results from a DB2 select statement or execute a DB2 command
#
# IBM 2019: George Baklarz
# Version 2019-10-03
#
from __future__ import print_function
from IPython.display import HTML as pHTML, Image as pImage, display as pdisplay, Javascript as Javascript
from IPython.core.magic import (Magics, magics_class, line_magic,
cell_magic, line_cell_magic, needs_local_scope)
import ibm_db
import pandas
import ibm_db_dbi
import json
import matplotlib
import matplotlib.pyplot as plt
import getpass
import os
import pickle
import time
import sys
import re
import warnings
warnings.filterwarnings("ignore")
# Python Hack for Input between 2 and 3
try:
input = raw_input
except NameError:
pass
_settings = {
"maxrows" : 10,
"maxgrid" : 5,
"runtime" : 1,
"display" : "PANDAS",
"database" : "",
"hostname" : "localhost",
"port" : "50000",
"protocol" : "TCPIP",
"uid" : "DB2INST1",
"pwd" : "password",
"ssl" : ""
}
_environment = {
"jupyter" : True,
"qgrid" : True
}
_display = {
'fullWidthRows': True,
'syncColumnCellResize': True,
'forceFitColumns': False,
'defaultColumnWidth': 150,
'rowHeight': 28,
'enableColumnReorder': False,
'enableTextSelectionOnCells': True,
'editable': False,
'autoEdit': False,
'explicitInitialization': True,
'maxVisibleRows': 5,
'minVisibleRows': 5,
'sortable': True,
'filterable': False,
'highlightSelectedCell': False,
'highlightSelectedRow': True
}
# Connection settings for statements
_connected = False
_hdbc = None
_hdbi = None
_stmt = []
_stmtID = []
_stmtSQL = []
_vars = {}
_macros = {}
_flags = []
_debug = False
# Db2 Error Messages and Codes
sqlcode = 0
sqlstate = "0"
sqlerror = ""
sqlelapsed = 0
# Check to see if QGrid is installed
try:
import qgrid
qgrid.set_defaults(grid_options=_display)
except:
_environment['qgrid'] = False
# Check if we are running in iPython or Jupyter
try:
if (get_ipython().config == {}):
_environment['jupyter'] = False
_environment['qgrid'] = False
else:
_environment['jupyter'] = True
except:
_environment['jupyter'] = False
_environment['qgrid'] = False
###Output
_____no_output_____
###Markdown
OptionsThere are four options that can be set with the **`%sql`** command. These options are shown below with the default value shown in parenthesis.- **`MAXROWS n (10)`** - The maximum number of rows that will be displayed before summary information is shown. If the answer set is less than this number of rows, it will be completely shown on the screen. If the answer set is larger than this amount, only the first 5 rows and last 5 rows of the answer set will be displayed. If you want to display a very large answer set, you may want to consider using the grid option `-g` to display the results in a scrollable table. If you really want to show all results then setting MAXROWS to -1 will return all output.- **`MAXGRID n (5)`** - The maximum size of a grid display. When displaying a result set in a grid `-g`, the default size of the display window is 5 rows. You can set this to a larger size so that more rows are shown on the screen. Note that the minimum size always remains at 5 which means that if the system is unable to display your maximum row size it will reduce the table display until it fits.- **`DISPLAY PANDAS | GRID (PANDAS)`** - Display the results as a PANDAS dataframe (default) or as a scrollable GRID- **`RUNTIME n (1)`** - When using the timer option on a SQL statement, the statement will execute for **`n`** number of seconds. The result that is returned is the number of times the SQL statement executed rather than the execution time of the statement. The default value for runtime is one second, so if the SQL is very complex you will need to increase the run time.- **`LIST`** - Display the current settingsTo set an option use the following syntax:```%sql option option_name value option_name value ....```The following example sets all options:```%sql option maxrows 100 runtime 2 display grid maxgrid 10```The values will **not** be saved between Jupyter notebooks sessions. If you need to retrieve the current options values, use the LIST command as the only argument:```%sql option list```
###Code
def setOptions(inSQL):
global _settings, _display
cParms = inSQL.split()
cnt = 0
while cnt < len(cParms):
if cParms[cnt].upper() == 'MAXROWS':
if cnt+1 < len(cParms):
try:
_settings["maxrows"] = int(cParms[cnt+1])
except Exception as err:
errormsg("Invalid MAXROWS value provided.")
pass
cnt = cnt + 1
else:
errormsg("No maximum rows specified for the MAXROWS option.")
return
elif cParms[cnt].upper() == 'MAXGRID':
if cnt+1 < len(cParms):
try:
maxgrid = int(cParms[cnt+1])
if (maxgrid <= 5): # Minimum window size is 5
maxgrid = 5
_display["maxVisibleRows"] = int(cParms[cnt+1])
try:
import qgrid
qgrid.set_defaults(grid_options=_display)
except:
_environment['qgrid'] = False
except Exception as err:
errormsg("Invalid MAXGRID value provided.")
pass
cnt = cnt + 1
else:
errormsg("No maximum rows specified for the MAXROWS option.")
return
elif cParms[cnt].upper() == 'RUNTIME':
if cnt+1 < len(cParms):
try:
_settings["runtime"] = int(cParms[cnt+1])
except Exception as err:
errormsg("Invalid RUNTIME value provided.")
pass
cnt = cnt + 1
else:
errormsg("No value provided for the RUNTIME option.")
return
elif cParms[cnt].upper() == 'DISPLAY':
if cnt+1 < len(cParms):
if (cParms[cnt+1].upper() == 'GRID'):
_settings["display"] = 'GRID'
elif (cParms[cnt+1].upper() == 'PANDAS'):
_settings["display"] = 'PANDAS'
else:
errormsg("Invalid DISPLAY value provided.")
cnt = cnt + 1
else:
errormsg("No value provided for the DISPLAY option.")
return
elif (cParms[cnt].upper() == 'LIST'):
print("(MAXROWS) Maximum number of rows displayed: " + str(_settings["maxrows"]))
print("(MAXGRID) Maximum grid display size: " + str(_settings["maxgrid"]))
print("(RUNTIME) How many seconds to a run a statement for performance testing: " + str(_settings["runtime"]))
print("(DISPLAY) Use PANDAS or GRID display format for output: " + _settings["display"])
return
else:
cnt = cnt + 1
save_settings()
###Output
_____no_output_____
###Markdown
SQL HelpThe calling format of this routine is:```sqlhelp()```This code displays help related to the %sql magic command. This help is displayed when you issue a %sql or %%sql command by itself, or use the %sql -h flag.
###Code
def sqlhelp():
global _environment
if (_environment["jupyter"] == True):
sd = '<td style="text-align:left;">'
ed1 = '</td>'
ed2 = '</td>'
sh = '<th style="text-align:left;">'
eh1 = '</th>'
eh2 = '</th>'
sr = '<tr>'
er = '</tr>'
helpSQL = """
<h3>SQL Options</h3>
<p>The following options are available as part of a SQL statement. The options are always preceded with a
minus sign (i.e. -q).
<table>
{sr}
{sh}Option{eh1}{sh}Description{eh2}
{er}
{sr}
{sd}a, all{ed1}{sd}Return all rows in answer set and do not limit display{ed2}
{er}
{sr}
{sd}d{ed1}{sd}Change SQL delimiter to "@" from ";"{ed2}
{er}
{sr}
{sd}e, echo{ed1}{sd}Echo the SQL command that was generated after macro and variable substituion.{ed2}
{er}
{sr}
{sd}h, help{ed1}{sd}Display %sql help information.{ed2}
{er}
{sr}
{sd}j{ed1}{sd}Create a pretty JSON representation. Only the first column is formatted{ed2}
{er}
{sr}
{sd}json{ed1}{sd}Retrieve the result set as a JSON record{ed2}
{er}
{sr}
{sd}pb, bar{ed1}{sd}Plot the results as a bar chart{ed2}
{er}
{sr}
{sd}pl, line{ed1}{sd}Plot the results as a line chart{ed2}
{er}
{sr}
{sd}pp, pie{ed1}{sd}Plot Pie: Plot the results as a pie chart{ed2}
{er}
{sr}
{sd}q, quiet{ed1}{sd}Quiet results - no answer set or messages returned from the function{ed2}
{er}
{sr}
{sd}r, array{ed1}{sd}Return the result set as an array of values{ed2}
{er}
{sr}
{sd}sampledata{ed1}{sd}Create and load the EMPLOYEE and DEPARTMENT tables{ed2}
{er}
{sr}
{sd}t,time{ed1}{sd}Time the following SQL statement and return the number of times it executes in 1 second{ed2}
{er}
{sr}
{sd}grid{ed1}{sd}Display the results in a scrollable grid{ed2}
{er}
</table>
"""
else:
helpSQL = """
SQL Options
The following options are available as part of a SQL statement. Options are always
preceded with a minus sign (i.e. -q).
Option Description
a, all Return all rows in answer set and do not limit display
d Change SQL delimiter to "@" from ";"
e, echo Echo the SQL command that was generated after substitution
h, help Display %sql help information
j Create a pretty JSON representation. Only the first column is formatted
json Retrieve the result set as a JSON record
pb, bar Plot the results as a bar chart
pl, line Plot the results as a line chart
pp, pie Plot Pie: Plot the results as a pie chart
q, quiet Quiet results - no answer set or messages returned from the function
r, array Return the result set as an array of values
sampledata Create and load the EMPLOYEE and DEPARTMENT tables
t,time Time the SQL statement and return the execution count per second
grid Display the results in a scrollable grid
"""
helpSQL = helpSQL.format(**locals())
if (_environment["jupyter"] == True):
pdisplay(pHTML(helpSQL))
else:
print(helpSQL)
###Output
_____no_output_____
###Markdown
Connection HelpThe calling format of this routine is:```connected_help()```This code displays help related to the CONNECT command. This code is displayed when you issue a %sql CONNECT command with no arguments or you are running a SQL statement and there isn't any connection to a database yet.
###Code
def connected_help():
sd = '<td style="text-align:left;">'
ed = '</td>'
sh = '<th style="text-align:left;">'
eh = '</th>'
sr = '<tr>'
er = '</tr>'
if (_environment['jupyter'] == True):
helpConnect = """
<h3>Connecting to Db2</h3>
<p>The CONNECT command has the following format:
<p>
<pre>
%sql CONNECT TO <database> USER <userid> USING <password|?> HOST <ip address> PORT <port number> <SSL>
%sql CONNECT CREDENTIALS <varname>
%sql CONNECT CLOSE
%sql CONNECT RESET
%sql CONNECT PROMPT - use this to be prompted for values
</pre>
<p>
If you use a "?" for the password field, the system will prompt you for a password. This avoids typing the
password as clear text on the screen. If a connection is not successful, the system will print the error
message associated with the connect request.
<p>
The <b>CREDENTIALS</b> option allows you to use credentials that are supplied by Db2 on Cloud instances.
The credentials can be supplied as a variable and if successful, the variable will be saved to disk
for future use. If you create another notebook and use the identical syntax, if the variable
is not defined, the contents on disk will be used as the credentials. You should assign the
credentials to a variable that represents the database (or schema) that you are communicating with.
Using familiar names makes it easier to remember the credentials when connecting.
<p>
<b>CONNECT CLOSE</b> will close the current connection, but will not reset the database parameters. This means that
if you issue the CONNECT command again, the system should be able to reconnect you to the database.
<p>
<b>CONNECT RESET</b> will close the current connection and remove any information on the connection. You will need
to issue a new CONNECT statement with all of the connection information.
<p>
If the connection is successful, the parameters are saved on your system and will be used the next time you
run an SQL statement, or when you issue the %sql CONNECT command with no parameters.
<p>If you issue CONNECT RESET, all of the current values will be deleted and you will need to
issue a new CONNECT statement.
<p>A CONNECT command without any parameters will attempt to re-connect to the previous database you
were using. If the connection could not be established, the program will prompt you for
the values. To cancel the connection attempt, enter a blank value for any of the values. The connection
panel will request the following values in order to connect to Db2:
<table>
{sr}
{sh}Setting{eh}
{sh}Description{eh}
{er}
{sr}
{sd}Database{ed}{sd}Database name you want to connect to.{ed}
{er}
{sr}
{sd}Hostname{ed}
{sd}Use localhost if Db2 is running on your own machine, but this can be an IP address or host name.{ed}
{er}
{sr}
{sd}PORT{ed}
{sd}The port to use for connecting to Db2. This is usually 50000.{ed}
{er}
{sr}
{sd}SSL{ed}
{sd}If you are connecting to a secure port (50001) with SSL then you must include this keyword in the connect string.{ed}
{er}
{sr}
{sd}Userid{ed}
{sd}The userid to use when connecting (usually DB2INST1){ed}
{er}
{sr}
{sd}Password{ed}
{sd}No password is provided so you have to enter a value{ed}
{er}
</table>
"""
else:
helpConnect = """\
Connecting to Db2
The CONNECT command has the following format:
%sql CONNECT TO database USER userid USING password | ?
HOST ip address PORT port number SSL
%sql CONNECT CREDENTIALS varname
%sql CONNECT CLOSE
%sql CONNECT RESET
If you use a "?" for the password field, the system will prompt you for a password.
This avoids typing the password as clear text on the screen. If a connection is
not successful, the system will print the error message associated with the connect
request.
The CREDENTIALS option allows you to use credentials that are supplied by Db2 on
Cloud instances. The credentials can be supplied as a variable and if successful,
the variable will be saved to disk for future use. If you create another notebook
and use the identical syntax, if the variable is not defined, the contents on disk
will be used as the credentials. You should assign the credentials to a variable
that represents the database (or schema) that you are communicating with. Using
familiar names makes it easier to remember the credentials when connecting.
CONNECT CLOSE will close the current connection, but will not reset the database
parameters. This means that if you issue the CONNECT command again, the system
should be able to reconnect you to the database.
CONNECT RESET will close the current connection and remove any information on the
connection. You will need to issue a new CONNECT statement with all of the connection
information.
If the connection is successful, the parameters are saved on your system and will be
used the next time you run an SQL statement, or when you issue the %sql CONNECT
command with no parameters. If you issue CONNECT RESET, all of the current values
will be deleted and you will need to issue a new CONNECT statement.
A CONNECT command without any parameters will attempt to re-connect to the previous
database you were using. If the connection could not be established, the program will
prompt you for the values. To cancel the connection attempt, enter a blank value for
any of the values. The connection panel will request the following values in order
to connect to Db2:
Setting Description
Database Database name you want to connect to
Hostname Use localhost if Db2 is running on your own machine, but this can
be an IP address or host name.
PORT The port to use for connecting to Db2. This is usually 50000.
Userid The userid to use when connecting (usually DB2INST1)
Password No password is provided so you have to enter a value
SSL Include this keyword to indicate you are connecting via SSL (usually port 50001)
"""
helpConnect = helpConnect.format(**locals())
if (_environment['jupyter'] == True):
pdisplay(pHTML(helpConnect))
else:
print(helpConnect)
###Output
_____no_output_____
###Markdown
Prompt for Connection InformationIf you are running an SQL statement and have not yet connected to a database, the %sql command will prompt you for connection information. In order to connect to a database, you must supply:- Database name - Host name (IP address or name)- Port number- Userid- Password- Secure socketThe routine is called without any parameters:```connected_prompt()```
###Code
# Prompt for Connection information
def connected_prompt():
global _settings
_database = ''
_hostname = ''
_port = ''
_uid = ''
_pwd = ''
_ssl = ''
print("Enter the database connection details (Any empty value will cancel the connection)")
_database = input("Enter the database name: ");
if (_database.strip() == ""): return False
_hostname = input("Enter the HOST IP address or symbolic name: ");
if (_hostname.strip() == ""): return False
_port = input("Enter the PORT number: ");
if (_port.strip() == ""): return False
_ssl = input("Is this a secure (SSL) port (y or n)");
if (_ssl.strip() == ""): return False
if (_ssl == "n"):
_ssl = ""
else:
_ssl = "Security=SSL;"
_uid = input("Enter Userid on the DB2 system: ").upper();
if (_uid.strip() == ""): return False
_pwd = getpass.getpass("Password [password]: ");
if (_pwd.strip() == ""): return False
_settings["database"] = _database.strip()
_settings["hostname"] = _hostname.strip()
_settings["port"] = _port.strip()
_settings["uid"] = _uid.strip()
_settings["pwd"] = _pwd.strip()
_settings["ssl"] = _ssl.strip()
_settings["maxrows"] = 10
_settings["maxgrid"] = 5
_settings["runtime"] = 1
return True
# Split port and IP addresses
def split_string(in_port,splitter=":"):
# Split input into an IP address and Port number
global _settings
checkports = in_port.split(splitter)
ip = checkports[0]
if (len(checkports) > 1):
port = checkports[1]
else:
port = None
return ip, port
###Output
_____no_output_____
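###Markdown
For example (assuming the cell above has been run), a HOST value of the form address:port is separated into its two parts, and the port comes back as None when it is not supplied. The address below is a placeholder.
```
print(split_string("192.168.1.100:50001"))   # ('192.168.1.100', '50001')
print(split_string("localhost"))             # ('localhost', None)
```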
###Markdown
Connect Syntax ParserThe parseConnect routine is used to parse the CONNECT command that the user issued within the %sql command. The format of the command is:```parseConnect(inSQL,local_ns)```The inSQL string contains the CONNECT keyword with some additional parameters. The format of the CONNECT command is one of:```CONNECT RESETCONNECT CLOSECONNECT CREDENTIALS <varname> CONNECT TO database USER userid USING password HOST hostname PORT portnumber ```If you have credentials available from Db2 on Cloud, place the contents of the credentials into a variable and then use the `CONNECT CREDENTIALS <varname>` syntax to connect to the database.In addition, supplying a question mark (?) for password will result in the program prompting you for the password rather than having it as clear text in your scripts.When all of the information is checked in the command, the db2_doConnect function is called to actually do the connection to the database.
###Code
# Parse the CONNECT statement and execute if possible
def parseConnect(inSQL,local_ns):
global _settings, _connected
_connected = False
cParms = inSQL.split()
cnt = 0
_settings["ssl"] = ""
while cnt < len(cParms):
if cParms[cnt].upper() == 'TO':
if cnt+1 < len(cParms):
_settings["database"] = cParms[cnt+1].upper()
cnt = cnt + 1
else:
errormsg("No database specified in the CONNECT statement")
return
elif cParms[cnt].upper() == "SSL":
_settings["ssl"] = "Security=SSL;"
cnt = cnt + 1
elif cParms[cnt].upper() == 'CREDENTIALS':
if cnt+1 < len(cParms):
credentials = cParms[cnt+1]
tempid = eval(credentials,local_ns)
if (isinstance(tempid,dict) == False):
errormsg("The CREDENTIALS variable (" + credentials + ") does not contain a valid Python dictionary (JSON object)")
return
if (tempid == None):
fname = credentials + ".pickle"
try:
with open(fname,'rb') as f:
_id = pickle.load(f)
except:
errormsg("Unable to find credential variable or file.")
return
else:
_id = tempid
try:
_settings["database"] = _id["db"]
_settings["hostname"] = _id["hostname"]
_settings["port"] = _id["port"]
_settings["uid"] = _id["username"]
_settings["pwd"] = _id["password"]
try:
fname = credentials + ".pickle"
with open(fname,'wb') as f:
pickle.dump(_id,f)
except:
errormsg("Failed trying to write Db2 Credentials.")
return
except:
errormsg("Credentials file is missing information. db/hostname/port/username/password required.")
return
else:
errormsg("No Credentials name supplied")
return
cnt = cnt + 1
elif cParms[cnt].upper() == 'USER':
if cnt+1 < len(cParms):
_settings["uid"] = cParms[cnt+1].upper()
cnt = cnt + 1
else:
errormsg("No userid specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'USING':
if cnt+1 < len(cParms):
_settings["pwd"] = cParms[cnt+1]
if (_settings["pwd"] == '?'):
_settings["pwd"] = getpass.getpass("Password [password]: ") or "password"
cnt = cnt + 1
else:
errormsg("No password specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'HOST':
if cnt+1 < len(cParms):
hostport = cParms[cnt+1].upper()
ip, port = split_string(hostport)
if (port == None): _settings["port"] = "50000"
else: _settings["port"] = port
_settings["hostname"] = ip
cnt = cnt + 1
else:
errormsg("No hostname specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'PORT':
if cnt+1 < len(cParms):
_settings["port"] = cParms[cnt+1].upper()
cnt = cnt + 1
else:
errormsg("No port specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'PROMPT':
if (connected_prompt() == False):
print("Connection canceled.")
return
else:
cnt = cnt + 1
elif cParms[cnt].upper() in ('CLOSE','RESET') :
try:
result = ibm_db.close(_hdbc)
_hdbi.close()
except:
pass
success("Connection closed.")
if cParms[cnt].upper() == 'RESET':
_settings["database"] = ''
return
else:
cnt = cnt + 1
_ = db2_doConnect()
###Output
_____no_output_____
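###Markdown
A connection sketch using a credentials dictionary (every value below is a placeholder): the dictionary must contain at least the db, hostname, port, username and password keys that the parser extracts above.
```
mydb = {
    "db"       : "BLUDB",
    "hostname" : "some-host.example.com",
    "port"     : "50001",
    "username" : "db2inst1",
    "password" : "secret"
}
%sql CONNECT CREDENTIALS mydb
```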
###Markdown
Connect to Db2The db2_doConnect routine is called when a connection needs to be established to a Db2 database. The command does not require any parameters since it relies on the settings variable which contains all of the information it needs to connect to a Db2 database.```db2_doConnect()```There are 4 additional variables that are used throughout the routines to stay connected with the Db2 database. These variables are:- hdbc - The connection handle to the database- hstmt - A statement handle used for executing SQL statements- connected - A flag that tells the program whether or not we are currently connected to a database- runtime - Used to tell %sql the length of time (default 1 second) to run a statement when timing itThe only database driver that is used in this program is the IBM DB2 ODBC DRIVER. This driver needs to be loaded on the system that is connecting to Db2. The Jupyter notebook that is built by this system installs the driver for you so you shouldn't have to do anything other than build the container.If the connection is successful, the connected flag is set to True. Any subsequent %sql call will check to see if you are connected and initiate another prompted connection if you do not have a connection to a database.
###Code
def db2_doConnect():
global _hdbc, _hdbi, _connected, _runtime
global _settings
if _connected == False:
if len(_settings["database"]) == 0:
return False
dsn = (
"DRIVER={{IBM DB2 ODBC DRIVER}};"
"DATABASE={0};"
"HOSTNAME={1};"
"PORT={2};"
"PROTOCOL=TCPIP;"
"UID={3};"
"PWD={4};{5}").format(_settings["database"],
_settings["hostname"],
_settings["port"],
_settings["uid"],
_settings["pwd"],
_settings["ssl"])
# Get a database handle (hdbc) and a statement handle (hstmt) for subsequent access to DB2
try:
_hdbc = ibm_db.connect(dsn, "", "")
except Exception as err:
db2_error(False,True) # errormsg(str(err))
_connected = False
_settings["database"] = ''
return False
try:
_hdbi = ibm_db_dbi.Connection(_hdbc)
except Exception as err:
db2_error(False,True) # errormsg(str(err))
_connected = False
_settings["database"] = ''
return False
_connected = True
# Save the values for future use
save_settings()
success("Connection successful.")
return True
###Output
_____no_output_____
###Markdown
Load/Save SettingsThere are two routines that load and save settings between Jupyter notebooks. These routines are called without any parameters.```load_settings() save_settings()```There is a global structure called settings which contains the following fields:```_settings = { "maxrows" : 10, "maxgrid" : 5, "runtime" : 1, "display" : "TEXT", "database" : "", "hostname" : "localhost", "port" : "50000", "protocol" : "TCPIP", "uid" : "DB2INST1", "pwd" : "password"}```The information in the settings structure is used for re-connecting to a database when you start up a Jupyter notebook. When the session is established for the first time, the load_settings() function is called to get the contents of the pickle file (db2connect.pickle, a Jupyter session file) that will be used for the first connection to the database. Whenever a new connection is made, the file is updated with the save_settings() function.
###Code
def load_settings():
# This routine will load the settings from the previous session if they exist
global _settings
fname = "db2connect.pickle"
try:
with open(fname,'rb') as f:
_settings = pickle.load(f)
# Reset runtime to 1 since it would be unexpected to keep the same value between connections
_settings["runtime"] = 1
_settings["maxgrid"] = 5
except:
pass
return
def save_settings():
# This routine will save the current settings if they exist
global _settings
fname = "db2connect.pickle"
try:
with open(fname,'wb') as f:
pickle.dump(_settings,f)
except:
errormsg("Failed trying to write Db2 Configuration Information.")
return
###Output
_____no_output_____
###Markdown
Error and Message FunctionsThere are three types of messages that are thrown by the %db2 magic command. The first routine will print out a success message with no special formatting:```success(message)```The second message is used for displaying an error message that is not associated with a SQL error. This type of error message is surrounded with a red box to highlight the problem. Note that the success message has code that has been commented out that could also show a successful return code with a green box. ```errormsg(message)```The final error message is based on an error occuring in the SQL code that was executed. This code will parse the message returned from the ibm_db interface and parse it to return only the error message portion (and not all of the wrapper code from the driver).```db2_error(quiet,connect=False)```The quiet flag is passed to the db2_error routine so that messages can be suppressed if the user wishes to ignore them with the -q flag. A good example of this is dropping a table that does not exist. We know that an error will be thrown so we can ignore it. The information that the db2_error routine gets is from the stmt_errormsg() function from within the ibm_db driver. The db2_error function should only be called after a SQL failure otherwise there will be no diagnostic information returned from stmt_errormsg().If the connect flag is True, the routine will get the SQLSTATE and SQLCODE from the connection error message rather than a statement error message.
###Code
def db2_error(quiet,connect=False):
global sqlerror, sqlcode, sqlstate, _environment
try:
if (connect == False):
errmsg = ibm_db.stmt_errormsg().replace('\r',' ')
errmsg = errmsg[errmsg.rfind("]")+1:].strip()
else:
errmsg = ibm_db.conn_errormsg().replace('\r',' ')
errmsg = errmsg[errmsg.rfind("]")+1:].strip()
sqlerror = errmsg
msg_start = errmsg.find("SQLSTATE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlstate = errmsg[msg_start+9:msg_end]
else:
sqlstate = "0"
msg_start = errmsg.find("SQLCODE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlcode = errmsg[msg_start+8:msg_end]
try:
sqlcode = int(sqlcode)
except:
pass
else:
sqlcode = 0
except:
errmsg = "Unknown error."
sqlcode = -99999
sqlstate = "-99999"
sqlerror = errmsg
return
msg_start = errmsg.find("SQLSTATE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlstate = errmsg[msg_start+9:msg_end]
else:
sqlstate = "0"
msg_start = errmsg.find("SQLCODE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlcode = errmsg[msg_start+8:msg_end]
try:
sqlcode = int(sqlcode)
except:
pass
else:
sqlcode = 0
if quiet == True: return
if (errmsg == ""): return
html = '<p><p style="border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html+errmsg+"</p>"))
else:
print(errmsg)
# Print out an error message
def errormsg(message):
global _environment
if (message != ""):
html = '<p><p style="border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html + message + "</p>"))
else:
print(message)
def success(message):
if (message != ""):
print(message)
return
def debug(message,error=False):
global _environment
if (_environment["jupyter"] == True):
spacer = "<br>" + " "
else:
spacer = "\n "
if (message != ""):
lines = message.split('\n')
msg = ""
indent = 0
for line in lines:
delta = line.count("(") - line.count(")")
if (msg == ""):
msg = line
indent = indent + delta
else:
if (delta < 0): indent = indent + delta
msg = msg + spacer * (indent*2) + line
if (delta > 0): indent = indent + delta
if (indent < 0): indent = 0
if (error == True):
html = '<p><pre style="font-family: monospace; border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
else:
html = '<p><pre style="font-family: monospace; border:2px; border-style:solid; border-color:#008000; background-color:#e6ffe6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html + msg + "</pre></p>"))
else:
print(msg)
return
###Output
_____no_output_____
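###Markdown
Since these routines populate the global sqlcode, sqlstate and sqlerror variables, a typical pattern is to run a statement quietly and then inspect them; the table name is hypothetical and the values in the comments are only examples of what Db2 might return.
```
%sql -q DROP TABLE TABLE_THAT_DOES_NOT_EXIST
print(sqlcode)    # e.g. -204
print(sqlstate)   # e.g. 42704
print(sqlerror)   # full message text returned by Db2
```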
###Markdown
Macro ProcessorA macro is used to generate SQL to be executed by overriding or creating a new keyword. For instance, the base `%sql` command does not understand the `LIST TABLES` command which is usually used in conjunction with the `CLP` processor. Rather than specifically code this in the base `db2.ipynb` file, we can create a macro that can execute this code for us.There are four routines that deal with macros. - checkMacro is used to find the macro calls in a string. All macros are sent to parseMacro for checking.- runMacro will evaluate the macro and return the string to the parser- subvars is used to track the variables used as part of a macro call.- setMacro is used to catalog a macro Set MacroThis code will catalog a macro call.
###Code
def setMacro(inSQL,parms):
global _macros
names = parms.split()
if (len(names) < 2):
errormsg("No command name supplied.")
return None
macroName = names[1].upper()
_macros[macroName] = inSQL
return
###Output
_____no_output_____
###Markdown
Check MacroThis code will check to see if there is a macro command in the SQL. It will take the SQL that is supplied and strip out three values: the first and second keywords, and the remainder of the parameters.For instance, consider the following statement:```CREATE DATABASE GEORGE options....```The name of the macro that we want to run is called `CREATE`. We know that there is a SQL command called `CREATE` but this code will call the macro first to see if needs to run any special code. For instance, `CREATE DATABASE` is not part of the `db2.ipynb` syntax, but we can add it in by using a macro.The check macro logic will strip out the subcommand (`DATABASE`) and place the remainder of the string after `DATABASE` in options.
###Code
def checkMacro(in_sql):
global _macros
if (len(in_sql) == 0): return(in_sql) # Nothing to do
tokens = parseArgs(in_sql,None) # Take the string and reduce into tokens
macro_name = tokens[0].upper() # Uppercase the name of the token
if (macro_name not in _macros):
return(in_sql) # No macro by this name so just return the string
result = runMacro(_macros[macro_name],in_sql,tokens) # Execute the macro using the tokens we found
return(result) # Runmacro will either return the original SQL or the new one
###Output
_____no_output_____
###Markdown
Split AssignmentThis routine will return the name of a variable and its value when the format is x=y. If y is enclosed in quotes, the quotes are removed.
###Code
def splitassign(arg):
var_name = "null"
var_value = "null"
arg = arg.strip()
eq = arg.find("=")
if (eq != -1):
var_name = arg[:eq].strip()
temp_value = arg[eq+1:].strip()
if (temp_value != ""):
ch = temp_value[0]
if (ch in ["'",'"']):
if (temp_value[-1:] == ch):
var_value = temp_value[1:-1]
else:
var_value = temp_value
else:
var_value = temp_value
else:
var_value = arg
return var_name, var_value
###Output
_____no_output_____
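###Markdown
The following cell is a small usage sketch that is not part of the extension itself; the sample strings are made up purely to illustrate the values that splitassign is expected to return.
###Code
# Illustrative calls to splitassign (assumes the cell above has been run)
print(splitassign("name='George'"))  # ('name', 'George') - surrounding quotes are stripped from the value
print(splitassign("count=10"))       # ('count', '10')    - values are always returned as strings
print(splitassign("standalone"))     # ('null', 'standalone') - no "=" so the whole string becomes the value
###Output
_____no_output_____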
###Markdown
Parse Args The commands that are used in the macros need to be parsed into their separate tokens. The tokens are separated by blanks and strings that are enclosed in quotes are kept together.
###Code
def parseArgs(argin,_vars):
quoteChar = ""
inQuote = False
inArg = True
args = []
arg = ''
for ch in argin.lstrip():
if (inQuote == True):
if (ch == quoteChar):
inQuote = False
                arg = arg + ch # Keep the closing quote as part of the argument
else:
arg = arg + ch
elif (ch == "\"" or ch == "\'"): # Do we have a quote
quoteChar = ch
            arg = arg + ch # Keep the opening quote as part of the argument
inQuote = True
elif (ch == " "):
if (arg != ""):
arg = subvars(arg,_vars)
args.append(arg)
else:
args.append("null")
arg = ""
else:
arg = arg + ch
if (arg != ""):
arg = subvars(arg,_vars)
args.append(arg)
return(args)
###Output
_____no_output_____
###Markdown
Run MacroThis code will execute the body of the macro and return the results for that macro call.
###Code
def runMacro(script,in_sql,tokens):
result = ""
runIT = True
code = script.split("\n")
level = 0
runlevel = [True,False,False,False,False,False,False,False,False,False]
ifcount = 0
_vars = {}
for i in range(0,len(tokens)):
vstr = str(i)
_vars[vstr] = tokens[i]
if (len(tokens) == 0):
_vars["argc"] = "0"
else:
_vars["argc"] = str(len(tokens)-1)
for line in code:
line = line.strip()
if (line == "" or line == "\n"): continue
if (line[0] == "#"): continue # A comment line starts with a # in the first position of the line
args = parseArgs(line,_vars) # Get all of the arguments
if (args[0] == "if"):
ifcount = ifcount + 1
if (runlevel[level] == False): # You can't execute this statement
continue
level = level + 1
if (len(args) < 4):
print("Macro: Incorrect number of arguments for the if clause.")
                return in_sql
arg1 = args[1]
arg2 = args[3]
if (len(arg2) > 2):
ch1 = arg2[0]
ch2 = arg2[-1:]
if (ch1 in ['"',"'"] and ch1 == ch2):
arg2 = arg2[1:-1].strip()
op = args[2]
if (op in ["=","=="]):
if (arg1 == arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<=","=<"]):
if (arg1 <= arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in [">=","=>"]):
if (arg1 >= arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<>","!="]):
if (arg1 != arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<"]):
if (arg1 < arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in [">"]):
if (arg1 > arg2):
runlevel[level] = True
else:
runlevel[level] = False
else:
print("Macro: Unknown comparison operator in the if statement:" + op)
continue
elif (args[0] in ["exit","echo"] and runlevel[level] == True):
msg = ""
for msgline in args[1:]:
if (msg == ""):
msg = subvars(msgline,_vars)
else:
msg = msg + " " + subvars(msgline,_vars)
if (msg != ""):
if (args[0] == "echo"):
debug(msg,error=False)
else:
debug(msg,error=True)
if (args[0] == "exit"): return ''
elif (args[0] == "pass" and runlevel[level] == True):
pass
elif (args[0] == "var" and runlevel[level] == True):
value = ""
for val in args[2:]:
if (value == ""):
value = subvars(val,_vars)
else:
value = value + " " + subvars(val,_vars)
            value = value.strip()
_vars[args[1]] = value
elif (args[0] == 'else'):
if (ifcount == level):
runlevel[level] = not runlevel[level]
elif (args[0] == 'return' and runlevel[level] == True):
return(result)
elif (args[0] == "endif"):
ifcount = ifcount - 1
if (ifcount < level):
level = level - 1
if (level < 0):
print("Macro: Unmatched if/endif pairs.")
return ''
else:
if (runlevel[level] == True):
if (result == ""):
result = subvars(line,_vars)
else:
result = result + "\n" + subvars(line,_vars)
return(result)
###Output
_____no_output_____
###Markdown
Substitute VarsThis routine is used by the runMacro program to substitute the values of variables that are used within Macros. These variables are kept separate from the rest of the code.
###Code
def subvars(script,_vars):
if (_vars == None): return script
remainder = script
result = ""
done = False
while done == False:
bv = remainder.find("{")
if (bv == -1):
done = True
continue
ev = remainder.find("}")
if (ev == -1):
done = True
continue
result = result + remainder[:bv]
vvar = remainder[bv+1:ev]
remainder = remainder[ev+1:]
upper = False
allvars = False
if (vvar[0] == "^"):
upper = True
vvar = vvar[1:]
elif (vvar[0] == "*"):
vvar = vvar[1:]
allvars = True
else:
pass
if (vvar in _vars):
if (upper == True):
items = _vars[vvar].upper()
elif (allvars == True):
try:
iVar = int(vvar)
except:
return(script)
items = ""
sVar = str(iVar)
while sVar in _vars:
if (items == ""):
items = _vars[sVar]
else:
items = items + " " + _vars[sVar]
iVar = iVar + 1
sVar = str(iVar)
else:
items = _vars[vvar]
else:
if (allvars == True):
items = ""
else:
items = "null"
result = result + items
if (remainder != ""):
result = result + remainder
return(result)
###Output
_____no_output_____
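###Markdown
The next cell is an illustrative sketch, not part of the extension: it shows how parseArgs (defined earlier) breaks a command into tokens and how subvars then substitutes the {n}, {^n} and {*n} markers. The command string and the macro_vars dictionary are made-up examples.
###Code
# parseArgs keeps quoted strings together (including their quotes) and splits the rest on blanks
tokens = parseArgs("list tables for schema 'my schema'", None)
print(tokens)  # ['list', 'tables', 'for', 'schema', "'my schema'"]

# Build the same style of token dictionary that runMacro creates before calling subvars
macro_vars = {str(i): tokens[i] for i in range(len(tokens))}
print(subvars("Macro {^0} called with first argument {1}", macro_vars))  # Macro LIST called with first argument tables
print(subvars("All arguments: {*1}", macro_vars))                        # All arguments: tables for schema 'my schema'
###Output
_____no_output_____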
###Markdown
SQL TimerThe calling format of this routine is:```count = sqlTimer(hdbc, runtime, inSQL)```This code runs the SQL string multiple times for one second (by default). The accuracy of the clock is not that great when you are running just one statement, so instead this routine will run the code multiple times for a second to give you an execution count. If you need to run the code for more than one second, the runtime value needs to be set to the number of seconds you want the code to run.The return result is always the number of times that the code executed. Note that the program will skip reading the data if it is a SELECT statement so it doesn't include fetch time for the answer set.
###Code
def sqlTimer(hdbc, runtime, inSQL):
count = 0
t_end = time.time() + runtime
while time.time() < t_end:
try:
stmt = ibm_db.exec_immediate(hdbc,inSQL)
if (stmt == False):
db2_error(flag(["-q","-quiet"]))
return(-1)
ibm_db.free_result(stmt)
except Exception as err:
db2_error(False)
return(-1)
count = count + 1
return(count)
###Output
_____no_output_____
###Markdown
Split ArgsThis routine takes as an argument a string and then splits the arguments according to the following logic:* If the string starts with a `(` character, it will check the last character in the string and see if it is a `)` and then remove those characters* Every parameter is separated by a comma `,` and commas within quotes are ignored* Each parameter returned will have three values returned - one for the value itself, an indicator which will be either True if it was quoted, or False if not, and True or False if it is numeric.Example:``` "abcdef",abcdef,456,"856"```Four sets of values would be returned:```[abcdef,True,False],[abcdef,False,False],[456,False,True],[856,True,False]```Any quoted string will be False for numeric. The way that the parameters are handled is up to the calling program. However, in the case of Db2, the quoted strings must be in single quotes so any quoted parameter using the double quotes `"` must be wrapped with single quotes. There is always a possibility that a string contains single quotes (i.e. O'Connor) so any substituted text should use `''` so that Db2 can properly interpret the string. This routine does not adjust the strings with quotes, and depends on the variable substitution routine to do that.
###Code
def splitargs(arguments):
import types
    # Strip the string and remove the ( and ) characters if they are at the beginning and end of the string
results = []
step1 = arguments.strip()
if (len(step1) == 0): return(results) # Not much to do here - no args found
if (step1[0] == '('):
if (step1[-1:] == ')'):
step2 = step1[1:-1]
step2 = step2.strip()
else:
step2 = step1
else:
step2 = step1
# Now we have a string without brackets. Start scanning for commas
quoteCH = ""
pos = 0
arg = ""
args = []
while pos < len(step2):
ch = step2[pos]
if (quoteCH == ""): # Are we in a quote?
if (ch in ('"',"'")): # Check to see if we are starting a quote
quoteCH = ch
arg = arg + ch
pos += 1
elif (ch == ","): # Are we at the end of a parameter?
arg = arg.strip()
args.append(arg)
arg = ""
inarg = False
pos += 1
else: # Continue collecting the string
arg = arg + ch
pos += 1
else:
if (ch == quoteCH): # Are we at the end of a quote?
arg = arg + ch # Add the quote to the string
pos += 1 # Increment past the quote
quoteCH = "" # Stop quote checking (maybe!)
else:
pos += 1
arg = arg + ch
if (quoteCH != ""): # So we didn't end our string
arg = arg.strip()
args.append(arg)
elif (arg != ""): # Something left over as an argument
arg = arg.strip()
args.append(arg)
else:
pass
results = []
for arg in args:
result = []
if (len(arg) > 0):
if (arg[0] in ('"',"'")):
value = arg[1:-1]
isString = True
isNumber = False
else:
isString = False
isNumber = False
try:
value = eval(arg)
if (type(value) == int):
isNumber = True
elif (isinstance(value,float) == True):
isNumber = True
else:
value = arg
except:
value = arg
else:
value = ""
isString = False
isNumber = False
result = [value,isString,isNumber]
results.append(result)
return results
###Output
_____no_output_____
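###Markdown
A short sketch (not part of the extension) showing the [value, quoted, numeric] triplets that splitargs returns; the parameter string below is invented for illustration.
###Code
# The outer brackets are stripped, quoted strings keep their embedded commas, and numbers are converted
print(splitargs('("Hello, World", 42, abc)'))
# Expected: [['Hello, World', True, False], [42, False, True], ['abc', False, False]]
###Output
_____no_output_____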
###Markdown
SQL ParserThe calling format of this routine is:```sql_cmd, encoded_sql = sqlParser(sql_input, local_ns)```This code will look at the SQL string that has been passed to it and parse it into two values:- sql_cmd: First command in the list (so this may not be the actual SQL command)- encoded_sql: The SQL with any :var parameter references replaced by the contents of the corresponding Python variables
###Code
def sqlParser(sqlin,local_ns):
sql_cmd = ""
encoded_sql = sqlin
    firstCommand = r"(?:^\s*)([a-zA-Z]+)(?:\s+.*|$)"
findFirst = re.match(firstCommand,sqlin)
if (findFirst == None): # We did not find a match so we just return the empty string
return sql_cmd, encoded_sql
cmd = findFirst.group(1)
sql_cmd = cmd.upper()
#
# Scan the input string looking for variables in the format :var. If no : is found just return.
# Var must be alpha+number+_ to be valid
#
if (':' not in sqlin): # A quick check to see if parameters are in here, but not fool-proof!
return sql_cmd, encoded_sql
inVar = False
inQuote = ""
varName = ""
encoded_sql = ""
STRING = 0
NUMBER = 1
LIST = 2
RAW = 3
for ch in sqlin:
if (inVar == True): # We are collecting the name of a variable
if (ch.upper() in "@_ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789[]"):
varName = varName + ch
continue
else:
if (varName == ""):
                    encoded_sql = encoded_sql + ":"
elif (varName[0] in ('[',']')):
encoded_sql = encoded_sql + ":" + varName
else:
if (ch == '.'): # If the variable name is stopped by a period, assume no quotes are used
flag_quotes = False
else:
flag_quotes = True
varValue, varType = getContents(varName,flag_quotes,local_ns)
if (varValue == None):
encoded_sql = encoded_sql + ":" + varName
else:
if (varType == STRING):
encoded_sql = encoded_sql + varValue
elif (varType == NUMBER):
encoded_sql = encoded_sql + str(varValue)
elif (varType == RAW):
encoded_sql = encoded_sql + varValue
elif (varType == LIST):
start = True
for v in varValue:
if (start == False):
encoded_sql = encoded_sql + ","
if (isinstance(v,int) == True): # Integer value
encoded_sql = encoded_sql + str(v)
elif (isinstance(v,float) == True):
encoded_sql = encoded_sql + str(v)
else:
flag_quotes = True
try:
if (v.find('0x') == 0): # Just guessing this is a hex value at beginning
encoded_sql = encoded_sql + v
else:
encoded_sql = encoded_sql + addquotes(v,flag_quotes) # String
except:
encoded_sql = encoded_sql + addquotes(str(v),flag_quotes)
start = False
encoded_sql = encoded_sql + ch
varName = ""
inVar = False
elif (inQuote != ""):
encoded_sql = encoded_sql + ch
if (ch == inQuote): inQuote = ""
elif (ch in ("'",'"')):
encoded_sql = encoded_sql + ch
inQuote = ch
elif (ch == ":"): # This might be a variable
varName = ""
inVar = True
else:
encoded_sql = encoded_sql + ch
if (inVar == True):
varValue, varType = getContents(varName,True,local_ns) # We assume the end of a line is quoted
if (varValue == None):
encoded_sql = encoded_sql + ":" + varName
else:
if (varType == STRING):
encoded_sql = encoded_sql + varValue
elif (varType == NUMBER):
encoded_sql = encoded_sql + str(varValue)
elif (varType == LIST):
flag_quotes = True
start = True
for v in varValue:
if (start == False):
encoded_sql = encoded_sql + ","
if (isinstance(v,int) == True): # Integer value
encoded_sql = encoded_sql + str(v)
elif (isinstance(v,float) == True):
encoded_sql = encoded_sql + str(v)
else:
try:
if (v.find('0x') == 0): # Just guessing this is a hex value
encoded_sql = encoded_sql + v
else:
encoded_sql = encoded_sql + addquotes(v,flag_quotes) # String
except:
encoded_sql = encoded_sql + addquotes(str(v),flag_quotes)
start = False
return sql_cmd, encoded_sql
###Output
_____no_output_____
###Markdown
Variable Contents FunctionThe calling format of this routine is:```value, var_type = getContents(varName, flag_quotes, local_ns)```This code will take the name of a variable as input and return the contents of that variable. If the variable is not found then the program will return None which is the equivalent of empty or null. Note that this function looks at the global variable pool for Python so it is possible that the wrong version of the variable is returned if it is used in different functions. For this reason, any variables used in SQL statements should use a unique naming convention if possible.The other thing that this function does is replace single quotes with two quotes. The reason for doing this is that Db2 will convert two single quotes into one quote when dealing with strings. This avoids problems when dealing with text that contains multiple quotes within the string. Note that this substitution is done only for single quote characters since the double quote character is used by Db2 for naming columns that are case sensitive or contain special characters.If the flag_quotes value is True, the field will have quotes around it. The local_ns parameter contains the variables that are currently registered in Python.
###Code
def getContents(varName,flag_quotes,local_ns):
#
# Get the contents of the variable name that is passed to the routine. Only simple
# variables are checked, i.e. arrays and lists are not parsed
#
STRING = 0
NUMBER = 1
LIST = 2
RAW = 3
DICT = 4
try:
value = eval(varName,None,local_ns) # globals()[varName] # eval(varName)
except:
return(None,STRING)
if (isinstance(value,dict) == True): # Check to see if this is JSON dictionary
return(addquotes(value,flag_quotes),STRING)
elif(isinstance(value,list) == True): # List - tricky
return(value,LIST)
elif (isinstance(value,int) == True): # Integer value
return(value,NUMBER)
elif (isinstance(value,float) == True): # Float value
return(value,NUMBER)
else:
try:
# The pattern needs to be in the first position (0 in Python terms)
if (value.find('0x') == 0): # Just guessing this is a hex value
return(value,RAW)
else:
return(addquotes(value,flag_quotes),STRING) # String
except:
return(addquotes(str(value),flag_quotes),RAW)
###Output
_____no_output_____
###Markdown
Add QuotesQuotes are a challenge when dealing with dictionaries and Db2. Db2 wants strings delimited with single quotes, while dictionaries use double quotes. That wouldn't be a problem except that embedded single quotes within these dictionaries will cause things to fail. This routine attempts to double up the single quotes within the dictionary.
###Code
def addquotes(inString,flag_quotes):
if (isinstance(inString,dict) == True): # Check to see if this is JSON dictionary
serialized = json.dumps(inString)
else:
serialized = inString
# Replace single quotes with '' (two quotes) and wrap everything in single quotes
if (flag_quotes == False):
return(serialized)
else:
return("'"+serialized.replace("'","''")+"'") # Convert single quotes to two single quotes
###Output
_____no_output_____
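###Markdown
The cell below is an illustrative sketch rather than part of the extension: it exercises addquotes, getContents and the sqlParser routine defined earlier using made-up variable names (lastname, empno). The json and re imports are included only in case the main import cell of this notebook has not been run yet.
###Code
import json, re  # defensive imports; normally loaded by the notebook's setup cell

lastname = "O'Connor"
print(addquotes(lastname, True))                # 'O''Connor' - single quotes doubled and the string wrapped in quotes
print(addquotes({"name": "George"}, True))      # '{"name": "George"}' - dictionaries are serialized to JSON first
print(getContents("lastname", True, locals()))  # ("'O''Connor'", 0) - the quoted value plus its type code (0 = string)

empno = '000010'
print(sqlParser("SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO = :empno", locals()))
# Expected: ('SELECT', "SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO = '000010'")
###Output
_____no_output_____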
###Markdown
Create the SAMPLE Database TablesThe calling format of this routine is:```db2_create_sample(quiet)```There are a lot of examples that depend on the data within the SAMPLE database. If you are running these examples and the connection is not to the SAMPLE database, then this code will create the two (EMPLOYEE, DEPARTMENT) tables that are used by most examples. If the function finds that these tables already exist, then nothing is done. If the tables are missing then they will be created with the same data as in the SAMPLE database.The quiet flag tells the program not to print any messages when the creation of the tables is complete.
###Code
def db2_create_sample(quiet):
create_department = """
BEGIN
DECLARE FOUND INTEGER;
SET FOUND = (SELECT COUNT(*) FROM SYSIBM.SYSTABLES WHERE NAME='DEPARTMENT' AND CREATOR=CURRENT USER);
IF FOUND = 0 THEN
EXECUTE IMMEDIATE('CREATE TABLE DEPARTMENT(DEPTNO CHAR(3) NOT NULL, DEPTNAME VARCHAR(36) NOT NULL,
MGRNO CHAR(6),ADMRDEPT CHAR(3) NOT NULL)');
EXECUTE IMMEDIATE('INSERT INTO DEPARTMENT VALUES
(''A00'',''SPIFFY COMPUTER SERVICE DIV.'',''000010'',''A00''),
(''B01'',''PLANNING'',''000020'',''A00''),
(''C01'',''INFORMATION CENTER'',''000030'',''A00''),
(''D01'',''DEVELOPMENT CENTER'',NULL,''A00''),
(''D11'',''MANUFACTURING SYSTEMS'',''000060'',''D01''),
(''D21'',''ADMINISTRATION SYSTEMS'',''000070'',''D01''),
(''E01'',''SUPPORT SERVICES'',''000050'',''A00''),
(''E11'',''OPERATIONS'',''000090'',''E01''),
(''E21'',''SOFTWARE SUPPORT'',''000100'',''E01''),
(''F22'',''BRANCH OFFICE F2'',NULL,''E01''),
(''G22'',''BRANCH OFFICE G2'',NULL,''E01''),
(''H22'',''BRANCH OFFICE H2'',NULL,''E01''),
(''I22'',''BRANCH OFFICE I2'',NULL,''E01''),
(''J22'',''BRANCH OFFICE J2'',NULL,''E01'')');
END IF;
END"""
%sql -d -q {create_department}
create_employee = """
BEGIN
DECLARE FOUND INTEGER;
SET FOUND = (SELECT COUNT(*) FROM SYSIBM.SYSTABLES WHERE NAME='EMPLOYEE' AND CREATOR=CURRENT USER);
IF FOUND = 0 THEN
EXECUTE IMMEDIATE('CREATE TABLE EMPLOYEE(
EMPNO CHAR(6) NOT NULL,
FIRSTNME VARCHAR(12) NOT NULL,
MIDINIT CHAR(1),
LASTNAME VARCHAR(15) NOT NULL,
WORKDEPT CHAR(3),
PHONENO CHAR(4),
HIREDATE DATE,
JOB CHAR(8),
EDLEVEL SMALLINT NOT NULL,
SEX CHAR(1),
BIRTHDATE DATE,
SALARY DECIMAL(9,2),
BONUS DECIMAL(9,2),
COMM DECIMAL(9,2)
)');
EXECUTE IMMEDIATE('INSERT INTO EMPLOYEE VALUES
(''000010'',''CHRISTINE'',''I'',''HAAS'' ,''A00'',''3978'',''1995-01-01'',''PRES '',18,''F'',''1963-08-24'',152750.00,1000.00,4220.00),
(''000020'',''MICHAEL'' ,''L'',''THOMPSON'' ,''B01'',''3476'',''2003-10-10'',''MANAGER '',18,''M'',''1978-02-02'',94250.00,800.00,3300.00),
(''000030'',''SALLY'' ,''A'',''KWAN'' ,''C01'',''4738'',''2005-04-05'',''MANAGER '',20,''F'',''1971-05-11'',98250.00,800.00,3060.00),
(''000050'',''JOHN'' ,''B'',''GEYER'' ,''E01'',''6789'',''1979-08-17'',''MANAGER '',16,''M'',''1955-09-15'',80175.00,800.00,3214.00),
(''000060'',''IRVING'' ,''F'',''STERN'' ,''D11'',''6423'',''2003-09-14'',''MANAGER '',16,''M'',''1975-07-07'',72250.00,500.00,2580.00),
(''000070'',''EVA'' ,''D'',''PULASKI'' ,''D21'',''7831'',''2005-09-30'',''MANAGER '',16,''F'',''2003-05-26'',96170.00,700.00,2893.00),
(''000090'',''EILEEN'' ,''W'',''HENDERSON'' ,''E11'',''5498'',''2000-08-15'',''MANAGER '',16,''F'',''1971-05-15'',89750.00,600.00,2380.00),
(''000100'',''THEODORE'' ,''Q'',''SPENSER'' ,''E21'',''0972'',''2000-06-19'',''MANAGER '',14,''M'',''1980-12-18'',86150.00,500.00,2092.00),
(''000110'',''VINCENZO'' ,''G'',''LUCCHESSI'' ,''A00'',''3490'',''1988-05-16'',''SALESREP'',19,''M'',''1959-11-05'',66500.00,900.00,3720.00),
(''000120'',''SEAN'' ,'' '',''O`CONNELL'' ,''A00'',''2167'',''1993-12-05'',''CLERK '',14,''M'',''1972-10-18'',49250.00,600.00,2340.00),
(''000130'',''DELORES'' ,''M'',''QUINTANA'' ,''C01'',''4578'',''2001-07-28'',''ANALYST '',16,''F'',''1955-09-15'',73800.00,500.00,1904.00),
(''000140'',''HEATHER'' ,''A'',''NICHOLLS'' ,''C01'',''1793'',''2006-12-15'',''ANALYST '',18,''F'',''1976-01-19'',68420.00,600.00,2274.00),
(''000150'',''BRUCE'' ,'' '',''ADAMSON'' ,''D11'',''4510'',''2002-02-12'',''DESIGNER'',16,''M'',''1977-05-17'',55280.00,500.00,2022.00),
(''000160'',''ELIZABETH'',''R'',''PIANKA'' ,''D11'',''3782'',''2006-10-11'',''DESIGNER'',17,''F'',''1980-04-12'',62250.00,400.00,1780.00),
(''000170'',''MASATOSHI'',''J'',''YOSHIMURA'' ,''D11'',''2890'',''1999-09-15'',''DESIGNER'',16,''M'',''1981-01-05'',44680.00,500.00,1974.00),
(''000180'',''MARILYN'' ,''S'',''SCOUTTEN'' ,''D11'',''1682'',''2003-07-07'',''DESIGNER'',17,''F'',''1979-02-21'',51340.00,500.00,1707.00),
(''000190'',''JAMES'' ,''H'',''WALKER'' ,''D11'',''2986'',''2004-07-26'',''DESIGNER'',16,''M'',''1982-06-25'',50450.00,400.00,1636.00),
(''000200'',''DAVID'' ,'' '',''BROWN'' ,''D11'',''4501'',''2002-03-03'',''DESIGNER'',16,''M'',''1971-05-29'',57740.00,600.00,2217.00),
(''000210'',''WILLIAM'' ,''T'',''JONES'' ,''D11'',''0942'',''1998-04-11'',''DESIGNER'',17,''M'',''2003-02-23'',68270.00,400.00,1462.00),
(''000220'',''JENNIFER'' ,''K'',''LUTZ'' ,''D11'',''0672'',''1998-08-29'',''DESIGNER'',18,''F'',''1978-03-19'',49840.00,600.00,2387.00),
(''000230'',''JAMES'' ,''J'',''JEFFERSON'' ,''D21'',''2094'',''1996-11-21'',''CLERK '',14,''M'',''1980-05-30'',42180.00,400.00,1774.00),
(''000240'',''SALVATORE'',''M'',''MARINO'' ,''D21'',''3780'',''2004-12-05'',''CLERK '',17,''M'',''2002-03-31'',48760.00,600.00,2301.00),
(''000250'',''DANIEL'' ,''S'',''SMITH'' ,''D21'',''0961'',''1999-10-30'',''CLERK '',15,''M'',''1969-11-12'',49180.00,400.00,1534.00),
(''000260'',''SYBIL'' ,''P'',''JOHNSON'' ,''D21'',''8953'',''2005-09-11'',''CLERK '',16,''F'',''1976-10-05'',47250.00,300.00,1380.00),
(''000270'',''MARIA'' ,''L'',''PEREZ'' ,''D21'',''9001'',''2006-09-30'',''CLERK '',15,''F'',''2003-05-26'',37380.00,500.00,2190.00),
(''000280'',''ETHEL'' ,''R'',''SCHNEIDER'' ,''E11'',''8997'',''1997-03-24'',''OPERATOR'',17,''F'',''1976-03-28'',36250.00,500.00,2100.00),
(''000290'',''JOHN'' ,''R'',''PARKER'' ,''E11'',''4502'',''2006-05-30'',''OPERATOR'',12,''M'',''1985-07-09'',35340.00,300.00,1227.00),
(''000300'',''PHILIP'' ,''X'',''SMITH'' ,''E11'',''2095'',''2002-06-19'',''OPERATOR'',14,''M'',''1976-10-27'',37750.00,400.00,1420.00),
(''000310'',''MAUDE'' ,''F'',''SETRIGHT'' ,''E11'',''3332'',''1994-09-12'',''OPERATOR'',12,''F'',''1961-04-21'',35900.00,300.00,1272.00),
(''000320'',''RAMLAL'' ,''V'',''MEHTA'' ,''E21'',''9990'',''1995-07-07'',''FIELDREP'',16,''M'',''1962-08-11'',39950.00,400.00,1596.00),
(''000330'',''WING'' ,'' '',''LEE'' ,''E21'',''2103'',''2006-02-23'',''FIELDREP'',14,''M'',''1971-07-18'',45370.00,500.00,2030.00),
(''000340'',''JASON'' ,''R'',''GOUNOT'' ,''E21'',''5698'',''1977-05-05'',''FIELDREP'',16,''M'',''1956-05-17'',43840.00,500.00,1907.00),
(''200010'',''DIAN'' ,''J'',''HEMMINGER'' ,''A00'',''3978'',''1995-01-01'',''SALESREP'',18,''F'',''1973-08-14'',46500.00,1000.00,4220.00),
(''200120'',''GREG'' ,'' '',''ORLANDO'' ,''A00'',''2167'',''2002-05-05'',''CLERK '',14,''M'',''1972-10-18'',39250.00,600.00,2340.00),
(''200140'',''KIM'' ,''N'',''NATZ'' ,''C01'',''1793'',''2006-12-15'',''ANALYST '',18,''F'',''1976-01-19'',68420.00,600.00,2274.00),
(''200170'',''KIYOSHI'' ,'' '',''YAMAMOTO'' ,''D11'',''2890'',''2005-09-15'',''DESIGNER'',16,''M'',''1981-01-05'',64680.00,500.00,1974.00),
(''200220'',''REBA'' ,''K'',''JOHN'' ,''D11'',''0672'',''2005-08-29'',''DESIGNER'',18,''F'',''1978-03-19'',69840.00,600.00,2387.00),
(''200240'',''ROBERT'' ,''M'',''MONTEVERDE'',''D21'',''3780'',''2004-12-05'',''CLERK '',17,''M'',''1984-03-31'',37760.00,600.00,2301.00),
(''200280'',''EILEEN'' ,''R'',''SCHWARTZ'' ,''E11'',''8997'',''1997-03-24'',''OPERATOR'',17,''F'',''1966-03-28'',46250.00,500.00,2100.00),
(''200310'',''MICHELLE'' ,''F'',''SPRINGER'' ,''E11'',''3332'',''1994-09-12'',''OPERATOR'',12,''F'',''1961-04-21'',35900.00,300.00,1272.00),
(''200330'',''HELENA'' ,'' '',''WONG'' ,''E21'',''2103'',''2006-02-23'',''FIELDREP'',14,''F'',''1971-07-18'',35370.00,500.00,2030.00),
(''200340'',''ROY'' ,''R'',''ALONZO'' ,''E21'',''5698'',''1997-07-05'',''FIELDREP'',16,''M'',''1956-05-17'',31840.00,500.00,1907.00)');
END IF;
END"""
%sql -d -q {create_employee}
if (quiet == False): success("Sample tables [EMPLOYEE, DEPARTMENT] created.")
###Output
_____no_output_____
###Markdown
Check optionThis function will return the original string with the option removed, and a flag of True or False indicating whether the option was found.```args, flag = checkOption(option_string, option, false_value, true_value)```Options are specified with a -x where x is the character that we are searching for. It may actually be more than one character long like -pb/-pie/etc... The false and true values are optional. By default these are the boolean values of True/False but for some options they could be a character string like ';' versus '@' for delimiters.
###Code
def checkOption(args_in, option, vFalse=False, vTrue=True):
args_out = args_in.strip()
found = vFalse
if (args_out != ""):
if (args_out.find(option) >= 0):
args_out = args_out.replace(option," ")
args_out = args_out.strip()
found = vTrue
return args_out, found
###Output
_____no_output_____
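###Markdown
A brief usage sketch (not part of the extension) showing how checkOption strips an option from a command string; the SQL text is made up.
###Code
# Default True/False flag values
print(checkOption("-q SELECT * FROM EMPLOYEE", "-q"))  # ('SELECT * FROM EMPLOYEE', True)
print(checkOption("SELECT * FROM EMPLOYEE", "-q"))     # ('SELECT * FROM EMPLOYEE', False)
# Custom false/true values, e.g. switching the statement delimiter
print(checkOption("-d CREATE TRIGGER XYZ", "-d", ";", "@"))  # ('CREATE TRIGGER XYZ', '@')
###Output
_____no_output_____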
###Markdown
Plot DataThis function will plot the data that is returned from the answer set. The type of chart is determined by the flag that was supplied on the %sql command: `-pb`/`-bar` for a bar chart, `-pp`/`-pie` for a pie chart, and `-pl`/`-line` for a line chart.```plotData(hdbi, sql)```The hdbi is the ibm_db_dbi handle that is used by pandas dataframes to run the sql.
###Code
def plotData(hdbi, sql):
try:
df = pandas.read_sql(sql,hdbi)
except Exception as err:
db2_error(False)
return
if df.empty:
errormsg("No results returned")
return
col_count = len(df.columns)
if flag(["-pb","-bar"]): # Plot 1 = bar chart
if (col_count in (1,2,3)):
if (col_count == 1):
df.index = df.index + 1
_ = df.plot(kind='bar');
_ = plt.plot();
elif (col_count == 2):
xlabel = df.columns.values[0]
ylabel = df.columns.values[1]
df.plot(kind='bar',x=xlabel,y=ylabel);
_ = plt.plot();
else:
values = df.columns.values[2]
columns = df.columns.values[0]
index = df.columns.values[1]
pivoted = pandas.pivot_table(df, values=values, columns=columns, index=index)
_ = pivoted.plot.bar();
else:
errormsg("Can't determine what columns to plot")
return
elif flag(["-pp","-pie"]): # Plot 2 = pie chart
if (col_count in (1,2)):
if (col_count == 1):
df.index = df.index + 1
yname = df.columns.values[0]
_ = df.plot(kind='pie',y=yname);
else:
xlabel = df.columns.values[0]
xname = df[xlabel].tolist()
yname = df.columns.values[1]
_ = df.plot(kind='pie',y=yname,labels=xname);
plt.show();
else:
errormsg("Can't determine what columns to plot")
return
elif flag(["-pl","-line"]): # Plot 3 = line chart
if (col_count in (1,2,3)):
if (col_count == 1):
df.index = df.index + 1
_ = df.plot(kind='line');
elif (col_count == 2):
xlabel = df.columns.values[0]
ylabel = df.columns.values[1]
_ = df.plot(kind='line',x=xlabel,y=ylabel) ;
else:
values = df.columns.values[2]
columns = df.columns.values[0]
index = df.columns.values[1]
pivoted = pandas.pivot_table(df, values=values, columns=columns, index=index)
_ = pivoted.plot();
plt.show();
else:
errormsg("Can't determine what columns to plot")
return
else:
return
###Output
_____no_output_____
###Markdown
Find a ProcedureThis routine will check to see if a procedure exists with the SCHEMA/NAME (or just NAME if no schema is supplied) and returns the number of answer sets returned. Possible values are 0, 1, or None. If None is returned then we can't find the procedure anywhere.
###Code
def findProc(procname):
global _hdbc, _hdbi, _connected, _runtime
# Split the procedure name into schema.procname if appropriate
upper_procname = procname.upper()
schema, proc = split_string(upper_procname,".") # Expect schema.procname
if (proc == None):
proc = schema
# Call ibm_db.procedures to see if the procedure does exist
schema = "%"
try:
stmt = ibm_db.procedures(_hdbc, None, schema, proc)
if (stmt == False): # Error executing the code
errormsg("Procedure " + procname + " not found in the system catalog.")
return None
result = ibm_db.fetch_tuple(stmt)
resultsets = result[5]
if (resultsets >= 1): resultsets = 1
return resultsets
except Exception as err:
errormsg("Procedure " + procname + " not found in the system catalog.")
return None
###Output
_____no_output_____
###Markdown
Parse Call ArgumentsThis code will parse a SQL call name(parm1,...) and return the name and the parameters in the call.
###Code
def parseCallArgs(macro):
quoteChar = ""
inQuote = False
inParm = False
ignore = False
name = ""
parms = []
parm = ''
sqlin = macro.replace("\n","")
    sqlin = sqlin.lstrip()
for ch in sqlin:
if (inParm == False):
# We hit a blank in the name, so ignore everything after the procedure name until a ( is found
if (ch == " "):
                ignore = True
elif (ch == "("): # Now we have parameters to send to the stored procedure
inParm = True
else:
if (ignore == False): name = name + ch # The name of the procedure (and no blanks)
else:
if (inQuote == True):
if (ch == quoteChar):
inQuote = False
else:
parm = parm + ch
elif (ch in ("\"","\'","[")): # Do we have a quote
if (ch == "["):
quoteChar = "]"
else:
quoteChar = ch
inQuote = True
elif (ch == ")"):
if (parm != ""):
parms.append(parm)
parm = ""
break
elif (ch == ","):
if (parm != ""):
parms.append(parm)
else:
parms.append("null")
parm = ""
else:
parm = parm + ch
if (inParm == True):
if (parm != ""):
            parms.append(parm)
return(name,parms)
###Output
_____no_output_____
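###Markdown
The following cell is an illustrative sketch, not part of the extension, showing how parseCallArgs splits a procedure call into its name and parameter list; the procedure names are invented.
###Code
# Quoted parameters have their quotes removed; :name parameters are returned as-is for later substitution
print(parseCallArgs("ADMIN_CMD('DESCRIBE TABLE EMPLOYEE')"))  # ('ADMIN_CMD', ['DESCRIBE TABLE EMPLOYEE'])
print(parseCallArgs("SHOWEMP(:empno,3)"))                     # ('SHOWEMP', [':empno', '3'])
###Output
_____no_output_____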
###Markdown
Get ColumnsGiven a statement handle, determine what the column names and data types are.
###Code
def getColumns(stmt):
columns = []
types = []
colcount = 0
try:
colname = ibm_db.field_name(stmt,colcount)
coltype = ibm_db.field_type(stmt,colcount)
while (colname != False):
columns.append(colname)
types.append(coltype)
colcount += 1
colname = ibm_db.field_name(stmt,colcount)
coltype = ibm_db.field_type(stmt,colcount)
return columns,types
except Exception as err:
db2_error(False)
return None
###Output
_____no_output_____
###Markdown
Call a ProcedureThe CALL statement is used for execution of a stored procedure. The format of the CALL statement is:```CALL PROC_NAME(x,y,z,...)```Procedures allow for the return of answer sets (cursors) as well as changing the contents of the parameters being passed to the procedure. In this implementation, the CALL function is limited to returning one answer set (or nothing). If you want to use more complex stored procedures then you will have to use the native python libraries.
###Code
def parseCall(hdbc, inSQL, local_ns):
global _hdbc, _hdbi, _connected, _runtime, _environment
# Check to see if we are connected first
if (_connected == False): # Check if you are connected
db2_doConnect()
if _connected == False: return None
remainder = inSQL.strip()
procName, procArgs = parseCallArgs(remainder[5:]) # Assume that CALL ... is the format
resultsets = findProc(procName)
if (resultsets == None): return None
argvalues = []
if (len(procArgs) > 0): # We have arguments to consider
for arg in procArgs:
varname = arg
if (len(varname) > 0):
if (varname[0] == ":"):
checkvar = varname[1:]
                    varvalue, vartype = getContents(checkvar,True,local_ns)
if (varvalue == None):
errormsg("Variable " + checkvar + " is not defined.")
return None
argvalues.append(varvalue)
else:
if (varname.upper() == "NULL"):
argvalues.append(None)
else:
argvalues.append(varname)
else:
argvalues.append(None)
try:
if (len(procArgs) > 0):
argtuple = tuple(argvalues)
result = ibm_db.callproc(_hdbc,procName,argtuple)
stmt = result[0]
else:
result = ibm_db.callproc(_hdbc,procName)
stmt = result
if (resultsets != 0 and stmt != None):
columns, types = getColumns(stmt)
if (columns == None): return None
rows = []
rowlist = ibm_db.fetch_tuple(stmt)
while ( rowlist ) :
row = []
colcount = 0
for col in rowlist:
try:
if (types[colcount] in ["int","bigint"]):
row.append(int(col))
elif (types[colcount] in ["decimal","real"]):
row.append(float(col))
elif (types[colcount] in ["date","time","timestamp"]):
row.append(str(col))
else:
row.append(col)
except:
row.append(col)
colcount += 1
rows.append(row)
rowlist = ibm_db.fetch_tuple(stmt)
if flag(["-r","-array"]):
rows.insert(0,columns)
if len(procArgs) > 0:
allresults = []
allresults.append(rows)
for x in result[1:]:
allresults.append(x)
return allresults # rows,returned_results
else:
return rows
else:
df = pandas.DataFrame.from_records(rows,columns=columns)
if flag("-grid") or _settings['display'] == 'GRID':
if (_environment['qgrid'] == False):
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
pdisplay(df)
else:
try:
pdisplay(qgrid.show_grid(df))
except:
errormsg("Grid cannot be used to display data with duplicate column names. Use option -a or %sql OPTION DISPLAY PANDAS instead.")
return
else:
if flag(["-a","-all"]) or _settings["maxrows"] == -1 : # All of the rows
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
pdisplay(df)
else:
return df
else:
if len(procArgs) > 0:
allresults = []
for x in result[1:]:
allresults.append(x)
return allresults # rows,returned_results
else:
return None
except Exception as err:
db2_error(False)
return None
###Output
_____no_output_____
###Markdown
Parse Prepare/ExecuteThe PREPARE statement is used for repeated execution of a SQL statement. The PREPARE statement has the format:```stmt = PREPARE SELECT EMPNO FROM EMPLOYEE WHERE WORKDEPT=? AND SALARY<?```The SQL statement that you want executed is placed after the PREPARE statement with the location of variables marked with ? (parameter) markers. The variable stmt contains the prepared statement that needs to be passed to the EXECUTE statement. The EXECUTE statement has the format:```EXECUTE :x USING z, y, s ```The first variable (:x) is the name of the variable that you assigned the results of the prepare statement. The values after the USING clause are substituted into the prepare statement where the ? markers are found. If the values in the USING clause are variable names (z, y, s), a **link** is created to these variables as part of the execute statement. If you use the variable substitution form of variable name (:z, :y, :s), the **contents** of the variable are placed into the USING clause. Normally this would not make much of a difference except when you are dealing with binary strings or JSON strings where the quote characters may cause some problems when substituted into the statement.
###Code
def parsePExec(hdbc, inSQL):
import ibm_db
global _stmt, _stmtID, _stmtSQL, sqlcode
cParms = inSQL.split()
parmCount = len(cParms)
if (parmCount == 0): return(None) # Nothing to do but this shouldn't happen
keyword = cParms[0].upper() # Upper case the keyword
if (keyword == "PREPARE"): # Prepare the following SQL
uSQL = inSQL.upper()
found = uSQL.find("PREPARE")
sql = inSQL[found+7:].strip()
try:
            pattern = r"\?\*[0-9]+"
findparm = re.search(pattern,sql)
while findparm != None:
found = findparm.group(0)
count = int(found[2:])
markers = ('?,' * count)[:-1]
sql = sql.replace(found,markers)
findparm = re.search(pattern,sql)
stmt = ibm_db.prepare(hdbc,sql) # Check error code here
if (stmt == False):
db2_error(False)
return(False)
stmttext = str(stmt).strip()
stmtID = stmttext[33:48].strip()
if (stmtID in _stmtID) == False:
_stmt.append(stmt) # Prepare and return STMT to caller
_stmtID.append(stmtID)
else:
stmtIX = _stmtID.index(stmtID)
            _stmt[stmtIX] = stmt
return(stmtID)
except Exception as err:
print(err)
db2_error(False)
return(False)
if (keyword == "EXECUTE"): # Execute the prepare statement
if (parmCount < 2): return(False) # No stmtID available
stmtID = cParms[1].strip()
if (stmtID in _stmtID) == False:
errormsg("Prepared statement not found or invalid.")
return(False)
stmtIX = _stmtID.index(stmtID)
stmt = _stmt[stmtIX]
try:
if (parmCount == 2): # Only the statement handle available
result = ibm_db.execute(stmt) # Run it
elif (parmCount == 3): # Not quite enough arguments
errormsg("Missing or invalid USING clause on EXECUTE statement.")
sqlcode = -99999
return(False)
else:
using = cParms[2].upper()
if (using != "USING"): # Bad syntax again
errormsg("Missing USING clause on EXECUTE statement.")
sqlcode = -99999
return(False)
uSQL = inSQL.upper()
found = uSQL.find("USING")
parmString = inSQL[found+5:].strip()
parmset = splitargs(parmString)
if (len(parmset) == 0):
errormsg("Missing parameters after the USING clause.")
sqlcode = -99999
return(False)
parms = []
parm_count = 0
CONSTANT = 0
VARIABLE = 1
const = [0]
const_cnt = 0
for v in parmset:
parm_count = parm_count + 1
if (v[1] == True or v[2] == True): # v[1] true if string, v[2] true if num
parm_type = CONSTANT
const_cnt = const_cnt + 1
if (v[2] == True):
if (isinstance(v[0],int) == True): # Integer value
sql_type = ibm_db.SQL_INTEGER
elif (isinstance(v[0],float) == True): # Float value
sql_type = ibm_db.SQL_DOUBLE
else:
sql_type = ibm_db.SQL_INTEGER
else:
sql_type = ibm_db.SQL_CHAR
const.append(v[0])
else:
parm_type = VARIABLE
# See if the variable has a type associated with it varname@type
varset = v[0].split("@")
parm_name = varset[0]
parm_datatype = "char"
# Does the variable exist?
if (parm_name not in globals()):
errormsg("SQL Execute parameter " + parm_name + " not found")
sqlcode = -99999
                    return(False)
if (len(varset) > 1): # Type provided
parm_datatype = varset[1]
if (parm_datatype == "dec" or parm_datatype == "decimal"):
sql_type = ibm_db.SQL_DOUBLE
elif (parm_datatype == "bin" or parm_datatype == "binary"):
sql_type = ibm_db.SQL_BINARY
elif (parm_datatype == "int" or parm_datatype == "integer"):
sql_type = ibm_db.SQL_INTEGER
else:
sql_type = ibm_db.SQL_CHAR
try:
if (parm_type == VARIABLE):
result = ibm_db.bind_param(stmt, parm_count, globals()[parm_name], ibm_db.SQL_PARAM_INPUT, sql_type)
else:
result = ibm_db.bind_param(stmt, parm_count, const[const_cnt], ibm_db.SQL_PARAM_INPUT, sql_type)
except:
result = False
if (result == False):
errormsg("SQL Bind on variable " + parm_name + " failed.")
sqlcode = -99999
                    return(False)
result = ibm_db.execute(stmt) # ,tuple(parms))
if (result == False):
errormsg("SQL Execute failed.")
return(False)
if (ibm_db.num_fields(stmt) == 0): return(True) # Command successfully completed
return(fetchResults(stmt))
except Exception as err:
db2_error(False)
return(False)
return(False)
return(False)
###Output
_____no_output_____
###Markdown
Fetch Result SetThis code will take the stmt handle and then produce a result set of rows as either an array (`-r`,`-array`) or as an array of json records (`-json`).
###Code
def fetchResults(stmt):
global sqlcode
rows = []
columns, types = getColumns(stmt)
# By default we assume that the data will be an array
is_array = True
# Check what type of data we want returned - array or json
if (flag(["-r","-array"]) == False):
# See if we want it in JSON format, if not it remains as an array
if (flag("-json") == True):
is_array = False
# Set column names to lowercase for JSON records
if (is_array == False):
        columns = [col.lower() for col in columns] # Convert to lowercase for ease of access
# First row of an array has the column names in it
if (is_array == True):
rows.append(columns)
result = ibm_db.fetch_tuple(stmt)
rowcount = 0
while (result):
rowcount += 1
if (is_array == True):
row = []
else:
row = {}
colcount = 0
for col in result:
try:
if (types[colcount] in ["int","bigint"]):
if (is_array == True):
row.append(int(col))
else:
row[columns[colcount]] = int(col)
elif (types[colcount] in ["decimal","real"]):
if (is_array == True):
row.append(float(col))
else:
row[columns[colcount]] = float(col)
elif (types[colcount] in ["date","time","timestamp"]):
if (is_array == True):
row.append(str(col))
else:
row[columns[colcount]] = str(col)
else:
if (is_array == True):
row.append(col)
else:
row[columns[colcount]] = col
except:
if (is_array == True):
row.append(col)
else:
row[columns[colcount]] = col
colcount += 1
rows.append(row)
result = ibm_db.fetch_tuple(stmt)
if (rowcount == 0):
sqlcode = 100
else:
sqlcode = 0
return rows
###Output
_____no_output_____
###Markdown
Parse CommitThere are three possible COMMIT verbs that can be used:- COMMIT [WORK] - Commit the work in progress - The WORK keyword is not checked for- ROLLBACK - Roll back the unit of work- AUTOCOMMIT ON/OFF - Are statements committed on or off?The statement is passed to this routine and then checked.
###Code
def parseCommit(sql):
global _hdbc, _hdbi, _connected, _runtime, _stmt, _stmtID, _stmtSQL
if (_connected == False): return # Nothing to do if we are not connected
cParms = sql.split()
if (len(cParms) == 0): return # Nothing to do but this shouldn't happen
keyword = cParms[0].upper() # Upper case the keyword
if (keyword == "COMMIT"): # Commit the work that was done
try:
result = ibm_db.commit (_hdbc) # Commit the connection
if (len(cParms) > 1):
keyword = cParms[1].upper()
if (keyword == "HOLD"):
return
del _stmt[:]
del _stmtID[:]
except Exception as err:
db2_error(False)
return
if (keyword == "ROLLBACK"): # Rollback the work that was done
try:
result = ibm_db.rollback(_hdbc) # Rollback the connection
del _stmt[:]
del _stmtID[:]
except Exception as err:
db2_error(False)
return
if (keyword == "AUTOCOMMIT"): # Is autocommit on or off
if (len(cParms) > 1):
op = cParms[1].upper() # Need ON or OFF value
else:
return
try:
if (op == "OFF"):
ibm_db.autocommit(_hdbc, False)
elif (op == "ON"):
ibm_db.autocommit (_hdbc, True)
return
except Exception as err:
db2_error(False)
return
return
###Output
_____no_output_____
###Markdown
Set FlagsThis code will take the input SQL block and update the global flag list. The global flag list is just a list of options that are set at the beginning of a code block. The absence of a flag means it is false. If it exists it is true.
###Code
def setFlags(inSQL):
global _flags
_flags = [] # Delete all of the current flag settings
pos = 0
end = len(inSQL)-1
inFlag = False
ignore = False
outSQL = ""
flag = ""
while (pos <= end):
ch = inSQL[pos]
if (ignore == True):
outSQL = outSQL + ch
else:
if (inFlag == True):
if (ch != " "):
flag = flag + ch
else:
_flags.append(flag)
inFlag = False
else:
if (ch == "-"):
flag = "-"
inFlag = True
elif (ch == ' '):
outSQL = outSQL + ch
else:
outSQL = outSQL + ch
ignore = True
pos += 1
if (inFlag == True):
_flags.append(flag)
return outSQL
###Output
_____no_output_____
###Markdown
Check to see if flag ExistsThis function determines whether or not a flag exists in the global flag array. Absence of a value means it is false. The parameter can be a single value, or an array of values.
###Code
def flag(inflag):
global _flags
if isinstance(inflag,list):
for x in inflag:
if (x in _flags):
return True
return False
else:
if (inflag in _flags):
return True
else:
return False
###Output
_____no_output_____
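###Markdown
A small sketch (not part of the extension) showing how setFlags pulls the option flags off the front of a command and how flag then tests for them; the SQL text is made up.
###Code
# setFlags returns the SQL with the leading options removed and records them in the global flag list
remaining = setFlags("-q -d CREATE PROCEDURE SHOWEMP() BEGIN END")
print(remaining)              # CREATE PROCEDURE SHOWEMP() BEGIN END
print(flag("-q"))             # True  - the -q flag was present
print(flag(["-e", "-echo"]))  # False - neither -e nor -echo was specified
###Output
_____no_output_____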
###Markdown
Generate a list of SQL lines based on a delimiterNote that this function will make sure that quotes are properly maintained so that delimiters inside of quoted strings do not cause errors.
###Code
def splitSQL(inputString, delimiter):
pos = 0
arg = ""
results = []
quoteCH = ""
inSQL = inputString.strip()
if (len(inSQL) == 0): return(results) # Not much to do here - no args found
while pos < len(inSQL):
ch = inSQL[pos]
pos += 1
if (ch in ('"',"'")): # Is this a quote characters?
arg = arg + ch # Keep appending the characters to the current arg
if (ch == quoteCH): # Is this quote character we are in
quoteCH = ""
elif (quoteCH == ""): # Create the quote
quoteCH = ch
else:
None
elif (quoteCH != ""): # Still in a quote
arg = arg + ch
elif (ch == delimiter): # Is there a delimiter?
results.append(arg)
arg = ""
else:
arg = arg + ch
if (arg != ""):
results.append(arg)
return(results)
###Output
_____no_output_____
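###Markdown
The next cell is an illustrative sketch, not part of the extension, showing that splitSQL only splits on delimiters that are outside of quoted strings; the statements are made up.
###Code
# The semi-colon inside the quoted string is preserved; only the outer one splits the statements
print(splitSQL("SELECT 'a;b' FROM SYSIBM.SYSDUMMY1; VALUES 2", ";"))
# Expected: ["SELECT 'a;b' FROM SYSIBM.SYSDUMMY1", ' VALUES 2']  (leading blanks are kept)
###Output
_____no_output_____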
###Markdown
Main %sql Magic DefinitionThe main %sql Magic logic is found in this section of code. This code will register the Magic command and allow Jupyter notebooks to interact with Db2 by using this extension.
###Code
@magics_class
class DB2(Magics):
@needs_local_scope
@line_cell_magic
def sql(self, line, cell=None, local_ns=None):
        # Before we even get started, check to see if you have connected yet. Without a connection we
# can't do anything. You may have a connection request in the code, so if that is true, we run those,
# otherwise we connect immediately
# If your statement is not a connect, and you haven't connected, we need to do it for you
global _settings, _environment
global _hdbc, _hdbi, _connected, _runtime, sqlstate, sqlerror, sqlcode, sqlelapsed
# If you use %sql (line) we just run the SQL. If you use %%SQL the entire cell is run.
flag_cell = False
flag_output = False
sqlstate = "0"
sqlerror = ""
sqlcode = 0
sqlelapsed = 0
start_time = time.time()
end_time = time.time()
# Macros gets expanded before anything is done
SQL1 = setFlags(line.strip())
SQL1 = checkMacro(SQL1) # Update the SQL if any macros are in there
SQL2 = cell
if flag("-sampledata"): # Check if you only want sample data loaded
if (_connected == False):
if (db2_doConnect() == False):
errormsg('A CONNECT statement must be issued before issuing SQL statements.')
return
db2_create_sample(flag(["-q","-quiet"]))
return
if SQL1 == "?" or flag(["-h","-help"]): # Are you asking for help
sqlhelp()
return
if len(SQL1) == 0 and SQL2 == None: return # Nothing to do here
# Check for help
if SQL1.upper() == "? CONNECT": # Are you asking for help on CONNECT
connected_help()
return
sqlType,remainder = sqlParser(SQL1,local_ns) # What type of command do you have?
if (sqlType == "CONNECT"): # A connect request
parseConnect(SQL1,local_ns)
return
elif (sqlType == "DEFINE"): # Create a macro from the body
result = setMacro(SQL2,remainder)
return
elif (sqlType == "OPTION"):
setOptions(SQL1)
return
elif (sqlType == 'COMMIT' or sqlType == 'ROLLBACK' or sqlType == 'AUTOCOMMIT'):
parseCommit(remainder)
return
elif (sqlType == "PREPARE"):
pstmt = parsePExec(_hdbc, remainder)
return(pstmt)
elif (sqlType == "EXECUTE"):
result = parsePExec(_hdbc, remainder)
return(result)
elif (sqlType == "CALL"):
result = parseCall(_hdbc, remainder, local_ns)
return(result)
else:
pass
sql = SQL1
if (sql == ""): sql = SQL2
if (sql == ""): return # Nothing to do here
if (_connected == False):
if (db2_doConnect() == False):
errormsg('A CONNECT statement must be issued before issuing SQL statements.')
return
if _settings["maxrows"] == -1: # Set the return result size
pandas.reset_option('display.max_rows')
else:
pandas.options.display.max_rows = _settings["maxrows"]
runSQL = re.sub('.*?--.*$',"",sql,flags=re.M)
remainder = runSQL.replace("\n"," ")
if flag(["-d","-delim"]):
sqlLines = splitSQL(remainder,"@")
else:
sqlLines = splitSQL(remainder,";")
flag_cell = True
# For each line figure out if you run it as a command (db2) or select (sql)
for sqlin in sqlLines: # Run each command
sqlin = checkMacro(sqlin) # Update based on any macros
sqlType, sql = sqlParser(sqlin,local_ns) # Parse the SQL
if (sql.strip() == ""): continue
if flag(["-e","-echo"]): debug(sql,False)
if flag("-t"):
cnt = sqlTimer(_hdbc, _settings["runtime"], sql) # Given the sql and parameters, clock the time
if (cnt >= 0): print("Total iterations in %s second(s): %s" % (_settings["runtime"],cnt))
return(cnt)
elif flag(["-pb","-bar","-pp","-pie","-pl","-line"]): # We are plotting some results
plotData(_hdbi, sql) # Plot the data and return
return
else:
try: # See if we have an answer set
stmt = ibm_db.prepare(_hdbc,sql)
if (ibm_db.num_fields(stmt) == 0): # No, so we just execute the code
result = ibm_db.execute(stmt) # Run it
if (result == False): # Error executing the code
db2_error(flag(["-q","-quiet"]))
continue
rowcount = ibm_db.num_rows(stmt)
if (rowcount == 0 and flag(["-q","-quiet"]) == False):
errormsg("No rows found.")
continue # Continue running
elif flag(["-r","-array","-j","-json"]): # raw, json, format json
row_count = 0
resultSet = []
try:
result = ibm_db.execute(stmt) # Run it
if (result == False): # Error executing the code
db2_error(flag(["-q","-quiet"]))
return
if flag("-j"): # JSON single output
row_count = 0
json_results = []
while( ibm_db.fetch_row(stmt) ):
row_count = row_count + 1
jsonVal = ibm_db.result(stmt,0)
jsonDict = json.loads(jsonVal)
json_results.append(jsonDict)
flag_output = True
if (row_count == 0): sqlcode = 100
return(json_results)
else:
return(fetchResults(stmt))
except Exception as err:
db2_error(flag(["-q","-quiet"]))
return
else:
try:
df = pandas.read_sql(sql,_hdbi)
except Exception as err:
db2_error(False)
return
if (len(df) == 0):
sqlcode = 100
if (flag(["-q","-quiet"]) == False):
errormsg("No rows found")
continue
flag_output = True
if flag("-grid") or _settings['display'] == 'GRID': # Check to see if we can display the results
if (_environment['qgrid'] == False):
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
print(df.to_string())
else:
try:
pdisplay(qgrid.show_grid(df))
except:
errormsg("Grid cannot be used to display data with duplicate column names. Use option -a or %sql OPTION DISPLAY PANDAS instead.")
return
else:
if flag(["-a","-all"]) or _settings["maxrows"] == -1 : # All of the rows
pandas.options.display.max_rows = None
pandas.options.display.max_columns = None
return df # print(df.to_string())
else:
pandas.options.display.max_rows = _settings["maxrows"]
pandas.options.display.max_columns = None
return df # pdisplay(df) # print(df.to_string())
except:
db2_error(flag(["-q","-quiet"]))
continue # return
end_time = time.time()
sqlelapsed = end_time - start_time
if (flag_output == False and flag(["-q","-quiet"]) == False): print("Command completed.")
# Register the Magic extension in Jupyter
ip = get_ipython()
ip.register_magics(DB2)
load_settings()
success("Db2 Extensions Loaded.")
###Output
_____no_output_____
###Markdown
Pre-defined MacrosThese macros are used to simulate the LIST TABLES and DESCRIBE commands that are available from within the Db2 command line.
###Code
%%sql define LIST
#
# The LIST macro is used to list all of the tables in the current schema or for all schemas
#
var syntax Syntax: LIST TABLES [FOR ALL | FOR SCHEMA name]
#
# Only LIST TABLES is supported by this macro
#
if {^1} <> 'TABLES'
exit {syntax}
endif
#
# This SQL is a temporary table that contains the description of the different table types
#
WITH TYPES(TYPE,DESCRIPTION) AS (
VALUES
('A','Alias'),
('G','Created temporary table'),
('H','Hierarchy table'),
('L','Detached table'),
('N','Nickname'),
('S','Materialized query table'),
('T','Table'),
('U','Typed table'),
('V','View'),
('W','Typed view')
)
SELECT TABNAME, TABSCHEMA, T.DESCRIPTION FROM SYSCAT.TABLES S, TYPES T
WHERE T.TYPE = S.TYPE
#
# Case 1: No arguments - LIST TABLES
#
if {argc} == 1
AND OWNER = CURRENT USER
ORDER BY TABNAME, TABSCHEMA
return
endif
#
# Case 2: Need 3 arguments - LIST TABLES FOR ALL
#
if {argc} == 3
if {^2}&{^3} == 'FOR&ALL'
ORDER BY TABNAME, TABSCHEMA
return
endif
exit {syntax}
endif
#
# Case 3: Need FOR SCHEMA something here
#
if {argc} == 4
if {^2}&{^3} == 'FOR&SCHEMA'
AND TABSCHEMA = '{^4}'
ORDER BY TABNAME, TABSCHEMA
return
else
exit {syntax}
endif
endif
#
# Nothing matched - Error
#
exit {syntax}
%%sql define describe
#
# The DESCRIBE command can either use the syntax DESCRIBE TABLE <name> or DESCRIBE TABLE SELECT ...
#
var syntax Syntax: DESCRIBE [TABLE name | SELECT statement]
#
# Check to see what count of variables is... Must be at least 2 items DESCRIBE TABLE x or SELECT x
#
if {argc} < 2
exit {syntax}
endif
CALL ADMIN_CMD('{*0}');
###Output
_____no_output_____
###Markdown
Set the table formatting to left align a table in a cell. By default, tables are centered in a cell. Remove this cell if you don't want to change Jupyter notebook formatting for tables. In addition, we skip this code if you are running in a shell environment rather than a Jupyter notebook
###Code
#%%html
#<style>
# table {margin-left: 0 !important; text-align: left;}
#</style>
###Output
_____no_output_____
###Markdown
DB2 Jupyter Notebook ExtensionsVersion: 2020-04-05 This code is imported as a Jupyter notebook extension in any notebooks you create with DB2 code in it. Place the following line of code in any notebook that you want to use these commands with:%run db2.ipynbThis code defines a Jupyter/Python magic command called `%sql` which allows you to execute DB2 specific calls to the database. There are other packages available for manipulating databases, but this one has been specifically designed for demonstrating a number of the SQL features available in DB2.There are two ways of executing the `%sql` command. A single line SQL statement would use the line format of the magic command:%sql SELECT * FROM EMPLOYEEIf you have a large block of sql then you would place the %%sql command at the beginning of the block and then place the SQL statements into the remainder of the block. Using this form of the `%%sql` statement means that the notebook cell can only contain SQL and no other statements.%%sqlSELECT * FROM EMPLOYEEORDER BY LASTNAMEYou can have multiple lines in the SQL block (`%%sql`). The default SQL delimiter is the semi-colon (`;`).If you have scripts (triggers, procedures, functions) that use the semi-colon as part of the script, you will need to use the `-d` option to change the delimiter to an at "`@`" sign. %%sql -dSELECT * FROM EMPLOYEE@CREATE PROCEDURE ...@The `%sql` command allows most DB2 commands to execute and has a special version of the CONNECT statement. A CONNECT by itself will attempt to reconnect to the database using previously used settings. If it cannot connect, it will prompt the user for additional information. The CONNECT command has the following format:%sql CONNECT TO <database> USER <userid> USING <password | ?> HOST <ip address> PORT <port number>If you use a "`?`" for the password field, the system will prompt you for a password. This avoids typing the password as clear text on the screen. If a connection is not successful, the system will print the error message associated with the connect request.If the connection is successful, the parameters are saved on your system and will be used the next time you run a SQL statement, or when you issue the %sql CONNECT command with no parameters. In addition to the -d option, there are a number of different options that you can specify at the beginning of the SQL: - `-d, -delim` - Change SQL delimiter to "`@`" from "`;`" - `-q, -quiet` - Quiet results - no messages returned from the function - `-r, -array` - Return the result set as an array of values instead of a dataframe - `-t, -time` - Time the following SQL statement and return the number of times it executes in 1 second - `-j` - Format the first character column of the result set as a JSON record - `-json` - Return result set as an array of json records - `-a, -all` - Return all rows in answer set and do not limit display - `-grid` - Display the results in a scrollable grid - `-pb, -bar` - Plot the results as a bar chart - `-pl, -line` - Plot the results as a line chart - `-pp, -pie` - Plot the results as a pie chart - `-e, -echo` - Any macro expansions are displayed in an output box - `-sampledata` - Create and load the EMPLOYEE and DEPARTMENT tablesYou can pass python variables to the `%sql` command by using the `{}` braces with the name of the variable in between. Note that you will need to place proper punctuation around the variable in the event the SQL command requires it.
For instance, the following example will find employee '000010' in the EMPLOYEE table.empno = '000010'%sql SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO='{empno}'The other option is to use parameter markers. What you would need to do is use the name of the variable with a colon in front of it and the program will prepare the statement and then pass the variable to Db2 when the statement is executed. This allows you to create complex strings that might contain quote characters and other special characters and not have to worry about enclosing the string with the correct quotes. Note that you do not place the quotes around the variable even though it is a string.empno = '000020'%sql SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO=:empno Development SQLThe previous set of `%sql` and `%%sql` commands deals with SQL statements and commands that are run in an interactive manner. There is a class of SQL commands that are more suited to a development environment where code is iterated or requires changing input. The commands that are associated with this form of SQL are:- AUTOCOMMIT- COMMIT/ROLLBACK- PREPARE - EXECUTEAutocommit is the default manner in which SQL statements are executed. At the end of the successful completion of a statement, the results are committed to the database. There is no concept of a transaction where multiple DML/DDL statements are considered one transaction. The `AUTOCOMMIT` command allows you to turn autocommit `OFF` or `ON`. This means that the SQL commands run after the `AUTOCOMMIT OFF` command are not committed to the database until a `COMMIT` or `ROLLBACK` command is issued.`COMMIT` (`WORK`) will finalize all of the transactions (`COMMIT`) to the database and `ROLLBACK` will undo all of the changes. If you issue a `SELECT` statement during the execution of your block, the results will reflect all of your changes. If you `ROLLBACK` the transaction, the changes will be lost.`PREPARE` is typically used in a situation where you want to repeatedly execute a SQL statement with different variables without incurring the SQL compilation overhead. For instance:```x = %sql PREPARE SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO=?for y in ['000010','000020','000030']: %sql execute :x using :y````EXECUTE` is used to execute a previously compiled statement. To retrieve the error codes that might be associated with any SQL call, the following variables are updated after every call:* SQLCODE* SQLSTATE* SQLERROR - Full error message retrieved from Db2 Install Db2 Python DriverIf the ibm_db driver is not installed on your system, the subsequent Db2 commands will fail. In order to install the Db2 driver, issue the following command from a Jupyter notebook cell:```!pip install --user ibm_db``` Db2 Jupyter ExtensionsThis section of code has the import statements and global variables defined for the remainder of the functions.
###Code
#
# Set up Jupyter MAGIC commands "sql".
# %sql will return results from a DB2 select statement or execute a DB2 command
#
# IBM 2019: George Baklarz
# Version 2019-10-03
#
from __future__ import print_function
from IPython.display import HTML as pHTML, Image as pImage, display as pdisplay, Javascript as Javascript
from IPython.core.magic import (Magics, magics_class, line_magic,
cell_magic, line_cell_magic, needs_local_scope)
import ibm_db
import pandas
import ibm_db_dbi
import json
import matplotlib
import matplotlib.pyplot as plt
import getpass
import os
import pickle
import time
import sys
import re
import warnings
warnings.filterwarnings("ignore")
# Python Hack for Input between 2 and 3
try:
input = raw_input
except NameError:
pass
_settings = {
"maxrows" : 10,
"maxgrid" : 5,
"runtime" : 1,
"display" : "PANDAS",
"database" : "",
"hostname" : "localhost",
"port" : "50000",
"protocol" : "TCPIP",
"uid" : "DB2INST1",
"pwd" : "password",
"ssl" : ""
}
_environment = {
"jupyter" : True,
"qgrid" : True
}
_display = {
'fullWidthRows': True,
'syncColumnCellResize': True,
'forceFitColumns': False,
'defaultColumnWidth': 150,
'rowHeight': 28,
'enableColumnReorder': False,
'enableTextSelectionOnCells': True,
'editable': False,
'autoEdit': False,
'explicitInitialization': True,
'maxVisibleRows': 5,
'minVisibleRows': 5,
'sortable': True,
'filterable': False,
'highlightSelectedCell': False,
'highlightSelectedRow': True
}
# Connection settings for statements
_connected = False
_hdbc = None
_hdbi = None
_stmt = []
_stmtID = []
_stmtSQL = []
_vars = {}
_macros = {}
_flags = []
_debug = False
# Db2 Error Messages and Codes
sqlcode = 0
sqlstate = "0"
sqlerror = ""
sqlelapsed = 0
# Check to see if QGrid is installed
try:
import qgrid
qgrid.set_defaults(grid_options=_display)
except:
_environment['qgrid'] = False
# Check if we are running in iPython or Jupyter
try:
if (get_ipython().config == {}):
_environment['jupyter'] = False
_environment['qgrid'] = False
else:
_environment['jupyter'] = True
except:
_environment['jupyter'] = False
_environment['qgrid'] = False
###Output
_____no_output_____
###Markdown
OptionsThere are four options that can be set with the **`%sql`** command. These options are shown below with the default value shown in parenthesis.- **`MAXROWS n (10)`** - The maximum number of rows that will be displayed before summary information is shown. If the answer set is less than this number of rows, it will be completely shown on the screen. If the answer set is larger than this amount, only the first 5 rows and last 5 rows of the answer set will be displayed. If you want to display a very large answer set, you may want to consider using the grid option `-g` to display the results in a scrollable table. If you really want to show all results then setting MAXROWS to -1 will return all output.- **`MAXGRID n (5)`** - The maximum size of a grid display. When displaying a result set in a grid `-g`, the default size of the display window is 5 rows. You can set this to a larger size so that more rows are shown on the screen. Note that the minimum size always remains at 5 which means that if the system is unable to display your maximum row size it will reduce the table display until it fits.- **`DISPLAY PANDAS | GRID (PANDAS)`** - Display the results as a PANDAS dataframe (default) or as a scrollable GRID- **`RUNTIME n (1)`** - When using the timer option on a SQL statement, the statement will execute for **`n`** number of seconds. The result that is returned is the number of times the SQL statement executed rather than the execution time of the statement. The default value for runtime is one second, so if the SQL is very complex you will need to increase the run time.- **`LIST`** - Display the current settingsTo set an option use the following syntax:```%sql option option_name value option_name value ....```The following example sets all options:```%sql option maxrows 100 runtime 2 display grid maxgrid 10```The values will **not** be saved between Jupyter notebooks sessions. If you need to retrieve the current options values, use the LIST command as the only argument:```%sql option list```
###Code
def setOptions(inSQL):
global _settings, _display
cParms = inSQL.split()
cnt = 0
while cnt < len(cParms):
if cParms[cnt].upper() == 'MAXROWS':
if cnt+1 < len(cParms):
try:
_settings["maxrows"] = int(cParms[cnt+1])
except Exception as err:
errormsg("Invalid MAXROWS value provided.")
pass
cnt = cnt + 1
else:
errormsg("No maximum rows specified for the MAXROWS option.")
return
elif cParms[cnt].upper() == 'MAXGRID':
if cnt+1 < len(cParms):
try:
maxgrid = int(cParms[cnt+1])
                        if (maxgrid <= 5):                      # Minimum window size is 5
                            maxgrid = 5
                        _settings["maxgrid"] = maxgrid
                        _display["maxVisibleRows"] = maxgrid
try:
import qgrid
qgrid.set_defaults(grid_options=_display)
except:
_environment['qgrid'] = False
except Exception as err:
errormsg("Invalid MAXGRID value provided.")
pass
cnt = cnt + 1
else:
errormsg("No maximum rows specified for the MAXROWS option.")
return
elif cParms[cnt].upper() == 'RUNTIME':
if cnt+1 < len(cParms):
try:
_settings["runtime"] = int(cParms[cnt+1])
except Exception as err:
errormsg("Invalid RUNTIME value provided.")
pass
cnt = cnt + 1
else:
errormsg("No value provided for the RUNTIME option.")
return
elif cParms[cnt].upper() == 'DISPLAY':
if cnt+1 < len(cParms):
if (cParms[cnt+1].upper() == 'GRID'):
_settings["display"] = 'GRID'
elif (cParms[cnt+1].upper() == 'PANDAS'):
_settings["display"] = 'PANDAS'
else:
errormsg("Invalid DISPLAY value provided.")
cnt = cnt + 1
else:
errormsg("No value provided for the DISPLAY option.")
return
elif (cParms[cnt].upper() == 'LIST'):
print("(MAXROWS) Maximum number of rows displayed: " + str(_settings["maxrows"]))
print("(MAXGRID) Maximum grid display size: " + str(_settings["maxgrid"]))
print("(RUNTIME) How many seconds to a run a statement for performance testing: " + str(_settings["runtime"]))
print("(DISPLAY) Use PANDAS or GRID display format for output: " + _settings["display"])
return
else:
cnt = cnt + 1
save_settings()
###Output
_____no_output_____
###Markdown
SQL HelpThe calling format of this routine is:```sqlhelp()```This code displays help related to the %sql magic command. This help is displayed when you issue a %sql or %%sql command by itself, or use the %sql -h flag.
###Code
def sqlhelp():
global _environment
if (_environment["jupyter"] == True):
sd = '<td style="text-align:left;">'
ed1 = '</td>'
ed2 = '</td>'
sh = '<th style="text-align:left;">'
eh1 = '</th>'
eh2 = '</th>'
sr = '<tr>'
er = '</tr>'
helpSQL = """
<h3>SQL Options</h3>
<p>The following options are available as part of a SQL statement. The options are always preceded with a
minus sign (i.e. -q).
<table>
{sr}
{sh}Option{eh1}{sh}Description{eh2}
{er}
{sr}
{sd}a, all{ed1}{sd}Return all rows in answer set and do not limit display{ed2}
{er}
{sr}
{sd}d{ed1}{sd}Change SQL delimiter to "@" from ";"{ed2}
{er}
{sr}
    {sd}e, echo{ed1}{sd}Echo the SQL command that was generated after macro and variable substitution.{ed2}
{er}
{sr}
{sd}h, help{ed1}{sd}Display %sql help information.{ed2}
{er}
{sr}
{sd}j{ed1}{sd}Create a pretty JSON representation. Only the first column is formatted{ed2}
{er}
{sr}
{sd}json{ed1}{sd}Retrieve the result set as a JSON record{ed2}
{er}
{sr}
{sd}pb, bar{ed1}{sd}Plot the results as a bar chart{ed2}
{er}
{sr}
{sd}pl, line{ed1}{sd}Plot the results as a line chart{ed2}
{er}
{sr}
{sd}pp, pie{ed1}{sd}Plot Pie: Plot the results as a pie chart{ed2}
{er}
{sr}
{sd}q, quiet{ed1}{sd}Quiet results - no answer set or messages returned from the function{ed2}
{er}
{sr}
{sd}r, array{ed1}{sd}Return the result set as an array of values{ed2}
{er}
{sr}
{sd}sampledata{ed1}{sd}Create and load the EMPLOYEE and DEPARTMENT tables{ed2}
{er}
{sr}
{sd}t,time{ed1}{sd}Time the following SQL statement and return the number of times it executes in 1 second{ed2}
{er}
{sr}
{sd}grid{ed1}{sd}Display the results in a scrollable grid{ed2}
{er}
</table>
"""
else:
helpSQL = """
SQL Options
The following options are available as part of a SQL statement. Options are always
preceded with a minus sign (i.e. -q).
Option Description
a, all Return all rows in answer set and do not limit display
d Change SQL delimiter to "@" from ";"
e, echo Echo the SQL command that was generated after substitution
h, help Display %sql help information
j Create a pretty JSON representation. Only the first column is formatted
json Retrieve the result set as a JSON record
pb, bar Plot the results as a bar chart
pl, line Plot the results as a line chart
pp, pie Plot Pie: Plot the results as a pie chart
q, quiet Quiet results - no answer set or messages returned from the function
r, array Return the result set as an array of values
sampledata Create and load the EMPLOYEE and DEPARTMENT tables
t,time Time the SQL statement and return the execution count per second
grid Display the results in a scrollable grid
"""
helpSQL = helpSQL.format(**locals())
if (_environment["jupyter"] == True):
pdisplay(pHTML(helpSQL))
else:
print(helpSQL)
###Output
_____no_output_____
###Markdown
Connection HelpThe calling format of this routine is:```connected_help()```This code displays help related to the CONNECT command. The help is displayed when you issue a %sql CONNECT command with no arguments, or when you run a SQL statement and there isn't a connection to a database yet.
###Code
def connected_help():
sd = '<td style="text-align:left;">'
ed = '</td>'
sh = '<th style="text-align:left;">'
eh = '</th>'
sr = '<tr>'
er = '</tr>'
if (_environment['jupyter'] == True):
helpConnect = """
<h3>Connecting to Db2</h3>
<p>The CONNECT command has the following format:
<p>
<pre>
%sql CONNECT TO <database> USER <userid> USING <password|?> HOST <ip address> PORT <port number> <SSL>
%sql CONNECT CREDENTIALS <varname>
%sql CONNECT CLOSE
%sql CONNECT RESET
%sql CONNECT PROMPT - use this to be prompted for values
</pre>
<p>
If you use a "?" for the password field, the system will prompt you for a password. This avoids typing the
password as clear text on the screen. If a connection is not successful, the system will print the error
message associated with the connect request.
<p>
The <b>CREDENTIALS</b> option allows you to use credentials that are supplied by Db2 on Cloud instances.
The credentials can be supplied as a variable and if successful, the variable will be saved to disk
for future use. If you create another notebook and use the identical syntax, if the variable
is not defined, the contents on disk will be used as the credentials. You should assign the
credentials to a variable that represents the database (or schema) that you are communicating with.
Using familiar names makes it easier to remember the credentials when connecting.
<p>
<b>CONNECT CLOSE</b> will close the current connection, but will not reset the database parameters. This means that
if you issue the CONNECT command again, the system should be able to reconnect you to the database.
<p>
<b>CONNECT RESET</b> will close the current connection and remove any information on the connection. You will need
to issue a new CONNECT statement with all of the connection information.
<p>
If the connection is successful, the parameters are saved on your system and will be used the next time you
run an SQL statement, or when you issue the %sql CONNECT command with no parameters.
<p>If you issue CONNECT RESET, all of the current values will be deleted and you will need to
issue a new CONNECT statement.
<p>A CONNECT command without any parameters will attempt to re-connect to the previous database you
    were using. If the connection could not be established, the program will prompt you for
the values. To cancel the connection attempt, enter a blank value for any of the values. The connection
panel will request the following values in order to connect to Db2:
<table>
{sr}
{sh}Setting{eh}
{sh}Description{eh}
{er}
{sr}
{sd}Database{ed}{sd}Database name you want to connect to.{ed}
{er}
{sr}
{sd}Hostname{ed}
    {sd}Use localhost if Db2 is running on your own machine, but this can be an IP address or host name.{ed}
{er}
{sr}
{sd}PORT{ed}
{sd}The port to use for connecting to Db2. This is usually 50000.{ed}
{er}
{sr}
{sd}SSL{ed}
    {sd}If you are connecting to a secure port (50001) with SSL then you must include this keyword in the connect string.{ed}
    {er}
    {sr}
{sd}Userid{ed}
{sd}The userid to use when connecting (usually DB2INST1){ed}
{er}
{sr}
{sd}Password{ed}
{sd}No password is provided so you have to enter a value{ed}
{er}
</table>
"""
else:
helpConnect = """\
Connecting to Db2
The CONNECT command has the following format:
%sql CONNECT TO database USER userid USING password | ?
HOST ip address PORT port number SSL
%sql CONNECT CREDENTIALS varname
%sql CONNECT CLOSE
%sql CONNECT RESET
If you use a "?" for the password field, the system will prompt you for a password.
This avoids typing the password as clear text on the screen. If a connection is
not successful, the system will print the error message associated with the connect
request.
The CREDENTIALS option allows you to use credentials that are supplied by Db2 on
Cloud instances. The credentials can be supplied as a variable and if successful,
the variable will be saved to disk for future use. If you create another notebook
and use the identical syntax, if the variable is not defined, the contents on disk
will be used as the credentials. You should assign the credentials to a variable
that represents the database (or schema) that you are communicating with. Using
familiar names makes it easier to remember the credentials when connecting.
CONNECT CLOSE will close the current connection, but will not reset the database
parameters. This means that if you issue the CONNECT command again, the system
should be able to reconnect you to the database.
CONNECT RESET will close the current connection and remove any information on the
connection. You will need to issue a new CONNECT statement with all of the connection
information.
If the connection is successful, the parameters are saved on your system and will be
used the next time you run an SQL statement, or when you issue the %sql CONNECT
command with no parameters. If you issue CONNECT RESET, all of the current values
will be deleted and you will need to issue a new CONNECT statement.
A CONNECT command without any parameters will attempt to re-connect to the previous
    database you were using. If the connection could not be established, the program will
prompt you for the values. To cancel the connection attempt, enter a blank value for
any of the values. The connection panel will request the following values in order
to connect to Db2:
Setting Description
Database Database name you want to connect to
Hostname Use localhost if Db2 is running on your own machine, but this can
be an IP address or host name.
PORT The port to use for connecting to Db2. This is usually 50000.
Userid The userid to use when connecting (usually DB2INST1)
Password No password is provided so you have to enter a value
SSL Include this keyword to indicate you are connecting via SSL (usually port 50001)
"""
helpConnect = helpConnect.format(**locals())
if (_environment['jupyter'] == True):
pdisplay(pHTML(helpConnect))
else:
print(helpConnect)
###Output
_____no_output_____
###Markdown
Prompt for Connection InformationIf you are running an SQL statement and have not yet connected to a database, the %sql command will prompt you for connection information. In order to connect to a database, you must supply:
- Database name
- Host name (IP address or name)
- Port number
- Userid
- Password
- Secure socket

The routine is called without any parameters:```connected_prompt()```
###Code
# Prompt for Connection information
def connected_prompt():
global _settings
_database = ''
_hostname = ''
_port = ''
_uid = ''
_pwd = ''
_ssl = ''
print("Enter the database connection details (Any empty value will cancel the connection)")
_database = input("Enter the database name: ");
if (_database.strip() == ""): return False
_hostname = input("Enter the HOST IP address or symbolic name: ");
if (_hostname.strip() == ""): return False
_port = input("Enter the PORT number: ");
if (_port.strip() == ""): return False
_ssl = input("Is this a secure (SSL) port (y or n)");
if (_ssl.strip() == ""): return False
if (_ssl == "n"):
_ssl = ""
else:
_ssl = "Security=SSL;"
_uid = input("Enter Userid on the DB2 system: ").upper();
if (_uid.strip() == ""): return False
_pwd = getpass.getpass("Password [password]: ");
if (_pwd.strip() == ""): return False
_settings["database"] = _database.strip()
_settings["hostname"] = _hostname.strip()
_settings["port"] = _port.strip()
_settings["uid"] = _uid.strip()
_settings["pwd"] = _pwd.strip()
_settings["ssl"] = _ssl.strip()
_settings["maxrows"] = 10
_settings["maxgrid"] = 5
_settings["runtime"] = 1
return True
# Split port and IP addresses
def split_string(in_port,splitter=":"):
# Split input into an IP address and Port number
global _settings
checkports = in_port.split(splitter)
ip = checkports[0]
if (len(checkports) > 1):
port = checkports[1]
else:
port = None
return ip, port
###Output
_____no_output_____
###Markdown
Connect Syntax ParserThe parseConnect routine is used to parse the CONNECT command that the user issued within the %sql command. The format of the command is:```parseConnect(inSQL, local_ns)```The inSQL string contains the CONNECT keyword with some additional parameters. The format of the CONNECT command is one of:
```
CONNECT RESET
CONNECT CLOSE
CONNECT CREDENTIALS <varname>
CONNECT TO database USER userid USING password HOST hostname PORT portnumber
```
If you have credentials available from Db2 on Cloud, place the contents of the credentials into a variable and then use the `CONNECT CREDENTIALS <varname>` syntax to connect to the database. In addition, supplying a question mark (?) for the password will result in the program prompting you for the password rather than having it as clear text in your scripts. When all of the information in the command has been checked, the db2_doConnect function is called to actually make the connection to the database.
###Code
# Parse the CONNECT statement and execute if possible
def parseConnect(inSQL,local_ns):
global _settings, _connected
_connected = False
cParms = inSQL.split()
cnt = 0
_settings["ssl"] = ""
while cnt < len(cParms):
if cParms[cnt].upper() == 'TO':
if cnt+1 < len(cParms):
_settings["database"] = cParms[cnt+1].upper()
cnt = cnt + 1
else:
errormsg("No database specified in the CONNECT statement")
return
elif cParms[cnt].upper() == "SSL":
_settings["ssl"] = "Security=SSL;"
cnt = cnt + 1
elif cParms[cnt].upper() == 'CREDENTIALS':
if cnt+1 < len(cParms):
credentials = cParms[cnt+1]
tempid = eval(credentials,local_ns)
if (isinstance(tempid,dict) == False):
errormsg("The CREDENTIALS variable (" + credentials + ") does not contain a valid Python dictionary (JSON object)")
return
if (tempid == None):
fname = credentials + ".pickle"
try:
with open(fname,'rb') as f:
_id = pickle.load(f)
except:
errormsg("Unable to find credential variable or file.")
return
else:
_id = tempid
try:
_settings["database"] = _id["db"]
_settings["hostname"] = _id["hostname"]
_settings["port"] = _id["port"]
_settings["uid"] = _id["username"]
_settings["pwd"] = _id["password"]
try:
fname = credentials + ".pickle"
with open(fname,'wb') as f:
pickle.dump(_id,f)
except:
errormsg("Failed trying to write Db2 Credentials.")
return
except:
errormsg("Credentials file is missing information. db/hostname/port/username/password required.")
return
else:
errormsg("No Credentials name supplied")
return
cnt = cnt + 1
elif cParms[cnt].upper() == 'USER':
if cnt+1 < len(cParms):
_settings["uid"] = cParms[cnt+1].upper()
cnt = cnt + 1
else:
errormsg("No userid specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'USING':
if cnt+1 < len(cParms):
_settings["pwd"] = cParms[cnt+1]
if (_settings["pwd"] == '?'):
_settings["pwd"] = getpass.getpass("Password [password]: ") or "password"
cnt = cnt + 1
else:
errormsg("No password specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'HOST':
if cnt+1 < len(cParms):
hostport = cParms[cnt+1].upper()
ip, port = split_string(hostport)
if (port == None): _settings["port"] = "50000"
_settings["hostname"] = ip
cnt = cnt + 1
else:
errormsg("No hostname specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'PORT':
if cnt+1 < len(cParms):
_settings["port"] = cParms[cnt+1].upper()
cnt = cnt + 1
else:
errormsg("No port specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'PROMPT':
if (connected_prompt() == False):
print("Connection canceled.")
return
else:
cnt = cnt + 1
elif cParms[cnt].upper() in ('CLOSE','RESET') :
try:
result = ibm_db.close(_hdbc)
_hdbi.close()
except:
pass
success("Connection closed.")
if cParms[cnt].upper() == 'RESET':
_settings["database"] = ''
return
else:
cnt = cnt + 1
_ = db2_doConnect()
###Output
_____no_output_____
###Markdown
Connect to Db2The db2_doConnect routine is called when a connection needs to be established to a Db2 database. The command does not require any parameters since it relies on the settings variable which contains all of the information it needs to connect to a Db2 database.```db2_doConnect()```There are 4 additional variables that are used throughout the routines to stay connected with the Db2 database. These variables are:
- hdbc - The ibm_db connection handle to the database
- hdbi - The ibm_db_dbi connection handle used when pandas needs to run SQL
- connected - A flag that tells the program whether or not we are currently connected to a database
- runtime - Used to tell %sql the length of time (default 1 second) to run a statement when timing it

The only database driver that is used in this program is the IBM DB2 ODBC DRIVER. This driver needs to be loaded on the system that is connecting to Db2. The Jupyter notebook that is built by this system installs the driver for you so you shouldn't have to do anything other than build the container. If the connection is successful, the connected flag is set to True. Any subsequent %sql call will check to see if you are connected and initiate another prompted connection if you do not have a connection to a database.
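As a rough usage sketch (hypothetical values, assuming the cells in this notebook have been run and that a Db2 server is actually reachable at the address given; the database name, userid and password below are placeholders, not real credentials):
```
# Hypothetical connection attempt that fills in the settings dictionary directly
_settings["database"] = "SAMPLE"      # assumed database name
_settings["hostname"] = "localhost"
_settings["port"]     = "50000"
_settings["uid"]      = "DB2INST1"
_settings["pwd"]      = "password"    # placeholder only - never hard-code real passwords

if db2_doConnect():
    print("Connected to", _settings["database"])
else:
    print("Connection failed - check the settings and the Db2 server")
```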
###Code
def db2_doConnect():
global _hdbc, _hdbi, _connected, _runtime
global _settings
if _connected == False:
if len(_settings["database"]) == 0:
return False
dsn = (
"DRIVER={{IBM DB2 ODBC DRIVER}};"
"DATABASE={0};"
"HOSTNAME={1};"
"PORT={2};"
"PROTOCOL=TCPIP;"
"UID={3};"
"PWD={4};{5}").format(_settings["database"],
_settings["hostname"],
_settings["port"],
_settings["uid"],
_settings["pwd"],
_settings["ssl"])
# Get a database handle (hdbc) and a statement handle (hstmt) for subsequent access to DB2
try:
_hdbc = ibm_db.connect(dsn, "", "")
except Exception as err:
db2_error(False,True) # errormsg(str(err))
_connected = False
_settings["database"] = ''
return False
try:
_hdbi = ibm_db_dbi.Connection(_hdbc)
except Exception as err:
db2_error(False,True) # errormsg(str(err))
_connected = False
_settings["database"] = ''
return False
_connected = True
# Save the values for future use
save_settings()
success("Connection successful.")
return True
###Output
_____no_output_____
###Markdown
Load/Save SettingsThere are two routines that load and save settings between Jupyter notebooks. These routines are called without any parameters.```load_settings() save_settings()```There is a global structure called settings which contains the following fields:```_settings = { "maxrows" : 10, "maxgrid" : 5, "runtime" : 1, "display" : "PANDAS", "database" : "", "hostname" : "localhost", "port" : "50000", "protocol" : "TCPIP", "uid" : "DB2INST1", "pwd" : "password"}```The information in the settings structure is used for re-connecting to a database when you start up a Jupyter notebook. When the session is established for the first time, the load_settings() function is called to get the contents of the pickle file (db2connect.pickle, a Jupyter session file) that will be used for the first connection to the database. Whenever a new connection is made, the file is updated with the save_settings() function.
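The two routines are thin wrappers around Python's pickle module. The sketch below (illustration only, written to a separate demo file so the real db2connect.pickle session file is not touched) shows the round trip they perform:
```
import pickle

# Illustration of what save_settings()/load_settings() do with the settings dictionary
demo_settings = {"database": "SAMPLE", "hostname": "localhost", "port": "50000"}

with open("db2connect_demo.pickle", "wb") as f:     # what save_settings() does
    pickle.dump(demo_settings, f)

with open("db2connect_demo.pickle", "rb") as f:     # what load_settings() does
    restored = pickle.load(f)

print(restored["database"], restored["hostname"], restored["port"])
```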
###Code
def load_settings():
# This routine will load the settings from the previous session if they exist
global _settings
fname = "db2connect.pickle"
try:
with open(fname,'rb') as f:
_settings = pickle.load(f)
# Reset runtime to 1 since it would be unexpected to keep the same value between connections
_settings["runtime"] = 1
_settings["maxgrid"] = 5
except:
pass
return
def save_settings():
# This routine will save the current settings if they exist
global _settings
fname = "db2connect.pickle"
try:
with open(fname,'wb') as f:
pickle.dump(_settings,f)
except:
errormsg("Failed trying to write Db2 Configuration Information.")
return
###Output
_____no_output_____
###Markdown
Error and Message FunctionsThere are three types of messages that are thrown by the %sql magic command. The first routine will print out a success message with no special formatting:```success(message)```The second message is used for displaying an error message that is not associated with a SQL error. This type of error message is surrounded with a red box to highlight the problem. Note that the success message has code that has been commented out that could also show a successful return code with a green box.```errormsg(message)```The final error message is based on an error occurring in the SQL code that was executed. This code will parse the message returned from the ibm_db interface to return only the error message portion (and not all of the wrapper code from the driver).```db2_error(quiet,connect=False)```The quiet flag is passed to the db2_error routine so that messages can be suppressed if the user wishes to ignore them with the -q flag. A good example of this is dropping a table that does not exist. We know that an error will be thrown so we can ignore it. The information that the db2_error routine gets is from the stmt_errormsg() function from within the ibm_db driver. The db2_error function should only be called after a SQL failure, otherwise there will be no diagnostic information returned from stmt_errormsg().If the connect flag is True, the routine will get the SQLSTATE and SQLCODE from the connection error message rather than a statement error message.
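The SQLSTATE and SQLCODE values are pulled out of the message text by simple string scanning. A small illustration of that parsing, using a made-up error string (not actual driver output), is shown below:
```
# Made-up error text in the general form that db2_error() scans
errmsg = 'SQL0204N  "DB2INST1.FOO" is an undefined name.  SQLSTATE=42704 SQLCODE=-204'

msg_start = errmsg.find("SQLSTATE=")
msg_end = errmsg.find(" ", msg_start)
if msg_end == -1: msg_end = len(errmsg)
sqlstate = errmsg[msg_start+9:msg_end]

msg_start = errmsg.find("SQLCODE=")
msg_end = errmsg.find(" ", msg_start)
if msg_end == -1: msg_end = len(errmsg)
sqlcode = int(errmsg[msg_start+8:msg_end])

print(sqlstate, sqlcode)     # 42704 -204
```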
###Code
def db2_error(quiet,connect=False):
global sqlerror, sqlcode, sqlstate, _environment
try:
if (connect == False):
errmsg = ibm_db.stmt_errormsg().replace('\r',' ')
errmsg = errmsg[errmsg.rfind("]")+1:].strip()
else:
errmsg = ibm_db.conn_errormsg().replace('\r',' ')
errmsg = errmsg[errmsg.rfind("]")+1:].strip()
sqlerror = errmsg
msg_start = errmsg.find("SQLSTATE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlstate = errmsg[msg_start+9:msg_end]
else:
sqlstate = "0"
msg_start = errmsg.find("SQLCODE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlcode = errmsg[msg_start+8:msg_end]
try:
sqlcode = int(sqlcode)
except:
pass
else:
sqlcode = 0
except:
errmsg = "Unknown error."
sqlcode = -99999
sqlstate = "-99999"
sqlerror = errmsg
return
msg_start = errmsg.find("SQLSTATE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlstate = errmsg[msg_start+9:msg_end]
else:
sqlstate = "0"
msg_start = errmsg.find("SQLCODE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlcode = errmsg[msg_start+8:msg_end]
try:
sqlcode = int(sqlcode)
except:
pass
else:
sqlcode = 0
if quiet == True: return
if (errmsg == ""): return
html = '<p><p style="border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html+errmsg+"</p>"))
else:
print(errmsg)
# Print out an error message
def errormsg(message):
global _environment
if (message != ""):
html = '<p><p style="border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html + message + "</p>"))
else:
print(message)
def success(message):
if (message != ""):
print(message)
return
def debug(message,error=False):
global _environment
if (_environment["jupyter"] == True):
spacer = "<br>" + " "
else:
spacer = "\n "
if (message != ""):
lines = message.split('\n')
msg = ""
indent = 0
for line in lines:
delta = line.count("(") - line.count(")")
if (msg == ""):
msg = line
indent = indent + delta
else:
if (delta < 0): indent = indent + delta
msg = msg + spacer * (indent*2) + line
if (delta > 0): indent = indent + delta
if (indent < 0): indent = 0
if (error == True):
html = '<p><pre style="font-family: monospace; border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
else:
html = '<p><pre style="font-family: monospace; border:2px; border-style:solid; border-color:#008000; background-color:#e6ffe6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html + msg + "</pre></p>"))
else:
print(msg)
return
###Output
_____no_output_____
###Markdown
Macro ProcessorA macro is used to generate SQL to be executed by overriding or creating a new keyword. For instance, the base `%sql` command does not understand the `LIST TABLES` command which is usually used in conjunction with the `CLP` processor. Rather than specifically code this in the base `db2.ipynb` file, we can create a macro that can execute this code for us.There are four routines that deal with macros:
- checkMacro is used to find the macro calls in a string. Any macro that is found is passed to runMacro for processing.
- runMacro will evaluate the macro and return the resulting string to the parser.
- subvars is used to track the variables used as part of a macro call.
- setMacro is used to catalog a macro.

Set MacroThis code will catalog a macro call.
###Code
def setMacro(inSQL,parms):
global _macros
names = parms.split()
if (len(names) < 2):
errormsg("No command name supplied.")
return None
macroName = names[1].upper()
_macros[macroName] = inSQL
return
###Output
_____no_output_____
###Markdown
Check MacroThis code will check to see if there is a macro command in the SQL. It will take the SQL that is supplied and strip out three values: the first and second keywords, and the remainder of the parameters.For instance, consider the following statement:```CREATE DATABASE GEORGE options....```The name of the macro that we want to run is called `CREATE`. We know that there is a SQL command called `CREATE` but this code will call the macro first to see if it needs to run any special code. For instance, `CREATE DATABASE` is not part of the `db2.ipynb` syntax, but we can add it in by using a macro.The check macro logic will strip out the subcommand (`DATABASE`) and place the remainder of the string after `DATABASE` in options.
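As a quick illustration of the whole macro path (hypothetical direct calls with a made-up macro named LIST, assuming the setMacro, checkMacro, runMacro, parseArgs and subvars cells in this section have been run):
```
# Catalog a macro whose body uses the third token ({2}) as a row limit.
# Note: the first word of the second argument is ignored; the second word becomes the macro name.
setMacro("SELECT * FROM EMPLOYEE FETCH FIRST {2} ROWS ONLY", "macro LIST")

# checkMacro() recognizes the LIST keyword and expands the macro body
print(checkMacro("LIST EMPLOYEES 5"))
# SELECT * FROM EMPLOYEE FETCH FIRST 5 ROWS ONLY
```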
###Code
def checkMacro(in_sql):
global _macros
if (len(in_sql) == 0): return(in_sql) # Nothing to do
tokens = parseArgs(in_sql,None) # Take the string and reduce into tokens
macro_name = tokens[0].upper() # Uppercase the name of the token
if (macro_name not in _macros):
return(in_sql) # No macro by this name so just return the string
result = runMacro(_macros[macro_name],in_sql,tokens) # Execute the macro using the tokens we found
return(result) # Runmacro will either return the original SQL or the new one
###Output
_____no_output_____
###Markdown
Split AssignmentThis routine will return the name of a variable and its value when the format is x=y. If y is enclosed in quotes, the quotes are removed.
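For example (assuming the cell below has been run; the output comments show what the routine returns):
```
print(splitassign("name='George'"))   # ('name', 'George')    - quotes stripped from the value
print(splitassign("count=5"))         # ('count', '5')
print(splitassign("standalone"))      # ('null', 'standalone') - no assignment found
```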
###Code
def splitassign(arg):
var_name = "null"
var_value = "null"
arg = arg.strip()
eq = arg.find("=")
if (eq != -1):
var_name = arg[:eq].strip()
temp_value = arg[eq+1:].strip()
if (temp_value != ""):
ch = temp_value[0]
if (ch in ["'",'"']):
if (temp_value[-1:] == ch):
var_value = temp_value[1:-1]
else:
var_value = temp_value
else:
var_value = temp_value
else:
var_value = arg
return var_name, var_value
###Output
_____no_output_____
###Markdown
Parse Args The commands that are used in the macros need to be parsed into their separate tokens. The tokens are separated by blanks and strings that are enclosed in quotes are kept together.
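For example (assuming the cell below has been run; passing None for the variable dictionary skips variable substitution):
```
print(parseArgs('CREATE DATABASE "MY DB" now', None))
# ['CREATE', 'DATABASE', '"MY DB"', 'now']   - the quoted string stays as one token
```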
###Code
def parseArgs(argin,_vars):
quoteChar = ""
inQuote = False
inArg = True
args = []
arg = ''
for ch in argin.lstrip():
if (inQuote == True):
if (ch == quoteChar):
inQuote = False
arg = arg + ch #z
else:
arg = arg + ch
elif (ch == "\"" or ch == "\'"): # Do we have a quote
quoteChar = ch
arg = arg + ch #z
inQuote = True
elif (ch == " "):
if (arg != ""):
arg = subvars(arg,_vars)
args.append(arg)
else:
args.append("null")
arg = ""
else:
arg = arg + ch
if (arg != ""):
arg = subvars(arg,_vars)
args.append(arg)
return(args)
###Output
_____no_output_____
###Markdown
Run MacroThis code will execute the body of the macro and return the results for that macro call.
###Code
def runMacro(script,in_sql,tokens):
result = ""
runIT = True
code = script.split("\n")
level = 0
runlevel = [True,False,False,False,False,False,False,False,False,False]
ifcount = 0
_vars = {}
for i in range(0,len(tokens)):
vstr = str(i)
_vars[vstr] = tokens[i]
if (len(tokens) == 0):
_vars["argc"] = "0"
else:
_vars["argc"] = str(len(tokens)-1)
for line in code:
line = line.strip()
if (line == "" or line == "\n"): continue
if (line[0] == "#"): continue # A comment line starts with a # in the first position of the line
args = parseArgs(line,_vars) # Get all of the arguments
if (args[0] == "if"):
ifcount = ifcount + 1
if (runlevel[level] == False): # You can't execute this statement
continue
level = level + 1
if (len(args) < 4):
print("Macro: Incorrect number of arguments for the if clause.")
                return(in_sql)
arg1 = args[1]
arg2 = args[3]
if (len(arg2) > 2):
ch1 = arg2[0]
ch2 = arg2[-1:]
if (ch1 in ['"',"'"] and ch1 == ch2):
arg2 = arg2[1:-1].strip()
op = args[2]
if (op in ["=","=="]):
if (arg1 == arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<=","=<"]):
if (arg1 <= arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in [">=","=>"]):
if (arg1 >= arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<>","!="]):
if (arg1 != arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<"]):
if (arg1 < arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in [">"]):
if (arg1 > arg2):
runlevel[level] = True
else:
runlevel[level] = False
else:
print("Macro: Unknown comparison operator in the if statement:" + op)
continue
elif (args[0] in ["exit","echo"] and runlevel[level] == True):
msg = ""
for msgline in args[1:]:
if (msg == ""):
msg = subvars(msgline,_vars)
else:
msg = msg + " " + subvars(msgline,_vars)
if (msg != ""):
if (args[0] == "echo"):
debug(msg,error=False)
else:
debug(msg,error=True)
if (args[0] == "exit"): return ''
elif (args[0] == "pass" and runlevel[level] == True):
pass
elif (args[0] == "var" and runlevel[level] == True):
value = ""
for val in args[2:]:
if (value == ""):
value = subvars(val,_vars)
else:
value = value + " " + subvars(val,_vars)
            value = value.strip()
_vars[args[1]] = value
elif (args[0] == 'else'):
if (ifcount == level):
runlevel[level] = not runlevel[level]
elif (args[0] == 'return' and runlevel[level] == True):
return(result)
elif (args[0] == "endif"):
ifcount = ifcount - 1
if (ifcount < level):
level = level - 1
if (level < 0):
print("Macro: Unmatched if/endif pairs.")
return ''
else:
if (runlevel[level] == True):
if (result == ""):
result = subvars(line,_vars)
else:
result = result + "\n" + subvars(line,_vars)
return(result)
###Output
_____no_output_____
###Markdown
Substitute VarsThis routine is used by the runMacro program to track variables that are used within Macros. These are kept separate from the rest of the code.
###Code
def subvars(script,_vars):
if (_vars == None): return script
remainder = script
result = ""
done = False
while done == False:
bv = remainder.find("{")
if (bv == -1):
done = True
continue
ev = remainder.find("}")
if (ev == -1):
done = True
continue
result = result + remainder[:bv]
vvar = remainder[bv+1:ev]
remainder = remainder[ev+1:]
upper = False
allvars = False
if (vvar[0] == "^"):
upper = True
vvar = vvar[1:]
elif (vvar[0] == "*"):
vvar = vvar[1:]
allvars = True
else:
pass
if (vvar in _vars):
if (upper == True):
items = _vars[vvar].upper()
elif (allvars == True):
try:
iVar = int(vvar)
except:
return(script)
items = ""
sVar = str(iVar)
while sVar in _vars:
if (items == ""):
items = _vars[sVar]
else:
items = items + " " + _vars[sVar]
iVar = iVar + 1
sVar = str(iVar)
else:
items = _vars[vvar]
else:
if (allvars == True):
items = ""
else:
items = "null"
result = result + items
if (remainder != ""):
result = result + remainder
return(result)
###Output
_____no_output_____
###Markdown
SQL TimerThe calling format of this routine is:```count = sqlTimer(hdbc, runtime, inSQL)```This code runs the SQL string multiple times for one second (by default). The accuracy of the clock is not that great when you are running just one statement, so instead this routine will run the code multiple times for a second to give you an execution count. If you need to run the code for more than one second, the runtime value needs to be set to the number of seconds you want the code to run.The return result is always the number of times that the code executed. Note that the program will skip reading the data if it is a SELECT statement, so it doesn't include fetch time for the answer set.
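A hypothetical timing run (assumes an active connection handle _hdbc and that the EMPLOYEE sample table exists; a return of -1 indicates the statement failed):
```
count = sqlTimer(_hdbc, 1, "SELECT COUNT(*) FROM EMPLOYEE")
if count > 0:
    print("Executed", count, "times in roughly 1 second")
```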
###Code
def sqlTimer(hdbc, runtime, inSQL):
count = 0
t_end = time.time() + runtime
while time.time() < t_end:
try:
stmt = ibm_db.exec_immediate(hdbc,inSQL)
if (stmt == False):
db2_error(flag(["-q","-quiet"]))
return(-1)
ibm_db.free_result(stmt)
except Exception as err:
db2_error(False)
return(-1)
count = count + 1
return(count)
###Output
_____no_output_____
###Markdown
Split ArgsThis routine takes as an argument a string and then splits the arguments according to the following logic:* If the string starts with a `(` character, it will check the last character in the string and see if it is a `)` and then remove those characters* Every parameter is separated by a comma `,` and commas within quotes are ignored* Each parameter returned will have three values returned - one for the value itself, an indicator which will be either True if it was quoted, or False if not, and True or False if it is numeric.Example:``` "abcdef",abcdef,456,"856"```Four values would be returned:```[abcdef,True,False],[abcdef,False,False],[456,False,True],[856,True,False]```Any quoted string will be False for numeric. The way that the parameters are handled is up to the calling program. However, in the case of Db2, the quoted strings must be in single quotes so any quoted parameter using the double quotes `"` must be wrapped with single quotes. There is always a possibility that a string contains single quotes (i.e. O'Connor) so any substituted text should use `''` so that Db2 can properly interpret the string. This routine does not adjust the strings with quotes, and depends on the variable substitution routine to do that.
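Running the example from the description through the routine (assuming the cell below has been run) gives:
```
print(splitargs('("abcdef",abcdef,456,"856")'))
# [['abcdef', True, False], ['abcdef', False, False], [456, False, True], ['856', True, False]]
# Note that the unquoted 456 comes back as the integer 456.
```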
###Code
def splitargs(arguments):
import types
    # Strip the string and remove the ( and ) characters if they are at the beginning and end of the string
results = []
step1 = arguments.strip()
if (len(step1) == 0): return(results) # Not much to do here - no args found
if (step1[0] == '('):
if (step1[-1:] == ')'):
step2 = step1[1:-1]
step2 = step2.strip()
else:
step2 = step1
else:
step2 = step1
# Now we have a string without brackets. Start scanning for commas
quoteCH = ""
pos = 0
arg = ""
args = []
while pos < len(step2):
ch = step2[pos]
if (quoteCH == ""): # Are we in a quote?
if (ch in ('"',"'")): # Check to see if we are starting a quote
quoteCH = ch
arg = arg + ch
pos += 1
elif (ch == ","): # Are we at the end of a parameter?
arg = arg.strip()
args.append(arg)
arg = ""
inarg = False
pos += 1
else: # Continue collecting the string
arg = arg + ch
pos += 1
else:
if (ch == quoteCH): # Are we at the end of a quote?
arg = arg + ch # Add the quote to the string
pos += 1 # Increment past the quote
quoteCH = "" # Stop quote checking (maybe!)
else:
pos += 1
arg = arg + ch
if (quoteCH != ""): # So we didn't end our string
arg = arg.strip()
args.append(arg)
elif (arg != ""): # Something left over as an argument
arg = arg.strip()
args.append(arg)
else:
pass
results = []
for arg in args:
result = []
if (len(arg) > 0):
if (arg[0] in ('"',"'")):
value = arg[1:-1]
isString = True
isNumber = False
else:
isString = False
isNumber = False
try:
value = eval(arg)
if (type(value) == int):
isNumber = True
elif (isinstance(value,float) == True):
isNumber = True
else:
value = arg
except:
value = arg
else:
value = ""
isString = False
isNumber = False
result = [value,isString,isNumber]
results.append(result)
return results
###Output
_____no_output_____
###Markdown
SQL ParserThe calling format of this routine is:```sql_cmd, encoded_sql = sqlParser(sql_input, local_ns)```This code will look at the SQL string that has been passed to it and parse it into two values:
- sql_cmd: the first keyword found in the statement, folded to uppercase (so this may not be the actual SQL command)
- encoded_sql: the SQL with any :var parameter references replaced by the contents of the corresponding Python variables (strings are quoted, numbers are converted to text, and lists are expanded into comma-separated values)
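A small illustration with hypothetical values (assuming the sqlParser, getContents and addquotes cells in this section have been run):
```
cmd, encoded = sqlParser("SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO=:empno", {"empno": "000010"})
print(cmd)       # SELECT
print(encoded)   # SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO='000010'
```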
###Code
def sqlParser(sqlin,local_ns):
sql_cmd = ""
encoded_sql = sqlin
firstCommand = "(?:^\s*)([a-zA-Z]+)(?:\s+.*|$)"
findFirst = re.match(firstCommand,sqlin)
if (findFirst == None): # We did not find a match so we just return the empty string
return sql_cmd, encoded_sql
cmd = findFirst.group(1)
sql_cmd = cmd.upper()
#
# Scan the input string looking for variables in the format :var. If no : is found just return.
# Var must be alpha+number+_ to be valid
#
if (':' not in sqlin): # A quick check to see if parameters are in here, but not fool-proof!
return sql_cmd, encoded_sql
inVar = False
inQuote = ""
varName = ""
encoded_sql = ""
STRING = 0
NUMBER = 1
LIST = 2
RAW = 3
for ch in sqlin:
if (inVar == True): # We are collecting the name of a variable
if (ch.upper() in "@_ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789[]"):
varName = varName + ch
continue
else:
if (varName == ""):
                    encoded_sql = encoded_sql + ":"
elif (varName[0] in ('[',']')):
encoded_sql = encoded_sql + ":" + varName
else:
if (ch == '.'): # If the variable name is stopped by a period, assume no quotes are used
flag_quotes = False
else:
flag_quotes = True
varValue, varType = getContents(varName,flag_quotes,local_ns)
if (varValue == None):
encoded_sql = encoded_sql + ":" + varName
else:
if (varType == STRING):
encoded_sql = encoded_sql + varValue
elif (varType == NUMBER):
encoded_sql = encoded_sql + str(varValue)
elif (varType == RAW):
encoded_sql = encoded_sql + varValue
elif (varType == LIST):
start = True
for v in varValue:
if (start == False):
encoded_sql = encoded_sql + ","
if (isinstance(v,int) == True): # Integer value
encoded_sql = encoded_sql + str(v)
elif (isinstance(v,float) == True):
encoded_sql = encoded_sql + str(v)
else:
flag_quotes = True
try:
if (v.find('0x') == 0): # Just guessing this is a hex value at beginning
encoded_sql = encoded_sql + v
else:
encoded_sql = encoded_sql + addquotes(v,flag_quotes) # String
except:
encoded_sql = encoded_sql + addquotes(str(v),flag_quotes)
start = False
encoded_sql = encoded_sql + ch
varName = ""
inVar = False
elif (inQuote != ""):
encoded_sql = encoded_sql + ch
if (ch == inQuote): inQuote = ""
elif (ch in ("'",'"')):
encoded_sql = encoded_sql + ch
inQuote = ch
elif (ch == ":"): # This might be a variable
varName = ""
inVar = True
else:
encoded_sql = encoded_sql + ch
if (inVar == True):
varValue, varType = getContents(varName,True,local_ns) # We assume the end of a line is quoted
if (varValue == None):
encoded_sql = encoded_sql + ":" + varName
else:
if (varType == STRING):
encoded_sql = encoded_sql + varValue
elif (varType == NUMBER):
encoded_sql = encoded_sql + str(varValue)
elif (varType == LIST):
flag_quotes = True
start = True
for v in varValue:
if (start == False):
encoded_sql = encoded_sql + ","
if (isinstance(v,int) == True): # Integer value
encoded_sql = encoded_sql + str(v)
elif (isinstance(v,float) == True):
encoded_sql = encoded_sql + str(v)
else:
try:
if (v.find('0x') == 0): # Just guessing this is a hex value
encoded_sql = encoded_sql + v
else:
encoded_sql = encoded_sql + addquotes(v,flag_quotes) # String
except:
encoded_sql = encoded_sql + addquotes(str(v),flag_quotes)
start = False
return sql_cmd, encoded_sql
###Output
_____no_output_____
###Markdown
Variable Contents FunctionThe calling format of this routine is:```value, var_type = getContents(varName, flag_quotes, local_ns)```This code will take the name of a variable as input and return the contents of that variable along with an indicator of its type. If the variable is not found then the program will return None, which is the equivalent of empty or null. Note that this function looks at the variable pool for Python, so it is possible that the wrong version of a variable is returned if it is used in different functions. For this reason, any variables used in SQL statements should use a unique naming convention if possible.The other thing that this function does is replace single quotes with two quotes. The reason for doing this is that Db2 will convert two single quotes into one quote when dealing with strings. This avoids problems when dealing with text that contains multiple quotes within the string. Note that this substitution is done only for single quote characters since the double quote character is used by Db2 for naming columns that are case sensitive or contain special characters.If the flag_quotes value is True, the field will have quotes around it. The local_ns argument contains the variables that are currently registered in Python.
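For example (assuming the getContents and addquotes cells have been run; the second element of the returned tuple is the internal type code, 0 for a string and 1 for a number):
```
empno = "000010"
salary = 50000
print(getContents("empno", True, locals()))    # ("'000010'", 0)  - string, quoted for Db2
print(getContents("salary", True, locals()))   # (50000, 1)       - number, returned as-is
```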
###Code
def getContents(varName,flag_quotes,local_ns):
#
# Get the contents of the variable name that is passed to the routine. Only simple
# variables are checked, i.e. arrays and lists are not parsed
#
STRING = 0
NUMBER = 1
LIST = 2
RAW = 3
DICT = 4
try:
value = eval(varName,None,local_ns) # globals()[varName] # eval(varName)
except:
return(None,STRING)
if (isinstance(value,dict) == True): # Check to see if this is JSON dictionary
return(addquotes(value,flag_quotes),STRING)
elif(isinstance(value,list) == True): # List - tricky
return(value,LIST)
elif (isinstance(value,int) == True): # Integer value
return(value,NUMBER)
elif (isinstance(value,float) == True): # Float value
return(value,NUMBER)
else:
try:
# The pattern needs to be in the first position (0 in Python terms)
if (value.find('0x') == 0): # Just guessing this is a hex value
return(value,RAW)
else:
return(addquotes(value,flag_quotes),STRING) # String
except:
return(addquotes(str(value),flag_quotes),RAW)
###Output
_____no_output_____
###Markdown
Add QuotesQuotes are a challenge when dealing with dictionaries and Db2. Db2 wants strings delimited with single quotes, while dictionaries use double quotes. That wouldn't be a problem except that embedded single quotes within these dictionaries will cause things to fail. This routine attempts to double-quote the single quotes within the dictionary.
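For example (assuming the cell below has been run):
```
print(addquotes("O'Connor", True))             # 'O''Connor'
print(addquotes({"name": "O'Connor"}, True))   # '{"name": "O''Connor"}'
print(addquotes("O'Connor", False))            # O'Connor   - returned without quoting
```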
###Code
def addquotes(inString,flag_quotes):
if (isinstance(inString,dict) == True): # Check to see if this is JSON dictionary
serialized = json.dumps(inString)
else:
serialized = inString
# Replace single quotes with '' (two quotes) and wrap everything in single quotes
if (flag_quotes == False):
return(serialized)
else:
return("'"+serialized.replace("'","''")+"'") # Convert single quotes to two single quotes
###Output
_____no_output_____
###Markdown
Create the SAMPLE Database TablesThe calling format of this routine is:```db2_create_sample(quiet)```There are a lot of examples that depend on the data within the SAMPLE database. If you are running these examples and the connection is not to the SAMPLE database, then this code will create the two (EMPLOYEE, DEPARTMENT) tables that are used by most examples. If the function finds that these tables already exist, then nothing is done. If the tables are missing then they will be created with the same data as in the SAMPLE database.The quiet flag tells the program not to print any messages when the creation of the tables is complete.
###Code
def db2_create_sample(quiet):
create_department = """
BEGIN
DECLARE FOUND INTEGER;
SET FOUND = (SELECT COUNT(*) FROM SYSIBM.SYSTABLES WHERE NAME='DEPARTMENT' AND CREATOR=CURRENT USER);
IF FOUND = 0 THEN
EXECUTE IMMEDIATE('CREATE TABLE DEPARTMENT(DEPTNO CHAR(3) NOT NULL, DEPTNAME VARCHAR(36) NOT NULL,
MGRNO CHAR(6),ADMRDEPT CHAR(3) NOT NULL)');
EXECUTE IMMEDIATE('INSERT INTO DEPARTMENT VALUES
(''A00'',''SPIFFY COMPUTER SERVICE DIV.'',''000010'',''A00''),
(''B01'',''PLANNING'',''000020'',''A00''),
(''C01'',''INFORMATION CENTER'',''000030'',''A00''),
(''D01'',''DEVELOPMENT CENTER'',NULL,''A00''),
(''D11'',''MANUFACTURING SYSTEMS'',''000060'',''D01''),
(''D21'',''ADMINISTRATION SYSTEMS'',''000070'',''D01''),
(''E01'',''SUPPORT SERVICES'',''000050'',''A00''),
(''E11'',''OPERATIONS'',''000090'',''E01''),
(''E21'',''SOFTWARE SUPPORT'',''000100'',''E01''),
(''F22'',''BRANCH OFFICE F2'',NULL,''E01''),
(''G22'',''BRANCH OFFICE G2'',NULL,''E01''),
(''H22'',''BRANCH OFFICE H2'',NULL,''E01''),
(''I22'',''BRANCH OFFICE I2'',NULL,''E01''),
(''J22'',''BRANCH OFFICE J2'',NULL,''E01'')');
END IF;
END"""
%sql -d -q {create_department}
create_employee = """
BEGIN
DECLARE FOUND INTEGER;
SET FOUND = (SELECT COUNT(*) FROM SYSIBM.SYSTABLES WHERE NAME='EMPLOYEE' AND CREATOR=CURRENT USER);
IF FOUND = 0 THEN
EXECUTE IMMEDIATE('CREATE TABLE EMPLOYEE(
EMPNO CHAR(6) NOT NULL,
FIRSTNME VARCHAR(12) NOT NULL,
MIDINIT CHAR(1),
LASTNAME VARCHAR(15) NOT NULL,
WORKDEPT CHAR(3),
PHONENO CHAR(4),
HIREDATE DATE,
JOB CHAR(8),
EDLEVEL SMALLINT NOT NULL,
SEX CHAR(1),
BIRTHDATE DATE,
SALARY DECIMAL(9,2),
BONUS DECIMAL(9,2),
COMM DECIMAL(9,2)
)');
EXECUTE IMMEDIATE('INSERT INTO EMPLOYEE VALUES
(''000010'',''CHRISTINE'',''I'',''HAAS'' ,''A00'',''3978'',''1995-01-01'',''PRES '',18,''F'',''1963-08-24'',152750.00,1000.00,4220.00),
(''000020'',''MICHAEL'' ,''L'',''THOMPSON'' ,''B01'',''3476'',''2003-10-10'',''MANAGER '',18,''M'',''1978-02-02'',94250.00,800.00,3300.00),
(''000030'',''SALLY'' ,''A'',''KWAN'' ,''C01'',''4738'',''2005-04-05'',''MANAGER '',20,''F'',''1971-05-11'',98250.00,800.00,3060.00),
(''000050'',''JOHN'' ,''B'',''GEYER'' ,''E01'',''6789'',''1979-08-17'',''MANAGER '',16,''M'',''1955-09-15'',80175.00,800.00,3214.00),
(''000060'',''IRVING'' ,''F'',''STERN'' ,''D11'',''6423'',''2003-09-14'',''MANAGER '',16,''M'',''1975-07-07'',72250.00,500.00,2580.00),
(''000070'',''EVA'' ,''D'',''PULASKI'' ,''D21'',''7831'',''2005-09-30'',''MANAGER '',16,''F'',''2003-05-26'',96170.00,700.00,2893.00),
(''000090'',''EILEEN'' ,''W'',''HENDERSON'' ,''E11'',''5498'',''2000-08-15'',''MANAGER '',16,''F'',''1971-05-15'',89750.00,600.00,2380.00),
(''000100'',''THEODORE'' ,''Q'',''SPENSER'' ,''E21'',''0972'',''2000-06-19'',''MANAGER '',14,''M'',''1980-12-18'',86150.00,500.00,2092.00),
(''000110'',''VINCENZO'' ,''G'',''LUCCHESSI'' ,''A00'',''3490'',''1988-05-16'',''SALESREP'',19,''M'',''1959-11-05'',66500.00,900.00,3720.00),
(''000120'',''SEAN'' ,'' '',''O`CONNELL'' ,''A00'',''2167'',''1993-12-05'',''CLERK '',14,''M'',''1972-10-18'',49250.00,600.00,2340.00),
(''000130'',''DELORES'' ,''M'',''QUINTANA'' ,''C01'',''4578'',''2001-07-28'',''ANALYST '',16,''F'',''1955-09-15'',73800.00,500.00,1904.00),
(''000140'',''HEATHER'' ,''A'',''NICHOLLS'' ,''C01'',''1793'',''2006-12-15'',''ANALYST '',18,''F'',''1976-01-19'',68420.00,600.00,2274.00),
(''000150'',''BRUCE'' ,'' '',''ADAMSON'' ,''D11'',''4510'',''2002-02-12'',''DESIGNER'',16,''M'',''1977-05-17'',55280.00,500.00,2022.00),
(''000160'',''ELIZABETH'',''R'',''PIANKA'' ,''D11'',''3782'',''2006-10-11'',''DESIGNER'',17,''F'',''1980-04-12'',62250.00,400.00,1780.00),
(''000170'',''MASATOSHI'',''J'',''YOSHIMURA'' ,''D11'',''2890'',''1999-09-15'',''DESIGNER'',16,''M'',''1981-01-05'',44680.00,500.00,1974.00),
(''000180'',''MARILYN'' ,''S'',''SCOUTTEN'' ,''D11'',''1682'',''2003-07-07'',''DESIGNER'',17,''F'',''1979-02-21'',51340.00,500.00,1707.00),
(''000190'',''JAMES'' ,''H'',''WALKER'' ,''D11'',''2986'',''2004-07-26'',''DESIGNER'',16,''M'',''1982-06-25'',50450.00,400.00,1636.00),
(''000200'',''DAVID'' ,'' '',''BROWN'' ,''D11'',''4501'',''2002-03-03'',''DESIGNER'',16,''M'',''1971-05-29'',57740.00,600.00,2217.00),
(''000210'',''WILLIAM'' ,''T'',''JONES'' ,''D11'',''0942'',''1998-04-11'',''DESIGNER'',17,''M'',''2003-02-23'',68270.00,400.00,1462.00),
(''000220'',''JENNIFER'' ,''K'',''LUTZ'' ,''D11'',''0672'',''1998-08-29'',''DESIGNER'',18,''F'',''1978-03-19'',49840.00,600.00,2387.00),
(''000230'',''JAMES'' ,''J'',''JEFFERSON'' ,''D21'',''2094'',''1996-11-21'',''CLERK '',14,''M'',''1980-05-30'',42180.00,400.00,1774.00),
(''000240'',''SALVATORE'',''M'',''MARINO'' ,''D21'',''3780'',''2004-12-05'',''CLERK '',17,''M'',''2002-03-31'',48760.00,600.00,2301.00),
(''000250'',''DANIEL'' ,''S'',''SMITH'' ,''D21'',''0961'',''1999-10-30'',''CLERK '',15,''M'',''1969-11-12'',49180.00,400.00,1534.00),
(''000260'',''SYBIL'' ,''P'',''JOHNSON'' ,''D21'',''8953'',''2005-09-11'',''CLERK '',16,''F'',''1976-10-05'',47250.00,300.00,1380.00),
(''000270'',''MARIA'' ,''L'',''PEREZ'' ,''D21'',''9001'',''2006-09-30'',''CLERK '',15,''F'',''2003-05-26'',37380.00,500.00,2190.00),
(''000280'',''ETHEL'' ,''R'',''SCHNEIDER'' ,''E11'',''8997'',''1997-03-24'',''OPERATOR'',17,''F'',''1976-03-28'',36250.00,500.00,2100.00),
(''000290'',''JOHN'' ,''R'',''PARKER'' ,''E11'',''4502'',''2006-05-30'',''OPERATOR'',12,''M'',''1985-07-09'',35340.00,300.00,1227.00),
(''000300'',''PHILIP'' ,''X'',''SMITH'' ,''E11'',''2095'',''2002-06-19'',''OPERATOR'',14,''M'',''1976-10-27'',37750.00,400.00,1420.00),
(''000310'',''MAUDE'' ,''F'',''SETRIGHT'' ,''E11'',''3332'',''1994-09-12'',''OPERATOR'',12,''F'',''1961-04-21'',35900.00,300.00,1272.00),
(''000320'',''RAMLAL'' ,''V'',''MEHTA'' ,''E21'',''9990'',''1995-07-07'',''FIELDREP'',16,''M'',''1962-08-11'',39950.00,400.00,1596.00),
(''000330'',''WING'' ,'' '',''LEE'' ,''E21'',''2103'',''2006-02-23'',''FIELDREP'',14,''M'',''1971-07-18'',45370.00,500.00,2030.00),
(''000340'',''JASON'' ,''R'',''GOUNOT'' ,''E21'',''5698'',''1977-05-05'',''FIELDREP'',16,''M'',''1956-05-17'',43840.00,500.00,1907.00),
(''200010'',''DIAN'' ,''J'',''HEMMINGER'' ,''A00'',''3978'',''1995-01-01'',''SALESREP'',18,''F'',''1973-08-14'',46500.00,1000.00,4220.00),
(''200120'',''GREG'' ,'' '',''ORLANDO'' ,''A00'',''2167'',''2002-05-05'',''CLERK '',14,''M'',''1972-10-18'',39250.00,600.00,2340.00),
(''200140'',''KIM'' ,''N'',''NATZ'' ,''C01'',''1793'',''2006-12-15'',''ANALYST '',18,''F'',''1976-01-19'',68420.00,600.00,2274.00),
(''200170'',''KIYOSHI'' ,'' '',''YAMAMOTO'' ,''D11'',''2890'',''2005-09-15'',''DESIGNER'',16,''M'',''1981-01-05'',64680.00,500.00,1974.00),
(''200220'',''REBA'' ,''K'',''JOHN'' ,''D11'',''0672'',''2005-08-29'',''DESIGNER'',18,''F'',''1978-03-19'',69840.00,600.00,2387.00),
(''200240'',''ROBERT'' ,''M'',''MONTEVERDE'',''D21'',''3780'',''2004-12-05'',''CLERK '',17,''M'',''1984-03-31'',37760.00,600.00,2301.00),
(''200280'',''EILEEN'' ,''R'',''SCHWARTZ'' ,''E11'',''8997'',''1997-03-24'',''OPERATOR'',17,''F'',''1966-03-28'',46250.00,500.00,2100.00),
(''200310'',''MICHELLE'' ,''F'',''SPRINGER'' ,''E11'',''3332'',''1994-09-12'',''OPERATOR'',12,''F'',''1961-04-21'',35900.00,300.00,1272.00),
(''200330'',''HELENA'' ,'' '',''WONG'' ,''E21'',''2103'',''2006-02-23'',''FIELDREP'',14,''F'',''1971-07-18'',35370.00,500.00,2030.00),
(''200340'',''ROY'' ,''R'',''ALONZO'' ,''E21'',''5698'',''1997-07-05'',''FIELDREP'',16,''M'',''1956-05-17'',31840.00,500.00,1907.00)');
END IF;
END"""
%sql -d -q {create_employee}
if (quiet == False): success("Sample tables [EMPLOYEE, DEPARTMENT] created.")
###Output
_____no_output_____
###Markdown
Check optionThis function will return the original string with the option removed, and a flag of true or false indicating whether the option was found.```args, flag = checkOption(option_string, option, false_value, true_value)```Options are specified with a -x where x is the character that we are searching for. It may actually be more than one character long like -pb/-pi/etc... The false and true values are optional. By default these are the boolean values of T/F, but for some options they could be a character string like ';' versus '@' for delimiters.
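For example (assuming the cell below has been run):
```
print(checkOption("-q SELECT * FROM EMPLOYEE", "-q"))
# ('SELECT * FROM EMPLOYEE', True)    - option found and removed
print(checkOption("SELECT * FROM EMPLOYEE", "-d", ";", "@"))
# ('SELECT * FROM EMPLOYEE', ';')     - option absent, so the false value (the ';' delimiter) is returned
```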
###Code
def checkOption(args_in, option, vFalse=False, vTrue=True):
args_out = args_in.strip()
found = vFalse
if (args_out != ""):
if (args_out.find(option) >= 0):
args_out = args_out.replace(option," ")
args_out = args_out.strip()
found = vTrue
return args_out, found
###Output
_____no_output_____
###Markdown
Plot DataThis function will plot the data that is returned from the answer set. The type of plot is determined by the flag that was supplied on the %sql command: -pb/-bar for a bar chart, -pp/-pie for a pie chart, and -pl/-line for a line chart.```plotData(hdbi, sql)```The hdbi is the ibm_db_dbi connection handle that is used by the pandas read_sql call to run the SQL, and sql is the statement whose answer set will be plotted.
###Code
def plotData(hdbi, sql):
try:
df = pandas.read_sql(sql,hdbi)
except Exception as err:
db2_error(False)
return
if df.empty:
errormsg("No results returned")
return
col_count = len(df.columns)
if flag(["-pb","-bar"]): # Plot 1 = bar chart
if (col_count in (1,2,3)):
if (col_count == 1):
df.index = df.index + 1
_ = df.plot(kind='bar');
_ = plt.plot();
elif (col_count == 2):
xlabel = df.columns.values[0]
ylabel = df.columns.values[1]
df.plot(kind='bar',x=xlabel,y=ylabel);
_ = plt.plot();
else:
values = df.columns.values[2]
columns = df.columns.values[0]
index = df.columns.values[1]
pivoted = pandas.pivot_table(df, values=values, columns=columns, index=index)
_ = pivoted.plot.bar();
else:
errormsg("Can't determine what columns to plot")
return
elif flag(["-pp","-pie"]): # Plot 2 = pie chart
if (col_count in (1,2)):
if (col_count == 1):
df.index = df.index + 1
yname = df.columns.values[0]
_ = df.plot(kind='pie',y=yname);
else:
xlabel = df.columns.values[0]
xname = df[xlabel].tolist()
yname = df.columns.values[1]
_ = df.plot(kind='pie',y=yname,labels=xname);
plt.show();
else:
errormsg("Can't determine what columns to plot")
return
elif flag(["-pl","-line"]): # Plot 3 = line chart
if (col_count in (1,2,3)):
if (col_count == 1):
df.index = df.index + 1
_ = df.plot(kind='line');
elif (col_count == 2):
xlabel = df.columns.values[0]
ylabel = df.columns.values[1]
_ = df.plot(kind='line',x=xlabel,y=ylabel) ;
else:
values = df.columns.values[2]
columns = df.columns.values[0]
index = df.columns.values[1]
pivoted = pandas.pivot_table(df, values=values, columns=columns, index=index)
_ = pivoted.plot();
plt.show();
else:
errormsg("Can't determine what columns to plot")
return
else:
return
###Output
_____no_output_____
###Markdown
Find a ProcedureThis routine will check to see if a procedure exists with the SCHEMA/NAME (or just NAME if no schema is supplied) and returns the number of answer sets returned. Possible values are 0, 1 (or greater) or None. If None is returned then we can't find the procedure anywhere.
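The intended usage is along these lines (the procedure name is hypothetical):
```
resultsets = findProc("DB2INST1.SHOWEMPLOYEES")
# None -> procedure not found, 0 -> no answer sets, 1 -> at least one answer set
```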
###Code
def findProc(procname):
global _hdbc, _hdbi, _connected, _runtime
# Split the procedure name into schema.procname if appropriate
upper_procname = procname.upper()
schema, proc = split_string(upper_procname,".") # Expect schema.procname
if (proc == None):
proc = schema
# Call ibm_db.procedures to see if the procedure does exist
schema = "%"
try:
stmt = ibm_db.procedures(_hdbc, None, schema, proc)
if (stmt == False): # Error executing the code
errormsg("Procedure " + procname + " not found in the system catalog.")
return None
result = ibm_db.fetch_tuple(stmt)
resultsets = result[5]
if (resultsets >= 1): resultsets = 1
return resultsets
except Exception as err:
errormsg("Procedure " + procname + " not found in the system catalog.")
return None
###Output
_____no_output_____
###Markdown
Parse Call ArgumentsThis code will parse a SQL call name(parm1,...) and return the name and the parameters in the call.
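A quick sketch of what the parser returns (the procedure name and arguments are hypothetical). Note from the code that quoted strings have their quotes stripped and host variables keep their leading colon:
```
name, parms = parseCallArgs("SHOWEMP('E11',:maxsal)")
# name  -> "SHOWEMP"
# parms -> ["E11", ":maxsal"]
```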
###Code
def parseCallArgs(macro):
quoteChar = ""
inQuote = False
inParm = False
ignore = False
name = ""
parms = []
parm = ''
sqlin = macro.replace("\n","")
sqlin = sqlin.lstrip()
for ch in sqlin:
if (inParm == False):
# We hit a blank in the name, so ignore everything after the procedure name until a ( is found
if (ch == " "):
ignore = True
elif (ch == "("): # Now we have parameters to send to the stored procedure
inParm = True
else:
if (ignore == False): name = name + ch # The name of the procedure (and no blanks)
else:
if (inQuote == True):
if (ch == quoteChar):
inQuote = False
else:
parm = parm + ch
elif (ch in ("\"","\'","[")): # Do we have a quote
if (ch == "["):
quoteChar = "]"
else:
quoteChar = ch
inQuote = True
elif (ch == ")"):
if (parm != ""):
parms.append(parm)
parm = ""
break
elif (ch == ","):
if (parm != ""):
parms.append(parm)
else:
parms.append("null")
parm = ""
else:
parm = parm + ch
if (inParm == True):
if (parm != ""):
parms.append(parm)
return(name,parms)
###Output
_____no_output_____
###Markdown
Get ColumnsGiven a statement handle, determine the column names and their data types.
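Usage is simply the following (the values shown are illustrative; the type strings are whatever ibm_db.field_type reports, e.g. 'string', 'int', 'decimal'):
```
columns, types = getColumns(stmt)
# columns -> ['EMPNO', 'LASTNAME', 'SALARY']
# types   -> ['string', 'string', 'decimal']
```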
###Code
def getColumns(stmt):
columns = []
types = []
colcount = 0
try:
colname = ibm_db.field_name(stmt,colcount)
coltype = ibm_db.field_type(stmt,colcount)
while (colname != False):
columns.append(colname)
types.append(coltype)
colcount += 1
colname = ibm_db.field_name(stmt,colcount)
coltype = ibm_db.field_type(stmt,colcount)
return columns,types
except Exception as err:
db2_error(False)
return None
###Output
_____no_output_____
###Markdown
Call a ProcedureThe CALL statement is used for execution of a stored procedure. The format of the CALL statement is:```CALL PROC_NAME(x,y,z,...)```Procedures allow for the return of answer sets (cursors) as well as changing the contents of the parameters being passed to the procedure. In this implementation, the CALL function is limited to returning one answer set (or nothing). If you want to use more complex stored procedures then you will have to use the native python libraries.
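As an illustration (the procedure name is hypothetical), a CALL that passes a Python variable and captures the first answer set would look like:
```
empno = '000010'
rows = %sql CALL SHOWEMPLOYEE(:empno)
```
The answer set comes back as a pandas dataframe by default, or as an array of values if the -r flag is used.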
###Code
def parseCall(hdbc, inSQL, local_ns):
global _hdbc, _hdbi, _connected, _runtime, _environment
# Check to see if we are connected first
if (_connected == False): # Check if you are connected
db2_doConnect()
if _connected == False: return None
remainder = inSQL.strip()
procName, procArgs = parseCallArgs(remainder[5:]) # Assume that CALL ... is the format
resultsets = findProc(procName)
if (resultsets == None): return None
argvalues = []
if (len(procArgs) > 0): # We have arguments to consider
for arg in procArgs:
varname = arg
if (len(varname) > 0):
if (varname[0] == ":"):
checkvar = varname[1:]
varvalue = getContents(checkvar,True,local_ns)
if (varvalue == None):
errormsg("Variable " + checkvar + " is not defined.")
return None
argvalues.append(varvalue)
else:
if (varname.upper() == "NULL"):
argvalues.append(None)
else:
argvalues.append(varname)
else:
argvalues.append(None)
try:
if (len(procArgs) > 0):
argtuple = tuple(argvalues)
result = ibm_db.callproc(_hdbc,procName,argtuple)
stmt = result[0]
else:
result = ibm_db.callproc(_hdbc,procName)
stmt = result
if (resultsets == 1 and stmt != None):
columns, types = getColumns(stmt)
if (columns == None): return None
rows = []
rowlist = ibm_db.fetch_tuple(stmt)
while ( rowlist ) :
row = []
colcount = 0
for col in rowlist:
try:
if (types[colcount] in ["int","bigint"]):
row.append(int(col))
elif (types[colcount] in ["decimal","real"]):
row.append(float(col))
elif (types[colcount] in ["date","time","timestamp"]):
row.append(str(col))
else:
row.append(col)
except:
row.append(col)
colcount += 1
rows.append(row)
rowlist = ibm_db.fetch_tuple(stmt)
if flag(["-r","-array"]):
rows.insert(0,columns)
if len(procArgs) > 0:
allresults = []
allresults.append(rows)
for x in result[1:]:
allresults.append(x)
return allresults # rows,returned_results
else:
return rows
else:
df = pandas.DataFrame.from_records(rows,columns=columns)
if flag("-grid") or _settings['display'] == 'GRID':
if (_environment['qgrid'] == False):
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
pdisplay(df)
else:
try:
pdisplay(qgrid.show_grid(df))
except:
errormsg("Grid cannot be used to display data with duplicate column names. Use option -a or %sql OPTION DISPLAY PANDAS instead.")
return
else:
if flag(["-a","-all"]) or _settings["maxrows"] == -1 : # All of the rows
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
pdisplay(df)
else:
return df
else:
if len(procArgs) > 0:
allresults = []
for x in result[1:]:
allresults.append(x)
return allresults # rows,returned_results
else:
return None
except Exception as err:
db2_error(False)
return None
###Output
_____no_output_____
###Markdown
Parse Prepare/ExecuteThe PREPARE statement is used for repeated execution of a SQL statement. The PREPARE statement has the format:```stmt = PREPARE SELECT EMPNO FROM EMPLOYEE WHERE WORKDEPT=? AND SALARY<?```The SQL statement that you want executed is placed after the PREPARE statement with the location of variables marked with ? (parameter) markers. The variable stmt contains the prepared statement that needs to be passed to the EXECUTE statement. The EXECUTE statement has the format:```EXECUTE :x USING z, y, s ```The first variable (:x) is the name of the variable that you assigned the results of the prepare statement to. The values after the USING clause are substituted into the prepare statement where the ? markers are found. If the values in the USING clause are variable names (z, y, s), a **link** is created to these variables as part of the execute statement. If you use the variable substitution form of variable name (:z, :y, :s), the **contents** of the variable are placed into the USING clause. Normally this would not make much of a difference except when you are dealing with binary strings or JSON strings where the quote characters may cause some problems when substituted into the statement.
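One convenience in the code below that the examples above don't show: a marker written as ?*n is expanded into n comma-separated parameter markers before the statement is prepared, which is handy for IN lists. A sketch (table and values are illustrative):
```
stmt = %sql PREPARE SELECT LASTNAME FROM EMPLOYEE WHERE WORKDEPT IN (?*3)
%sql EXECUTE :stmt USING 'A00','B01','C01'
```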
###Code
def parsePExec(hdbc, inSQL):
import ibm_db
global _stmt, _stmtID, _stmtSQL, sqlcode
cParms = inSQL.split()
parmCount = len(cParms)
if (parmCount == 0): return(None) # Nothing to do but this shouldn't happen
keyword = cParms[0].upper() # Upper case the keyword
if (keyword == "PREPARE"): # Prepare the following SQL
uSQL = inSQL.upper()
found = uSQL.find("PREPARE")
sql = inSQL[found+7:].strip()
try:
pattern = r"\?\*[0-9]+"
findparm = re.search(pattern,sql)
while findparm != None:
found = findparm.group(0)
count = int(found[2:])
markers = ('?,' * count)[:-1]
sql = sql.replace(found,markers)
findparm = re.search(pattern,sql)
stmt = ibm_db.prepare(hdbc,sql) # Check error code here
if (stmt == False):
db2_error(False)
return(False)
stmttext = str(stmt).strip()
stmtID = stmttext[33:48].strip()
if (stmtID in _stmtID) == False:
_stmt.append(stmt) # Prepare and return STMT to caller
_stmtID.append(stmtID)
else:
stmtIX = _stmtID.index(stmtID)
_stmt[stmtIX] = stmt
return(stmtID)
except Exception as err:
print(err)
db2_error(False)
return(False)
if (keyword == "EXECUTE"): # Execute the prepare statement
if (parmCount < 2): return(False) # No stmtID available
stmtID = cParms[1].strip()
if (stmtID in _stmtID) == False:
errormsg("Prepared statement not found or invalid.")
return(False)
stmtIX = _stmtID.index(stmtID)
stmt = _stmt[stmtIX]
try:
if (parmCount == 2): # Only the statement handle available
result = ibm_db.execute(stmt) # Run it
elif (parmCount == 3): # Not quite enough arguments
errormsg("Missing or invalid USING clause on EXECUTE statement.")
sqlcode = -99999
return(False)
else:
using = cParms[2].upper()
if (using != "USING"): # Bad syntax again
errormsg("Missing USING clause on EXECUTE statement.")
sqlcode = -99999
return(False)
uSQL = inSQL.upper()
found = uSQL.find("USING")
parmString = inSQL[found+5:].strip()
parmset = splitargs(parmString)
if (len(parmset) == 0):
errormsg("Missing parameters after the USING clause.")
sqlcode = -99999
return(False)
parms = []
parm_count = 0
CONSTANT = 0
VARIABLE = 1
const = [0]
const_cnt = 0
for v in parmset:
parm_count = parm_count + 1
if (v[1] == True or v[2] == True): # v[1] true if string, v[2] true if num
parm_type = CONSTANT
const_cnt = const_cnt + 1
if (v[2] == True):
if (isinstance(v[0],int) == True): # Integer value
sql_type = ibm_db.SQL_INTEGER
elif (isinstance(v[0],float) == True): # Float value
sql_type = ibm_db.SQL_DOUBLE
else:
sql_type = ibm_db.SQL_INTEGER
else:
sql_type = ibm_db.SQL_CHAR
const.append(v[0])
else:
parm_type = VARIABLE
# See if the variable has a type associated with it varname@type
varset = v[0].split("@")
parm_name = varset[0]
parm_datatype = "char"
# Does the variable exist?
if (parm_name not in globals()):
errormsg("SQL Execute parameter " + parm_name + " not found")
sqlcode = -99999
return(False)
if (len(varset) > 1): # Type provided
parm_datatype = varset[1]
if (parm_datatype == "dec" or parm_datatype == "decimal"):
sql_type = ibm_db.SQL_DOUBLE
elif (parm_datatype == "bin" or parm_datatype == "binary"):
sql_type = ibm_db.SQL_BINARY
elif (parm_datatype == "int" or parm_datatype == "integer"):
sql_type = ibm_db.SQL_INTEGER
else:
sql_type = ibm_db.SQL_CHAR
try:
if (parm_type == VARIABLE):
result = ibm_db.bind_param(stmt, parm_count, globals()[parm_name], ibm_db.SQL_PARAM_INPUT, sql_type)
else:
result = ibm_db.bind_param(stmt, parm_count, const[const_cnt], ibm_db.SQL_PARAM_INPUT, sql_type)
except:
result = False
if (result == False):
errormsg("SQL Bind on variable " + parm_name + " failed.")
sqlcode = -99999
return(False)
result = ibm_db.execute(stmt) # ,tuple(parms))
if (result == False):
errormsg("SQL Execute failed.")
return(False)
if (ibm_db.num_fields(stmt) == 0): return(True) # Command successfully completed
return(fetchResults(stmt))
except Exception as err:
db2_error(False)
return(False)
return(False)
return(False)
###Output
_____no_output_____
###Markdown
Fetch Result SetThis code will take the stmt handle and then produce a result set of rows as either an array (`-r`,`-array`) or as an array of json records (`-json`).
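For a two-column answer set the two shapes look roughly like this (values are illustrative):
```
# -r / -array : the first row holds the column names
[['EMPNO', 'LASTNAME'], ['000010', 'HAAS'], ['000020', 'THOMPSON']]

# -json : one dictionary per row, with column names folded to lowercase
[{'empno': '000010', 'lastname': 'HAAS'},
 {'empno': '000020', 'lastname': 'THOMPSON'}]
```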
###Code
def fetchResults(stmt):
global sqlcode
rows = []
columns, types = getColumns(stmt)
# By default we assume that the data will be an array
is_array = True
# Check what type of data we want returned - array or json
if (flag(["-r","-array"]) == False):
# See if we want it in JSON format, if not it remains as an array
if (flag("-json") == True):
is_array = False
# Set column names to lowercase for JSON records
if (is_array == False):
columns = [col.lower() for col in columns] # Convert to lowercase for ease of access
# First row of an array has the column names in it
if (is_array == True):
rows.append(columns)
result = ibm_db.fetch_tuple(stmt)
rowcount = 0
while (result):
rowcount += 1
if (is_array == True):
row = []
else:
row = {}
colcount = 0
for col in result:
try:
if (types[colcount] in ["int","bigint"]):
if (is_array == True):
row.append(int(col))
else:
row[columns[colcount]] = int(col)
elif (types[colcount] in ["decimal","real"]):
if (is_array == True):
row.append(float(col))
else:
row[columns[colcount]] = float(col)
elif (types[colcount] in ["date","time","timestamp"]):
if (is_array == True):
row.append(str(col))
else:
row[columns[colcount]] = str(col)
else:
if (is_array == True):
row.append(col)
else:
row[columns[colcount]] = col
except:
if (is_array == True):
row.append(col)
else:
row[columns[colcount]] = col
colcount += 1
rows.append(row)
result = ibm_db.fetch_tuple(stmt)
if (rowcount == 0):
sqlcode = 100
else:
sqlcode = 0
return rows
###Output
_____no_output_____
###Markdown
Parse CommitThere are three possible COMMIT verbs that can be used:- COMMIT [WORK] - Commit the work in progress - The WORK keyword is not checked for- ROLLBACK - Roll back the unit of work- AUTOCOMMIT ON/OFF - Turn autocommit on or offThe statement is passed to this routine and then checked.
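A typical pattern in a notebook would be something like this (the DELETE is illustrative):
```
%sql AUTOCOMMIT OFF
%sql DELETE FROM EMPLOYEE WHERE WORKDEPT = 'E21'
%sql ROLLBACK
%sql AUTOCOMMIT ON
```
After the ROLLBACK, the deleted rows are restored since nothing was committed.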
###Code
def parseCommit(sql):
global _hdbc, _hdbi, _connected, _runtime, _stmt, _stmtID, _stmtSQL
if (_connected == False): return # Nothing to do if we are not connected
cParms = sql.split()
if (len(cParms) == 0): return # Nothing to do but this shouldn't happen
keyword = cParms[0].upper() # Upper case the keyword
if (keyword == "COMMIT"): # Commit the work that was done
try:
result = ibm_db.commit (_hdbc) # Commit the connection
if (len(cParms) > 1):
keyword = cParms[1].upper()
if (keyword == "HOLD"):
return
del _stmt[:]
del _stmtID[:]
except Exception as err:
db2_error(False)
return
if (keyword == "ROLLBACK"): # Rollback the work that was done
try:
result = ibm_db.rollback(_hdbc) # Rollback the connection
del _stmt[:]
del _stmtID[:]
except Exception as err:
db2_error(False)
return
if (keyword == "AUTOCOMMIT"): # Is autocommit on or off
if (len(cParms) > 1):
op = cParms[1].upper() # Need ON or OFF value
else:
return
try:
if (op == "OFF"):
ibm_db.autocommit(_hdbc, False)
elif (op == "ON"):
ibm_db.autocommit (_hdbc, True)
return
except Exception as err:
db2_error(False)
return
return
###Output
_____no_output_____
###Markdown
Set FlagsThis code will take the input SQL block and update the global flag list. The global flag list is just a list of options that are set at the beginning of a code block. The absence of a flag means it is false. If it exists it is true.
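Roughly, the routine below behaves like this (illustrative input):
```
sql = setFlags("-a -e SELECT * FROM EMPLOYEE")
# _flags -> ['-a', '-e']
# sql    -> "SELECT * FROM EMPLOYEE"
```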
###Code
def setFlags(inSQL):
global _flags
_flags = [] # Delete all of the current flag settings
pos = 0
end = len(inSQL)-1
inFlag = False
ignore = False
outSQL = ""
flag = ""
while (pos <= end):
ch = inSQL[pos]
if (ignore == True):
outSQL = outSQL + ch
else:
if (inFlag == True):
if (ch != " "):
flag = flag + ch
else:
_flags.append(flag)
inFlag = False
else:
if (ch == "-"):
flag = "-"
inFlag = True
elif (ch == ' '):
outSQL = outSQL + ch
else:
outSQL = outSQL + ch
ignore = True
pos += 1
if (inFlag == True):
_flags.append(flag)
return outSQL
###Output
_____no_output_____
###Markdown
Check to see if flag ExistsThis function determines whether or not a flag exists in the global flag array. Absence of a value means it is false. The parameter can be a single value, or an array of values.
###Code
def flag(inflag):
global _flags
if isinstance(inflag,list):
for x in inflag:
if (x in _flags):
return True
return False
else:
if (inflag in _flags):
return True
else:
return False
###Output
_____no_output_____
###Markdown
Generate a list of SQL lines based on a delimiterNote that this function will make sure that quotes are properly maintained so that delimiters inside of quoted strings do not cause errors.
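For example (illustrative input), a delimiter inside a quoted string does not split the statement:
```
splitSQL("SELECT ';' FROM SYSIBM.SYSDUMMY1; VALUES 1", ";")
# -> ["SELECT ';' FROM SYSIBM.SYSDUMMY1", " VALUES 1"]
```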
###Code
def splitSQL(inputString, delimiter):
pos = 0
arg = ""
results = []
quoteCH = ""
inSQL = inputString.strip()
if (len(inSQL) == 0): return(results) # Not much to do here - no args found
while pos < len(inSQL):
ch = inSQL[pos]
pos += 1
if (ch in ('"',"'")): # Is this a quote character?
arg = arg + ch # Keep appending the characters to the current arg
if (ch == quoteCH): # Is this the quote character we are in?
quoteCH = ""
elif (quoteCH == ""): # Create the quote
quoteCH = ch
else:
None
elif (quoteCH != ""): # Still in a quote
arg = arg + ch
elif (ch == delimiter): # Is there a delimiter?
results.append(arg)
arg = ""
else:
arg = arg + ch
if (arg != ""):
results.append(arg)
return(results)
###Output
_____no_output_____
###Markdown
Main %sql Magic DefinitionThe main %sql Magic logic is found in this section of code. This code will register the Magic command and allow Jupyter notebooks to interact with Db2 by using this extension.
###Code
@magics_class
class DB2(Magics):
@needs_local_scope
@line_cell_magic
def sql(self, line, cell=None, local_ns=None):
# Before we even get started, check to see if you have connected yet. Without a connection we
# can't do anything. You may have a connection request in the code, so if that is true, we run those,
# otherwise we connect immediately
# If your statement is not a connect, and you haven't connected, we need to do it for you
global _settings, _environment
global _hdbc, _hdbi, _connected, _runtime, sqlstate, sqlerror, sqlcode, sqlelapsed
# If you use %sql (line) we just run the SQL. If you use %%SQL the entire cell is run.
flag_cell = False
flag_output = False
sqlstate = "0"
sqlerror = ""
sqlcode = 0
sqlelapsed = 0
start_time = time.time()
end_time = time.time()
# Macros get expanded before anything else is done
SQL1 = setFlags(line.strip())
SQL1 = checkMacro(SQL1) # Update the SQL if any macros are in there
SQL2 = cell
if flag("-sampledata"): # Check if you only want sample data loaded
if (_connected == False):
if (db2_doConnect() == False):
errormsg('A CONNECT statement must be issued before issuing SQL statements.')
return
db2_create_sample(flag(["-q","-quiet"]))
return
if SQL1 == "?" or flag(["-h","-help"]): # Are you asking for help
sqlhelp()
return
if len(SQL1) == 0 and SQL2 == None: return # Nothing to do here
# Check for help
if SQL1.upper() == "? CONNECT": # Are you asking for help on CONNECT
connected_help()
return
sqlType,remainder = sqlParser(SQL1,local_ns) # What type of command do you have?
if (sqlType == "CONNECT"): # A connect request
parseConnect(SQL1,local_ns)
return
elif (sqlType == "DEFINE"): # Create a macro from the body
result = setMacro(SQL2,remainder)
return
elif (sqlType == "OPTION"):
setOptions(SQL1)
return
elif (sqlType == 'COMMIT' or sqlType == 'ROLLBACK' or sqlType == 'AUTOCOMMIT'):
parseCommit(remainder)
return
elif (sqlType == "PREPARE"):
pstmt = parsePExec(_hdbc, remainder)
return(pstmt)
elif (sqlType == "EXECUTE"):
result = parsePExec(_hdbc, remainder)
return(result)
elif (sqlType == "CALL"):
result = parseCall(_hdbc, remainder, local_ns)
return(result)
else:
pass
sql = SQL1
if (sql == ""): sql = SQL2
if (sql == ""): return # Nothing to do here
if (_connected == False):
if (db2_doConnect() == False):
errormsg('A CONNECT statement must be issued before issuing SQL statements.')
return
if _settings["maxrows"] == -1: # Set the return result size
pandas.reset_option('display.max_rows')
else:
pandas.options.display.max_rows = _settings["maxrows"]
runSQL = re.sub('.*?--.*$',"",sql,flags=re.M)
remainder = runSQL.replace("\n"," ")
if flag(["-d","-delim"]):
sqlLines = splitSQL(remainder,"@")
else:
sqlLines = splitSQL(remainder,";")
flag_cell = True
# For each line figure out if you run it as a command (db2) or select (sql)
for sqlin in sqlLines: # Run each command
sqlin = checkMacro(sqlin) # Update based on any macros
sqlType, sql = sqlParser(sqlin,local_ns) # Parse the SQL
if (sql.strip() == ""): continue
if flag(["-e","-echo"]): debug(sql,False)
if flag("-t"):
cnt = sqlTimer(_hdbc, _settings["runtime"], sql) # Given the sql and parameters, clock the time
if (cnt >= 0): print("Total iterations in %s second(s): %s" % (_settings["runtime"],cnt))
return(cnt)
elif flag(["-pb","-bar","-pp","-pie","-pl","-line"]): # We are plotting some results
plotData(_hdbi, sql) # Plot the data and return
return
else:
try: # See if we have an answer set
stmt = ibm_db.prepare(_hdbc,sql)
if (ibm_db.num_fields(stmt) == 0): # No, so we just execute the code
result = ibm_db.execute(stmt) # Run it
if (result == False): # Error executing the code
db2_error(flag(["-q","-quiet"]))
continue
rowcount = ibm_db.num_rows(stmt)
if (rowcount == 0 and flag(["-q","-quiet"]) == False):
errormsg("No rows found.")
continue # Continue running
elif flag(["-r","-array","-j","-json"]): # raw, json, format json
row_count = 0
resultSet = []
try:
result = ibm_db.execute(stmt) # Run it
if (result == False): # Error executing the code
db2_error(flag(["-q","-quiet"]))
return
if flag("-j"): # JSON single output
row_count = 0
json_results = []
while( ibm_db.fetch_row(stmt) ):
row_count = row_count + 1
jsonVal = ibm_db.result(stmt,0)
jsonDict = json.loads(jsonVal)
json_results.append(jsonDict)
flag_output = True
if (row_count == 0): sqlcode = 100
return(json_results)
else:
return(fetchResults(stmt))
except Exception as err:
db2_error(flag(["-q","-quiet"]))
return
else:
try:
df = pandas.read_sql(sql,_hdbi)
except Exception as err:
db2_error(False)
return
if (len(df) == 0):
sqlcode = 100
if (flag(["-q","-quiet"]) == False):
errormsg("No rows found")
continue
flag_output = True
if flag("-grid") or _settings['display'] == 'GRID': # Check to see if we can display the results
if (_environment['qgrid'] == False):
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
print(df.to_string())
else:
try:
pdisplay(qgrid.show_grid(df))
except:
errormsg("Grid cannot be used to display data with duplicate column names. Use option -a or %sql OPTION DISPLAY PANDAS instead.")
return
else:
if flag(["-a","-all"]) or _settings["maxrows"] == -1 : # All of the rows
pandas.options.display.max_rows = None
pandas.options.display.max_columns = None
return df # print(df.to_string())
else:
pandas.options.display.max_rows = _settings["maxrows"]
pandas.options.display.max_columns = None
return df # pdisplay(df) # print(df.to_string())
except:
db2_error(flag(["-q","-quiet"]))
continue # return
end_time = time.time()
sqlelapsed = end_time - start_time
if (flag_output == False and flag(["-q","-quiet"]) == False): print("Command completed.")
# Register the Magic extension in Jupyter
ip = get_ipython()
ip.register_magics(DB2)
load_settings()
success("Db2 Extensions Loaded.")
###Output
_____no_output_____
###Markdown
Pre-defined MacrosThese macros are used to simulate the LIST TABLES and DESCRIBE commands that are available from within the Db2 command line.
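Once the macros below have been defined, they can be used like native commands (the schema name is illustrative):
```
%sql LIST TABLES FOR SCHEMA DB2INST1
%sql DESCRIBE TABLE EMPLOYEE
```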
###Code
%%sql define LIST
#
# The LIST macro is used to list all of the tables in the current schema or for all schemas
#
var syntax Syntax: LIST TABLES [FOR ALL | FOR SCHEMA name]
#
# Only LIST TABLES is supported by this macro
#
if {^1} <> 'TABLES'
exit {syntax}
endif
#
# This SQL is a temporary table that contains the description of the different table types
#
WITH TYPES(TYPE,DESCRIPTION) AS (
VALUES
('A','Alias'),
('G','Created temporary table'),
('H','Hierarchy table'),
('L','Detached table'),
('N','Nickname'),
('S','Materialized query table'),
('T','Table'),
('U','Typed table'),
('V','View'),
('W','Typed view')
)
SELECT TABNAME, TABSCHEMA, T.DESCRIPTION FROM SYSCAT.TABLES S, TYPES T
WHERE T.TYPE = S.TYPE
#
# Case 1: No arguments - LIST TABLES
#
if {argc} == 1
AND OWNER = CURRENT USER
ORDER BY TABNAME, TABSCHEMA
return
endif
#
# Case 2: Need 3 arguments - LIST TABLES FOR ALL
#
if {argc} == 3
if {^2}&{^3} == 'FOR&ALL'
ORDER BY TABNAME, TABSCHEMA
return
endif
exit {syntax}
endif
#
# Case 3: Need FOR SCHEMA something here
#
if {argc} == 4
if {^2}&{^3} == 'FOR&SCHEMA'
AND TABSCHEMA = '{^4}'
ORDER BY TABNAME, TABSCHEMA
return
else
exit {syntax}
endif
endif
#
# Nothing matched - Error
#
exit {syntax}
%%sql define describe
#
# The DESCRIBE command can either use the syntax DESCRIBE TABLE <name> or DESCRIBE TABLE SELECT ...
#
var syntax Syntax: DESCRIBE [TABLE name | SELECT statement]
#
# Check to see what count of variables is... Must be at least 2 items DESCRIBE TABLE x or SELECT x
#
if {argc} < 2
exit {syntax}
endif
CALL ADMIN_CMD('{*0}');
###Output
_____no_output_____
###Markdown
Set the table formatting to left align a table in a cell. By default, tables are centered in a cell. Remove this cell if you don't want to change Jupyter notebook formatting for tables. In addition, we skip this code if you are running in a shell environment rather than a Jupyter notebook
###Code
#%%html
#<style>
# table {margin-left: 0 !important; text-align: left;}
#</style>
###Output
_____no_output_____
###Markdown
DB2 Jupyter Notebook ExtensionsVersion: 2019-11-19 This code is imported as a Jupyter notebook extension in any notebooks you create with DB2 code in it. Place the following line of code in any notebook that you want to use these commands with:%run db2.ipynbThis code defines a Jupyter/Python magic command called `%sql` which allows you to execute DB2 specific calls to the database. There are other packages available for manipulating databases, but this one has been specifically designed for demonstrating a number of the SQL features available in DB2.There are two ways of executing the `%sql` command. A single line SQL statement would use the line format of the magic command:%sql SELECT * FROM EMPLOYEEIf you have a large block of sql then you would place the %%sql command at the beginning of the block and then place the SQL statements into the remainder of the block. Using this form of the `%%sql` statement means that the notebook cell can only contain SQL and no other statements.%%sqlSELECT * FROM EMPLOYEEORDER BY LASTNAMEYou can have multiple lines in the SQL block (`%%sql`). The default SQL delimiter is the semi-colon (`;`).If you have scripts (triggers, procedures, functions) that use the semi-colon as part of the script, you will need to use the `-d` option to change the delimiter to an at "`@`" sign. %%sql -dSELECT * FROM EMPLOYEE@CREATE PROCEDURE ...@The `%sql` command allows most DB2 commands to execute and has a special version of the CONNECT statement. A CONNECT by itself will attempt to reconnect to the database using previously used settings. If it cannot connect, it will prompt the user for additional information. The CONNECT command has the following format:%sql CONNECT TO <database> USER <userid> USING <password | ?> HOST <ip address> PORT <port number>If you use a "`?`" for the password field, the system will prompt you for a password. This avoids typing the password as clear text on the screen. If a connection is not successful, the system will print the error message associated with the connect request.If the connection is successful, the parameters are saved on your system and will be used the next time you run a SQL statement, or when you issue the %sql CONNECT command with no parameters. In addition to the -d option, there are a number of different options that you can specify at the beginning of the SQL: - `-d, -delim` - Change SQL delimiter to "`@`" from "`;`" - `-q, -quiet` - Quiet results - no messages returned from the function - `-r, -array` - Return the result set as an array of values instead of a dataframe - `-t, -time` - Time the following SQL statement and return the number of times it executes in 1 second - `-j` - Format the first character column of the result set as a JSON record - `-json` - Return result set as an array of json records - `-a, -all` - Return all rows in answer set and do not limit display - `-grid` - Display the results in a scrollable grid - `-pb, -bar` - Plot the results as a bar chart - `-pl, -line` - Plot the results as a line chart - `-pp, -pie` - Plot the results as a pie chart - `-e, -echo` - Any macro expansions are displayed in an output box - `-sampledata` - Create and load the EMPLOYEE and DEPARTMENT tablesYou can pass python variables to the `%sql` command by using the `{}` braces with the name of the variable in between. Note that you will need to place proper punctuation around the variable in the event the SQL command requires it.
For instance, the following example will find employee '000010' in the EMPLOYEE table.empno = '000010'%sql SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO='{empno}'The other option is to use parameter markers. What you would need to do is use the name of the variable with a colon in front of it and the program will prepare the statement and then pass the variable to Db2 when the statement is executed. This allows you to create complex strings that might contain quote characters and other special characters and not have to worry about enclosing the string with the correct quotes. Note that you do not place the quotes around the variable even though it is a string.empno = '000020'%sql SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO=:empno Development SQLThe previous set of `%sql` and `%%sql` commands deals with SQL statements and commands that are run in an interactive manner. There is a class of SQL commands that are more suited to a development environment where code is iterated or requires changing input. The commands that are associated with this form of SQL are:- AUTOCOMMIT- COMMIT/ROLLBACK- PREPARE - EXECUTEAutocommit is the default manner in which SQL statements are executed. At the end of the successful completion of a statement, the results are committed to the database. There is no concept of a transaction where multiple DML/DDL statements are considered one transaction. The `AUTOCOMMIT` command allows you to turn autocommit `OFF` or `ON`. This means that the SQL commands run after the `AUTOCOMMIT OFF` command are not committed to the database until a `COMMIT` or `ROLLBACK` command is issued.`COMMIT` (`WORK`) will finalize all of the transactions (`COMMIT`) to the database and `ROLLBACK` will undo all of the changes. If you issue a `SELECT` statement during the execution of your block, the results will reflect all of your changes. If you `ROLLBACK` the transaction, the changes will be lost.`PREPARE` is typically used in a situation where you want to repeatedly execute a SQL statement with different variables without incurring the SQL compilation overhead. For instance:```x = %sql PREPARE SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO=?for y in ['000010','000020','000030']: %sql execute :x using :y````EXECUTE` is used to execute a previously compiled statement. To retrieve the error codes that might be associated with any SQL call, the following variables are updated after every call:* SQLCODE* SQLSTATE* SQLERROR - Full error message retrieved from Db2 Install Db2 Python DriverIf the ibm_db driver is not installed on your system, the subsequent Db2 commands will fail. In order to install the Db2 driver, issue the following command from a Jupyter notebook cell:```!pip install --user ibm_db``` Db2 Jupyter ExtensionsThis section of code has the import statements and global variables defined for the remainder of the functions.
###Code
#
# Set up Jupyter MAGIC commands "sql".
# %sql will return results from a DB2 select statement or execute a DB2 command
#
# IBM 2019: George Baklarz
# Version 2019-10-03
#
from __future__ import print_function
from IPython.display import HTML as pHTML, Image as pImage, display as pdisplay, Javascript as Javascript
from IPython.core.magic import (Magics, magics_class, line_magic,
cell_magic, line_cell_magic, needs_local_scope)
import ibm_db
import pandas
import ibm_db_dbi
import json
import matplotlib
import matplotlib.pyplot as plt
import getpass
import os
import pickle
import time
import sys
import re
import warnings
warnings.filterwarnings("ignore")
# Python Hack for Input between 2 and 3
try:
input = raw_input
except NameError:
pass
_settings = {
"maxrows" : 10,
"maxgrid" : 5,
"runtime" : 1,
"display" : "PANDAS",
"database" : "",
"hostname" : "localhost",
"port" : "50000",
"protocol" : "TCPIP",
"uid" : "DB2INST1",
"pwd" : "password",
"ssl" : ""
}
_environment = {
"jupyter" : True,
"qgrid" : True
}
_display = {
'fullWidthRows': True,
'syncColumnCellResize': True,
'forceFitColumns': False,
'defaultColumnWidth': 150,
'rowHeight': 28,
'enableColumnReorder': False,
'enableTextSelectionOnCells': True,
'editable': False,
'autoEdit': False,
'explicitInitialization': True,
'maxVisibleRows': 5,
'minVisibleRows': 5,
'sortable': True,
'filterable': False,
'highlightSelectedCell': False,
'highlightSelectedRow': True
}
# Connection settings for statements
_connected = False
_hdbc = None
_hdbi = None
_stmt = []
_stmtID = []
_stmtSQL = []
_vars = {}
_macros = {}
_flags = []
_debug = False
# Db2 Error Messages and Codes
sqlcode = 0
sqlstate = "0"
sqlerror = ""
sqlelapsed = 0
# Check to see if QGrid is installed
try:
import qgrid
qgrid.set_defaults(grid_options=_display)
except:
_environment['qgrid'] = False
# Check if we are running in iPython or Jupyter
try:
if (get_ipython().config == {}):
_environment['jupyter'] = False
_environment['qgrid'] = False
else:
_environment['jupyter'] = True
except:
_environment['jupyter'] = False
_environment['qgrid'] = False
###Output
_____no_output_____
###Markdown
OptionsThere are four options that can be set with the **`%sql`** command. These options are shown below with the default value shown in parentheses.- **`MAXROWS n (10)`** - The maximum number of rows that will be displayed before summary information is shown. If the answer set is less than this number of rows, it will be completely shown on the screen. If the answer set is larger than this amount, only the first 5 rows and last 5 rows of the answer set will be displayed. If you want to display a very large answer set, you may want to consider using the grid option `-grid` to display the results in a scrollable table. If you really want to show all results then setting MAXROWS to -1 will return all output.- **`MAXGRID n (5)`** - The maximum size of a grid display. When displaying a result set in a grid (`-grid`), the default size of the display window is 5 rows. You can set this to a larger size so that more rows are shown on the screen. Note that the minimum size always remains at 5 which means that if the system is unable to display your maximum row size it will reduce the table display until it fits.- **`DISPLAY PANDAS | GRID (PANDAS)`** - Display the results as a PANDAS dataframe (default) or as a scrollable GRID- **`RUNTIME n (1)`** - When using the timer option on a SQL statement, the statement will execute for **`n`** number of seconds. The result that is returned is the number of times the SQL statement executed rather than the execution time of the statement. The default value for runtime is one second, so if the SQL is very complex you will need to increase the run time.- **`LIST`** - Display the current settingsTo set an option use the following syntax:```%sql option option_name value option_name value ....```The following example sets all options:```%sql option maxrows 100 runtime 2 display grid maxgrid 10```The values will **not** be saved between Jupyter notebook sessions. If you need to retrieve the current options values, use the LIST command as the only argument:```%sql option list```
###Code
def setOptions(inSQL):
global _settings, _display
cParms = inSQL.split()
cnt = 0
while cnt < len(cParms):
if cParms[cnt].upper() == 'MAXROWS':
if cnt+1 < len(cParms):
try:
_settings["maxrows"] = int(cParms[cnt+1])
except Exception as err:
errormsg("Invalid MAXROWS value provided.")
pass
cnt = cnt + 1
else:
errormsg("No maximum rows specified for the MAXROWS option.")
return
elif cParms[cnt].upper() == 'MAXGRID':
if cnt+1 < len(cParms):
try:
maxgrid = int(cParms[cnt+1])
if (maxgrid <= 5): # Minimum window size is 5
maxgrid = 5
_display["maxVisibleRows"] = int(cParms[cnt+1])
try:
import qgrid
qgrid.set_defaults(grid_options=_display)
except:
_environment['qgrid'] = False
except Exception as err:
errormsg("Invalid MAXGRID value provided.")
pass
cnt = cnt + 1
else:
errormsg("No maximum rows specified for the MAXROWS option.")
return
elif cParms[cnt].upper() == 'RUNTIME':
if cnt+1 < len(cParms):
try:
_settings["runtime"] = int(cParms[cnt+1])
except Exception as err:
errormsg("Invalid RUNTIME value provided.")
pass
cnt = cnt + 1
else:
errormsg("No value provided for the RUNTIME option.")
return
elif cParms[cnt].upper() == 'DISPLAY':
if cnt+1 < len(cParms):
if (cParms[cnt+1].upper() == 'GRID'):
_settings["display"] = 'GRID'
elif (cParms[cnt+1].upper() == 'PANDAS'):
_settings["display"] = 'PANDAS'
else:
errormsg("Invalid DISPLAY value provided.")
cnt = cnt + 1
else:
errormsg("No value provided for the DISPLAY option.")
return
elif (cParms[cnt].upper() == 'LIST'):
print("(MAXROWS) Maximum number of rows displayed: " + str(_settings["maxrows"]))
print("(MAXGRID) Maximum grid display size: " + str(_settings["maxgrid"]))
print("(RUNTIME) How many seconds to a run a statement for performance testing: " + str(_settings["runtime"]))
print("(DISPLAY) Use PANDAS or GRID display format for output: " + _settings["display"])
return
else:
cnt = cnt + 1
save_settings()
###Output
_____no_output_____
###Markdown
SQL HelpThe calling format of this routine is:```sqlhelp()```This code displays help related to the %sql magic command. This help is displayed when you issue a %sql or %%sql command by itself, or use the %sql -h flag.
###Code
def sqlhelp():
global _environment
if (_environment["jupyter"] == True):
sd = '<td style="text-align:left;">'
ed1 = '</td>'
ed2 = '</td>'
sh = '<th style="text-align:left;">'
eh1 = '</th>'
eh2 = '</th>'
sr = '<tr>'
er = '</tr>'
helpSQL = """
<h3>SQL Options</h3>
<p>The following options are available as part of a SQL statement. The options are always preceded with a
minus sign (i.e. -q).
<table>
{sr}
{sh}Option{eh1}{sh}Description{eh2}
{er}
{sr}
{sd}a, all{ed1}{sd}Return all rows in answer set and do not limit display{ed2}
{er}
{sr}
{sd}d{ed1}{sd}Change SQL delimiter to "@" from ";"{ed2}
{er}
{sr}
{sd}e, echo{ed1}{sd}Echo the SQL command that was generated after macro and variable substitution.{ed2}
{er}
{sr}
{sd}h, help{ed1}{sd}Display %sql help information.{ed2}
{er}
{sr}
{sd}j{ed1}{sd}Create a pretty JSON representation. Only the first column is formatted{ed2}
{er}
{sr}
{sd}json{ed1}{sd}Retrieve the result set as a JSON record{ed2}
{er}
{sr}
{sd}pb, bar{ed1}{sd}Plot the results as a bar chart{ed2}
{er}
{sr}
{sd}pl, line{ed1}{sd}Plot the results as a line chart{ed2}
{er}
{sr}
{sd}pp, pie{ed1}{sd}Plot Pie: Plot the results as a pie chart{ed2}
{er}
{sr}
{sd}q, quiet{ed1}{sd}Quiet results - no answer set or messages returned from the function{ed2}
{er}
{sr}
{sd}r, array{ed1}{sd}Return the result set as an array of values{ed2}
{er}
{sr}
{sd}sampledata{ed1}{sd}Create and load the EMPLOYEE and DEPARTMENT tables{ed2}
{er}
{sr}
{sd}t,time{ed1}{sd}Time the following SQL statement and return the number of times it executes in 1 second{ed2}
{er}
{sr}
{sd}grid{ed1}{sd}Display the results in a scrollable grid{ed2}
{er}
</table>
"""
else:
helpSQL = """
SQL Options
The following options are available as part of a SQL statement. Options are always
preceded with a minus sign (i.e. -q).
Option Description
a, all Return all rows in answer set and do not limit display
d Change SQL delimiter to "@" from ";"
e, echo Echo the SQL command that was generated after substitution
h, help Display %sql help information
j Create a pretty JSON representation. Only the first column is formatted
json Retrieve the result set as a JSON record
pb, bar Plot the results as a bar chart
pl, line Plot the results as a line chart
pp, pie Plot Pie: Plot the results as a pie chart
q, quiet Quiet results - no answer set or messages returned from the function
r, array Return the result set as an array of values
sampledata Create and load the EMPLOYEE and DEPARTMENT tables
t,time Time the SQL statement and return the execution count per second
grid Display the results in a scrollable grid
"""
helpSQL = helpSQL.format(**locals())
if (_environment["jupyter"] == True):
pdisplay(pHTML(helpSQL))
else:
print(helpSQL)
###Output
_____no_output_____
###Markdown
Connection HelpThe calling format of this routine is:```connected_help()```This code displays help related to the CONNECT command. This help is displayed when you issue a %sql CONNECT command with no arguments or you are running a SQL statement and there isn't any connection to a database yet.
###Code
def connected_help():
sd = '<td style="text-align:left;">'
ed = '</td>'
sh = '<th style="text-align:left;">'
eh = '</th>'
sr = '<tr>'
er = '</tr>'
if (_environment['jupyter'] == True):
helpConnect = """
<h3>Connecting to Db2</h3>
<p>The CONNECT command has the following format:
<p>
<pre>
%sql CONNECT TO <database> USER <userid> USING <password|?> HOST <ip address> PORT <port number> <SSL>
%sql CONNECT CREDENTIALS <varname>
%sql CONNECT CLOSE
%sql CONNECT RESET
%sql CONNECT PROMPT - use this to be prompted for values
</pre>
<p>
If you use a "?" for the password field, the system will prompt you for a password. This avoids typing the
password as clear text on the screen. If a connection is not successful, the system will print the error
message associated with the connect request.
<p>
The <b>CREDENTIALS</b> option allows you to use credentials that are supplied by Db2 on Cloud instances.
The credentials can be supplied as a variable and if successful, the variable will be saved to disk
for future use. If you create another notebook and use the identical syntax, if the variable
is not defined, the contents on disk will be used as the credentials. You should assign the
credentials to a variable that represents the database (or schema) that you are communicating with.
Using familiar names makes it easier to remember the credentials when connecting.
<p>
<b>CONNECT CLOSE</b> will close the current connection, but will not reset the database parameters. This means that
if you issue the CONNECT command again, the system should be able to reconnect you to the database.
<p>
<b>CONNECT RESET</b> will close the current connection and remove any information on the connection. You will need
to issue a new CONNECT statement with all of the connection information.
<p>
If the connection is successful, the parameters are saved on your system and will be used the next time you
run an SQL statement, or when you issue the %sql CONNECT command with no parameters.
<p>If you issue CONNECT RESET, all of the current values will be deleted and you will need to
issue a new CONNECT statement.
<p>A CONNECT command without any parameters will attempt to re-connect to the previous database you
were using. If the connection could not be established, the program will prompt you for
the values. To cancel the connection attempt, enter a blank value for any of the values. The connection
panel will request the following values in order to connect to Db2:
<table>
{sr}
{sh}Setting{eh}
{sh}Description{eh}
{er}
{sr}
{sd}Database{ed}{sd}Database name you want to connect to.{ed}
{er}
{sr}
{sd}Hostname{ed}
{sd}Use localhost if Db2 is running on your own machine, but this can be an IP address or host name.{ed}
{er}
{sr}
{sd}PORT{ed}
{sd}The port to use for connecting to Db2. This is usually 50000.{ed}
{er}
{sr}
{sd}SSL{ed}
{sd}If you are connecting to a secure port (50001) with SSL then you must include this keyword in the connect string.{ed}
{er}
{sr}
{sd}Userid{ed}
{sd}The userid to use when connecting (usually DB2INST1){ed}
{er}
{sr}
{sd}Password{ed}
{sd}No password is provided so you have to enter a value{ed}
{er}
</table>
"""
else:
helpConnect = """\
Connecting to Db2
The CONNECT command has the following format:
%sql CONNECT TO database USER userid USING password | ?
HOST ip address PORT port number SSL
%sql CONNECT CREDENTIALS varname
%sql CONNECT CLOSE
%sql CONNECT RESET
If you use a "?" for the password field, the system will prompt you for a password.
This avoids typing the password as clear text on the screen. If a connection is
not successful, the system will print the error message associated with the connect
request.
The CREDENTIALS option allows you to use credentials that are supplied by Db2 on
Cloud instances. The credentials can be supplied as a variable and if successful,
the variable will be saved to disk for future use. If you create another notebook
and use the identical syntax, if the variable is not defined, the contents on disk
will be used as the credentials. You should assign the credentials to a variable
that represents the database (or schema) that you are communicating with. Using
familiar names makes it easier to remember the credentials when connecting.
CONNECT CLOSE will close the current connection, but will not reset the database
parameters. This means that if you issue the CONNECT command again, the system
should be able to reconnect you to the database.
CONNECT RESET will close the current connection and remove any information on the
connection. You will need to issue a new CONNECT statement with all of the connection
information.
If the connection is successful, the parameters are saved on your system and will be
used the next time you run an SQL statement, or when you issue the %sql CONNECT
command with no parameters. If you issue CONNECT RESET, all of the current values
will be deleted and you will need to issue a new CONNECT statement.
A CONNECT command without any parameters will attempt to re-connect to the previous
database you were using. If the connection could not be established, the program will
prompt you for the values. To cancel the connection attempt, enter a blank value for
any of the values. The connection panel will request the following values in order
to connect to Db2:
Setting Description
Database Database name you want to connect to
Hostname Use localhost if Db2 is running on your own machine, but this can
be an IP address or host name.
PORT The port to use for connecting to Db2. This is usually 50000.
Userid The userid to use when connecting (usually DB2INST1)
Password No password is provided so you have to enter a value
SSL Include this keyword to indicate you are connecting via SSL (usually port 50001)
"""
helpConnect = helpConnect.format(**locals())
if (_environment['jupyter'] == True):
pdisplay(pHTML(helpConnect))
else:
print(helpConnect)
###Output
_____no_output_____
###Markdown
Prompt for Connection InformationIf you are running an SQL statement and have not yet connected to a database, the %sql command will prompt you for connection information. In order to connect to a database, you must supply:- Database name - Host name (IP address or name)- Port number- Userid- Password- Secure socketThe routine is called without any parameters:```connected_prompt()```
###Code
# Prompt for Connection information
def connected_prompt():
global _settings
_database = ''
_hostname = ''
_port = ''
_uid = ''
_pwd = ''
_ssl = ''
print("Enter the database connection details (Any empty value will cancel the connection)")
_database = input("Enter the database name: ");
if (_database.strip() == ""): return False
_hostname = input("Enter the HOST IP address or symbolic name: ");
if (_hostname.strip() == ""): return False
_port = input("Enter the PORT number: ");
if (_port.strip() == ""): return False
_ssl = input("Is this a secure (SSL) port (y or n)");
if (_ssl.strip() == ""): return False
if (_ssl == "n"):
_ssl = ""
else:
_ssl = "Security=SSL;"
_uid = input("Enter Userid on the DB2 system: ").upper();
if (_uid.strip() == ""): return False
_pwd = getpass.getpass("Password [password]: ");
if (_pwd.strip() == ""): return False
_settings["database"] = _database.strip()
_settings["hostname"] = _hostname.strip()
_settings["port"] = _port.strip()
_settings["uid"] = _uid.strip()
_settings["pwd"] = _pwd.strip()
_settings["ssl"] = _ssl.strip()
_settings["maxrows"] = 10
_settings["maxgrid"] = 5
_settings["runtime"] = 1
return True
# Split port and IP addresses
def split_string(in_port,splitter=":"):
# Split input into an IP address and Port number
global _settings
checkports = in_port.split(splitter)
ip = checkports[0]
if (len(checkports) > 1):
port = checkports[1]
else:
port = None
return ip, port
###Output
_____no_output_____
###Markdown
Connect Syntax ParserThe parseConnect routine is used to parse the CONNECT command that the user issued within the %sql command. The format of the command is:```parseConnect(inSQL)```The inSQL string contains the CONNECT keyword with some additional parameters. The format of the CONNECT command is one of:
```
CONNECT RESET
CONNECT CLOSE
CONNECT CREDENTIALS <varname>
CONNECT TO database USER userid USING password HOST hostname PORT portnumber
```
If you have credentials available from Db2 on Cloud, place the contents of the credentials into a variable and then use the `CONNECT CREDENTIALS <varname>` syntax to connect to the database.In addition, supplying a question mark (?) for password will result in the program prompting you for the password rather than having it as clear text in your scripts.When all of the information is checked in the command, the db2_doConnect function is called to actually do the connection to the database.
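Based on the keys that the code below reads from the credentials object (db, hostname, port, username, password), a credentials variable would look roughly like this (all values are placeholders):
```
myDb = {
    "db"       : "SAMPLE",
    "hostname" : "localhost",
    "port"     : "50000",
    "username" : "db2inst1",
    "password" : "password"
}
```
It would then be used with `%sql CONNECT CREDENTIALS myDb`.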
###Code
# Parse the CONNECT statement and execute if possible
def parseConnect(inSQL,local_ns):
global _settings, _connected
_connected = False
cParms = inSQL.split()
cnt = 0
_settings["ssl"] = ""
while cnt < len(cParms):
if cParms[cnt].upper() == 'TO':
if cnt+1 < len(cParms):
_settings["database"] = cParms[cnt+1].upper()
cnt = cnt + 1
else:
errormsg("No database specified in the CONNECT statement")
return
elif cParms[cnt].upper() == "SSL":
_settings["ssl"] = "Security=SSL;"
cnt = cnt + 1
elif cParms[cnt].upper() == 'CREDENTIALS':
if cnt+1 < len(cParms):
credentials = cParms[cnt+1]
tempid = eval(credentials,local_ns)
if (isinstance(tempid,dict) == False):
errormsg("The CREDENTIALS variable (" + credentials + ") does not contain a valid Python dictionary (JSON object)")
return
if (tempid == None):
fname = credentials + ".pickle"
try:
with open(fname,'rb') as f:
_id = pickle.load(f)
except:
errormsg("Unable to find credential variable or file.")
return
else:
_id = tempid
try:
_settings["database"] = _id["db"]
_settings["hostname"] = _id["hostname"]
_settings["port"] = _id["port"]
_settings["uid"] = _id["username"]
_settings["pwd"] = _id["password"]
try:
fname = credentials + ".pickle"
with open(fname,'wb') as f:
pickle.dump(_id,f)
except:
errormsg("Failed trying to write Db2 Credentials.")
return
except:
errormsg("Credentials file is missing information. db/hostname/port/username/password required.")
return
else:
errormsg("No Credentials name supplied")
return
cnt = cnt + 1
elif cParms[cnt].upper() == 'USER':
if cnt+1 < len(cParms):
_settings["uid"] = cParms[cnt+1].upper()
cnt = cnt + 1
else:
errormsg("No userid specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'USING':
if cnt+1 < len(cParms):
_settings["pwd"] = cParms[cnt+1]
if (_settings["pwd"] == '?'):
_settings["pwd"] = getpass.getpass("Password [password]: ") or "password"
cnt = cnt + 1
else:
errormsg("No password specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'HOST':
if cnt+1 < len(cParms):
hostport = cParms[cnt+1].upper()
ip, port = split_string(hostport)
if (port == None): _settings["port"] = "50000"
_settings["hostname"] = ip
cnt = cnt + 1
else:
errormsg("No hostname specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'PORT':
if cnt+1 < len(cParms):
_settings["port"] = cParms[cnt+1].upper()
cnt = cnt + 1
else:
errormsg("No port specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'PROMPT':
if (connected_prompt() == False):
print("Connection canceled.")
return
else:
cnt = cnt + 1
elif cParms[cnt].upper() in ('CLOSE','RESET') :
try:
result = ibm_db.close(_hdbc)
_hdbi.close()
except:
pass
success("Connection closed.")
if cParms[cnt].upper() == 'RESET':
_settings["database"] = ''
return
else:
cnt = cnt + 1
_ = db2_doConnect()
###Output
_____no_output_____
###Markdown
Connect to Db2The db2_doConnect routine is called when a connection needs to be established to a Db2 database. The command does not require any parameters since it relies on the settings variable which contains all of the information it needs to connect to a Db2 database.```db2_doConnect()```There are 4 additional variables that are used throughout the routines to stay connected with the Db2 database. These variables are:- hdbc - The connection handle to the database- hstmt - A statement handle used for executing SQL statements- connected - A flag that tells the program whether or not we are currently connected to a database- runtime - Used to tell %sql the length of time (default 1 second) to run a statement when timing itThe only database driver that is used in this program is the IBM DB2 ODBC DRIVER. This driver needs to be loaded on the system that is connecting to Db2. The Jupyter notebook that is built by this system installs the driver for you so you shouldn't have to do anything other than build the container.If the connection is successful, the connected flag is set to True. Any subsequent %sql call will check to see if you are connected and initiate another prompted connection if you do not have a connection to a database.
###Code
def db2_doConnect():
global _hdbc, _hdbi, _connected, _runtime
global _settings
if _connected == False:
if len(_settings["database"]) == 0:
return False
dsn = (
"DRIVER={{IBM DB2 ODBC DRIVER}};"
"DATABASE={0};"
"HOSTNAME={1};"
"PORT={2};"
"PROTOCOL=TCPIP;"
"UID={3};"
"PWD={4};{5}").format(_settings["database"],
_settings["hostname"],
_settings["port"],
_settings["uid"],
_settings["pwd"],
_settings["ssl"])
# Get a database handle (hdbc) and a statement handle (hstmt) for subsequent access to DB2
try:
_hdbc = ibm_db.connect(dsn, "", "")
except Exception as err:
db2_error(False,True) # errormsg(str(err))
_connected = False
_settings["database"] = ''
return False
try:
_hdbi = ibm_db_dbi.Connection(_hdbc)
except Exception as err:
db2_error(False,True) # errormsg(str(err))
_connected = False
_settings["database"] = ''
return False
_connected = True
# Save the values for future use
save_settings()
success("Connection successful.")
return True
###Output
_____no_output_____
###Markdown
Load/Save SettingsThere are two routines that load and save settings between Jupyter notebooks. These routines are called without any parameters.```load_settings() save_settings()```There is a global structure called settings which contains the following fields:```_settings = { "maxrows" : 10, "maxgrid" : 5, "runtime" : 1, "display" : "PANDAS", "database" : "", "hostname" : "localhost", "port" : "50000", "protocol" : "TCPIP", "uid" : "DB2INST1", "pwd" : "password", "ssl" : ""}```The information in the settings structure is used for re-connecting to a database when you start up a Jupyter notebook. When the session is established for the first time, the load_settings() function is called to get the contents of the pickle file (db2connect.pickle, a Jupyter session file) that will be used for the first connection to the database. Whenever a new connection is made, the file is updated with the save_settings() function.
###Code
def load_settings():
# This routine will load the settings from the previous session if they exist
global _settings
fname = "db2connect.pickle"
try:
with open(fname,'rb') as f:
_settings = pickle.load(f)
# Reset runtime to 1 since it would be unexpected to keep the same value between connections
_settings["runtime"] = 1
_settings["maxgrid"] = 5
except:
pass
return
def save_settings():
# This routine will save the current settings if they exist
global _settings
fname = "db2connect.pickle"
try:
with open(fname,'wb') as f:
pickle.dump(_settings,f)
except:
errormsg("Failed trying to write Db2 Configuration Information.")
return
###Output
_____no_output_____
###Markdown
Error and Message FunctionsThere are three types of messages that are thrown by the %sql magic command. The first routine will print out a success message with no special formatting:```success(message)```The second message is used for displaying an error message that is not associated with a SQL error. This type of error message is surrounded with a red box to highlight the problem. Note that the success message has code that has been commented out that could also show a successful return code with a green box. ```errormsg(message)```The final error message is based on an error occurring in the SQL code that was executed. This code will parse the message returned from the ibm_db interface and return only the error message portion (and not all of the wrapper code from the driver).```db2_error(quiet,connect=False)```The quiet flag is passed to the db2_error routine so that messages can be suppressed if the user wishes to ignore them with the -q flag. A good example of this is dropping a table that does not exist. We know that an error will be thrown so we can ignore it. The information that the db2_error routine gets is from the stmt_errormsg() function from within the ibm_db driver. The db2_error function should only be called after a SQL failure otherwise there will be no diagnostic information returned from stmt_errormsg().If the connect flag is True, the routine will get the SQLSTATE and SQLCODE from the connection error message rather than a statement error message.
###Code
def db2_error(quiet,connect=False):
global sqlerror, sqlcode, sqlstate, _environment
try:
if (connect == False):
errmsg = ibm_db.stmt_errormsg().replace('\r',' ')
errmsg = errmsg[errmsg.rfind("]")+1:].strip()
else:
errmsg = ibm_db.conn_errormsg().replace('\r',' ')
errmsg = errmsg[errmsg.rfind("]")+1:].strip()
sqlerror = errmsg
msg_start = errmsg.find("SQLSTATE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlstate = errmsg[msg_start+9:msg_end]
else:
sqlstate = "0"
msg_start = errmsg.find("SQLCODE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlcode = errmsg[msg_start+8:msg_end]
try:
sqlcode = int(sqlcode)
except:
pass
else:
sqlcode = 0
except:
errmsg = "Unknown error."
sqlcode = -99999
sqlstate = "-99999"
sqlerror = errmsg
return
msg_start = errmsg.find("SQLSTATE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlstate = errmsg[msg_start+9:msg_end]
else:
sqlstate = "0"
msg_start = errmsg.find("SQLCODE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlcode = errmsg[msg_start+8:msg_end]
try:
sqlcode = int(sqlcode)
except:
pass
else:
sqlcode = 0
if quiet == True: return
if (errmsg == ""): return
html = '<p><p style="border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html+errmsg+"</p>"))
else:
print(errmsg)
# Print out an error message
def errormsg(message):
global _environment
if (message != ""):
html = '<p><p style="border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html + message + "</p>"))
else:
print(message)
def success(message):
if (message != ""):
print(message)
return
def debug(message,error=False):
global _environment
if (_environment["jupyter"] == True):
spacer = "<br>" + " "
else:
spacer = "\n "
if (message != ""):
lines = message.split('\n')
msg = ""
indent = 0
for line in lines:
delta = line.count("(") - line.count(")")
if (msg == ""):
msg = line
indent = indent + delta
else:
if (delta < 0): indent = indent + delta
msg = msg + spacer * (indent*2) + line
if (delta > 0): indent = indent + delta
if (indent < 0): indent = 0
if (error == True):
html = '<p><pre style="font-family: monospace; border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
else:
html = '<p><pre style="font-family: monospace; border:2px; border-style:solid; border-color:#008000; background-color:#e6ffe6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html + msg + "</pre></p>"))
else:
print(msg)
return
###Output
_____no_output_____
###Markdown
Macro ProcessorA macro is used to generate SQL to be executed by overriding or creating a new keyword. For instance, the base `%sql` command does not understand the `LIST TABLES` command which is usually used in conjunction with the `CLP` processor. Rather than specifically code this in the base `db2.ipynb` file, we can create a macro that can execute this code for us.There are four routines that deal with macros. - checkMacro is used to find the macro calls in a string. All macros are sent to runMacro for processing.- runMacro will evaluate the macro and return the string to the parser- subvars is used to track the variables used as part of a macro call.- setMacro is used to catalog a macro Set MacroThis code will catalog a macro call.
###Code
def setMacro(inSQL,parms):
global _macros
names = parms.split()
if (len(names) < 2):
errormsg("No command name supplied.")
return None
macroName = names[1].upper()
_macros[macroName] = inSQL
return
###Output
_____no_output_____
###Markdown
Check MacroThis code will check to see if there is a macro command in the SQL. It will take the SQL that is supplied and strip out three values: the first and second keywords, and the remainder of the parameters.For instance, consider the following statement:```CREATE DATABASE GEORGE options....```The name of the macro that we want to run is called `CREATE`. We know that there is a SQL command called `CREATE` but this code will call the macro first to see if it needs to run any special code. For instance, `CREATE DATABASE` is not part of the `db2.ipynb` syntax, but we can add it in by using a macro.The check macro logic will strip out the subcommand (`DATABASE`) and place the remainder of the string after `DATABASE` in options.
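As a rough illustration (a sketch only, assuming the macro routines setMacro, checkMacro, runMacro, parseArgs and subvars defined in the cells below have been run; the macro name TOP and its SQL are made up for this example), a trivial macro can be registered and expanded like this:
```
# Register a one-line macro called TOP that takes a single argument ({1})
setMacro("SELECT * FROM SYSCAT.TABLES FETCH FIRST {1} ROWS ONLY", "define TOP")
# checkMacro spots the TOP keyword and expands it through runMacro/subvars
print(checkMacro("TOP 5"))
# Expected: SELECT * FROM SYSCAT.TABLES FETCH FIRST 5 ROWS ONLY
```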
###Code
def checkMacro(in_sql):
global _macros
if (len(in_sql) == 0): return(in_sql) # Nothing to do
tokens = parseArgs(in_sql,None) # Take the string and reduce into tokens
macro_name = tokens[0].upper() # Uppercase the name of the token
if (macro_name not in _macros):
return(in_sql) # No macro by this name so just return the string
result = runMacro(_macros[macro_name],in_sql,tokens) # Execute the macro using the tokens we found
return(result) # Runmacro will either return the original SQL or the new one
###Output
_____no_output_____
###Markdown
Parse Call ArgumentsThis code will parse a SQL call name(parm1,...) and return the name and the parameters in the call.
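For example (a sketch only, assuming this cell and the splitassign cell below have been run; the procedure name is made up):
```
# Parse a CALL-style argument list into the routine name and a list of [name, value] pairs
print(parseCallArgs("SHOWEMP(empno='000010',2)"))
# Expected: ('SHOWEMP', [['empno', '000010'], ['null', '2']])
```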
###Code
def parseCallArgs(macro):
quoteChar = ""
inQuote = False
inParm = False
name = ""
parms = []
parm = ''
sql = macro
for ch in macro:
if (inParm == False):
if (ch in ["("," ","\n"]):
inParm = True
else:
name = name + ch
else:
if (inQuote == True):
if (ch == quoteChar):
inQuote = False
#if (quoteChar == "]"):
# parm = parm + "'"
else:
parm = parm + ch
elif (ch in ("\"","\'","[")): # Do we have a quote
if (ch == "["):
# parm = parm + "'"
quoteChar = "]"
else:
quoteChar = ch
inQuote = True
elif (ch == ")"):
if (parm != ""):
parm_name, parm_value = splitassign(parm)
parms.append([parm_name,parm_value])
parm = ""
break
elif (ch == ","):
if (parm != ""):
parm_name, parm_value = splitassign(parm)
parms.append([parm_name,parm_value])
else:
parms.append(["null","null"])
parm = ""
else:
parm = parm + ch
if (inParm == True):
if (parm != ""):
parm_name, parm_value = splitassign(parm)
parms.append([parm_name,parm_value])
return(name,parms)
###Output
_____no_output_____
###Markdown
Split AssignmentThis routine will return the name of a variable and its value when the format is x=y. If y is enclosed in quotes, the quotes are removed.
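A few illustrative calls (a minimal sketch, assuming the cell below has been run):
```
print(splitassign("maxrows=10"))      # Expected: ('maxrows', '10')
print(splitassign("name='George'"))   # Expected: ('name', 'George') - quotes removed
print(splitassign("EMPLOYEE"))        # Expected: ('null', 'EMPLOYEE') - no assignment present
```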
###Code
def splitassign(arg):
var_name = "null"
var_value = "null"
arg = arg.strip()
eq = arg.find("=")
if (eq != -1):
var_name = arg[:eq].strip()
temp_value = arg[eq+1:].strip()
if (temp_value != ""):
ch = temp_value[0]
if (ch in ["'",'"']):
if (temp_value[-1:] == ch):
var_value = temp_value[1:-1]
else:
var_value = temp_value
else:
var_value = temp_value
else:
var_value = arg
return var_name, var_value
###Output
_____no_output_____
###Markdown
Parse Args The commands that are used in the macros need to be parsed into their separate tokens. The tokens are separated by blanks and strings that are enclosed in quotes are kept together.
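For example (a sketch assuming the cell below has been run; passing None for the variable dictionary skips any substitution):
```
# Quoted strings stay together as a single token, quotes included
print(parseArgs("LIST TABLES FOR SCHEMA 'MY SCHEMA'", None))
# Expected: ['LIST', 'TABLES', 'FOR', 'SCHEMA', "'MY SCHEMA'"]
```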
###Code
def parseArgs(argin,_vars):
quoteChar = ""
inQuote = False
inArg = True
args = []
arg = ''
for ch in argin.lstrip():
if (inQuote == True):
if (ch == quoteChar):
inQuote = False
arg = arg + ch #z
else:
arg = arg + ch
elif (ch == "\"" or ch == "\'"): # Do we have a quote
quoteChar = ch
arg = arg + ch #z
inQuote = True
elif (ch == " "):
if (arg != ""):
arg = subvars(arg,_vars)
args.append(arg)
else:
args.append("null")
arg = ""
else:
arg = arg + ch
if (arg != ""):
arg = subvars(arg,_vars)
args.append(arg)
return(args)
###Output
_____no_output_____
###Markdown
Run MacroThis code will execute the body of the macro and return the results for that macro call.
###Code
def runMacro(script,in_sql,tokens):
result = ""
runIT = True
code = script.split("\n")
level = 0
runlevel = [True,False,False,False,False,False,False,False,False,False]
ifcount = 0
_vars = {}
for i in range(0,len(tokens)):
vstr = str(i)
_vars[vstr] = tokens[i]
if (len(tokens) == 0):
_vars["argc"] = "0"
else:
_vars["argc"] = str(len(tokens)-1)
for line in code:
line = line.strip()
if (line == "" or line == "\n"): continue
if (line[0] == "#"): continue # A comment line starts with a # in the first position of the line
args = parseArgs(line,_vars) # Get all of the arguments
if (args[0] == "if"):
ifcount = ifcount + 1
if (runlevel[level] == False): # You can't execute this statement
continue
level = level + 1
if (len(args) < 4):
print("Macro: Incorrect number of arguments for the if clause.")
return in_sql
arg1 = args[1]
arg2 = args[3]
if (len(arg2) > 2):
ch1 = arg2[0]
ch2 = arg2[-1:]
if (ch1 in ['"',"'"] and ch1 == ch2):
arg2 = arg2[1:-1].strip()
op = args[2]
if (op in ["=","=="]):
if (arg1 == arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<=","=<"]):
if (arg1 <= arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in [">=","=>"]):
if (arg1 >= arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<>","!="]):
if (arg1 != arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<"]):
if (arg1 < arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in [">"]):
if (arg1 > arg2):
runlevel[level] = True
else:
runlevel[level] = False
else:
print("Macro: Unknown comparison operator in the if statement:" + op)
continue
elif (args[0] in ["exit","echo"] and runlevel[level] == True):
msg = ""
for msgline in args[1:]:
if (msg == ""):
msg = subvars(msgline,_vars)
else:
msg = msg + " " + subvars(msgline,_vars)
if (msg != ""):
if (args[0] == "echo"):
debug(msg,error=False)
else:
debug(msg,error=True)
if (args[0] == "exit"): return ''
elif (args[0] == "pass" and runlevel[level] == True):
pass
elif (args[0] == "var" and runlevel[level] == True):
value = ""
for val in args[2:]:
if (value == ""):
value = subvars(val,_vars)
else:
value = value + " " + subvars(val,_vars)
value = value.strip()
_vars[args[1]] = value
elif (args[0] == 'else'):
if (ifcount == level):
runlevel[level] = not runlevel[level]
elif (args[0] == 'return' and runlevel[level] == True):
return(result)
elif (args[0] == "endif"):
ifcount = ifcount - 1
if (ifcount < level):
level = level - 1
if (level < 0):
print("Macro: Unmatched if/endif pairs.")
return ''
else:
if (runlevel[level] == True):
if (result == ""):
result = subvars(line,_vars)
else:
result = result + "\n" + subvars(line,_vars)
return(result)
###Output
_____no_output_____
###Markdown
Substitute VarsThis routine is used by the runMacro program to substitute the contents of variables that are used within Macros. These variables are kept separate from the rest of the code.
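An illustrative call (a sketch assuming the cell below has been run; the variable dictionary here is hand-built, whereas runMacro normally supplies it):
```
# {schema} is replaced as-is, {^table} is replaced and upper-cased
print(subvars("SELECT * FROM {schema}.{^table}", {"schema": "db2inst1", "table": "employee"}))
# Expected: SELECT * FROM db2inst1.EMPLOYEE
```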
###Code
def subvars(script,_vars):
if (_vars == None): return script
remainder = script
result = ""
done = False
while done == False:
bv = remainder.find("{")
if (bv == -1):
done = True
continue
ev = remainder.find("}")
if (ev == -1):
done = True
continue
result = result + remainder[:bv]
vvar = remainder[bv+1:ev]
remainder = remainder[ev+1:]
upper = False
allvars = False
if (vvar[0] == "^"):
upper = True
vvar = vvar[1:]
elif (vvar[0] == "*"):
vvar = vvar[1:]
allvars = True
else:
pass
if (vvar in _vars):
if (upper == True):
items = _vars[vvar].upper()
elif (allvars == True):
try:
iVar = int(vvar)
except:
return(script)
items = ""
sVar = str(iVar)
while sVar in _vars:
if (items == ""):
items = _vars[sVar]
else:
items = items + " " + _vars[sVar]
iVar = iVar + 1
sVar = str(iVar)
else:
items = _vars[vvar]
else:
if (allvars == True):
items = ""
else:
items = "null"
result = result + items
if (remainder != ""):
result = result + remainder
return(result)
###Output
_____no_output_____
###Markdown
SQL TimerThe calling format of this routine is:```count = sqlTimer(hdbc, runtime, inSQL)```This code runs the SQL string multiple times for one second (by default). The accuracy of the clock is not that great when you are running just one statement, so instead this routine will run the code multiple times for a second to give you an execution count. If you need to run the code for more than one second, the runtime value needs to be set to the number of seconds you want the code to run.The return result is always the number of times that the code executed. Note that the program will skip reading the data if it is a SELECT statement so it doesn't include fetch time for the answer set.
###Code
def sqlTimer(hdbc, runtime, inSQL):
count = 0
t_end = time.time() + runtime
while time.time() < t_end:
try:
stmt = ibm_db.exec_immediate(hdbc,inSQL)
if (stmt == False):
db2_error(flag(["-q","-quiet"]))
return(-1)
ibm_db.free_result(stmt)
except Exception as err:
db2_error(False)
return(-1)
count = count + 1
return(count)
###Output
_____no_output_____
###Markdown
Split ArgsThis routine takes as an argument a string and then splits the arguments according to the following logic:* If the string starts with a `(` character, it will check the last character in the string and see if it is a `)` and then remove those characters* Every parameter is separated by a comma `,` and commas within quotes are ignored* Each parameter returned will have three values returned - one for the value itself, an indicator which will be either True if it was quoted, or False if not, and True or False if it is numeric.Example:``` "abcdef",abcdef,456,"856"```Four values would be returned:```[abcdef,True,False],[abcdef,False,False],[456,False,True],[856,True,False]```Any quoted string will be False for numeric. The way that the parameters are handled is up to the calling program. However, in the case of Db2, the quoted strings must be in single quotes so any quoted parameter using the double quotes `"` must be wrapped with single quotes. There is always a possibility that a string contains single quotes (i.e. O'Connor) so any substituted text should use `''` so that Db2 can properly interpret the string. This routine does not adjust the strings with quotes, and depends on the variable substitution routine to do that.
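The example above can be reproduced directly once the cell below has been run (a minimal sketch; note that numeric values come back as Python numbers):
```
print(splitargs('"abcdef",abcdef,456,"856"'))
# Expected: [['abcdef', True, False], ['abcdef', False, False], [456, False, True], ['856', True, False]]
```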
###Code
def splitargs(arguments):
import types
# Strip the string and remove the ( and ) characters if they are at the beginning and end of the string
results = []
step1 = arguments.strip()
if (len(step1) == 0): return(results) # Not much to do here - no args found
if (step1[0] == '('):
if (step1[-1:] == ')'):
step2 = step1[1:-1]
step2 = step2.strip()
else:
step2 = step1
else:
step2 = step1
# Now we have a string without brackets. Start scanning for commas
quoteCH = ""
pos = 0
arg = ""
args = []
while pos < len(step2):
ch = step2[pos]
if (quoteCH == ""): # Are we in a quote?
if (ch in ('"',"'")): # Check to see if we are starting a quote
quoteCH = ch
arg = arg + ch
pos += 1
elif (ch == ","): # Are we at the end of a parameter?
arg = arg.strip()
args.append(arg)
arg = ""
inarg = False
pos += 1
else: # Continue collecting the string
arg = arg + ch
pos += 1
else:
if (ch == quoteCH): # Are we at the end of a quote?
arg = arg + ch # Add the quote to the string
pos += 1 # Increment past the quote
quoteCH = "" # Stop quote checking (maybe!)
else:
pos += 1
arg = arg + ch
if (quoteCH != ""): # So we didn't end our string
arg = arg.strip()
args.append(arg)
elif (arg != ""): # Something left over as an argument
arg = arg.strip()
args.append(arg)
else:
pass
results = []
for arg in args:
result = []
if (len(arg) > 0):
if (arg[0] in ('"',"'")):
value = arg[1:-1]
isString = True
isNumber = False
else:
isString = False
isNumber = False
try:
value = eval(arg)
if (type(value) == int):
isNumber = True
elif (isinstance(value,float) == True):
isNumber = True
else:
value = arg
except:
value = arg
else:
value = ""
isString = False
isNumber = False
result = [value,isString,isNumber]
results.append(result)
return results
###Output
_____no_output_____
###Markdown
SQL ParserThe calling format of this routine is:```sql_cmd, encoded_sql = sqlParser(sql_input, local_ns)```This code will look at the SQL string that has been passed to it and parse it into two values:- sql_cmd: First command in the string (so this may not be the actual SQL command)- encoded_sql: SQL with any :variable references replaced by the contents of the corresponding Python variables
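A small sketch of the variable substitution (assuming this cell and the getContents/addquotes cells below have been run; empno is just an example Python variable):
```
empno = "000010"
print(sqlParser("SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO = :empno", locals()))
# Expected: ('SELECT', "SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO = '000010'")
```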
###Code
def sqlParser(sqlin,local_ns):
sql_cmd = ""
encoded_sql = sqlin
firstCommand = r"(?:^\s*)([a-zA-Z]+)(?:\s+.*|$)"
findFirst = re.match(firstCommand,sqlin)
if (findFirst == None): # We did not find a match so we just return the empty string
return sql_cmd, encoded_sql
cmd = findFirst.group(1)
sql_cmd = cmd.upper()
#
# Scan the input string looking for variables in the format :var. If no : is found just return.
# Var must be alpha+number+_ to be valid
#
if (':' not in sqlin): # A quick check to see if parameters are in here, but not fool-proof!
return sql_cmd, encoded_sql
inVar = False
inQuote = ""
varName = ""
encoded_sql = ""
STRING = 0
NUMBER = 1
LIST = 2
RAW = 3
for ch in sqlin:
if (inVar == True): # We are collecting the name of a variable
if (ch.upper() in "@_ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789[]"):
varName = varName + ch
continue
else:
if (varName == ""):
encoded_sql = encoded_sql + ":"
elif (varName[0] in ('[',']')):
encoded_sql = encoded_sql + ":" + varName
else:
if (ch == '.'): # If the variable name is stopped by a period, assume no quotes are used
flag_quotes = False
else:
flag_quotes = True
varValue, varType = getContents(varName,flag_quotes,local_ns)
if (varValue == None):
encoded_sql = encoded_sql + ":" + varName
else:
if (varType == STRING):
encoded_sql = encoded_sql + varValue
elif (varType == NUMBER):
encoded_sql = encoded_sql + str(varValue)
elif (varType == RAW):
encoded_sql = encoded_sql + varValue
elif (varType == LIST):
start = True
for v in varValue:
if (start == False):
encoded_sql = encoded_sql + ","
if (isinstance(v,int) == True): # Integer value
encoded_sql = encoded_sql + str(v)
elif (isinstance(v,float) == True):
encoded_sql = encoded_sql + str(v)
else:
flag_quotes = True
try:
if (v.find('0x') == 0): # Just guessing this is a hex value at beginning
encoded_sql = encoded_sql + v
else:
encoded_sql = encoded_sql + addquotes(v,flag_quotes) # String
except:
encoded_sql = encoded_sql + addquotes(str(v),flag_quotes)
start = False
encoded_sql = encoded_sql + ch
varName = ""
inVar = False
elif (inQuote != ""):
encoded_sql = encoded_sql + ch
if (ch == inQuote): inQuote = ""
elif (ch in ("'",'"')):
encoded_sql = encoded_sql + ch
inQuote = ch
elif (ch == ":"): # This might be a variable
varName = ""
inVar = True
else:
encoded_sql = encoded_sql + ch
if (inVar == True):
varValue, varType = getContents(varName,True,local_ns) # We assume the end of a line is quoted
if (varValue == None):
encoded_sql = encoded_sql + ":" + varName
else:
if (varType == STRING):
encoded_sql = encoded_sql + varValue
elif (varType == NUMBER):
encoded_sql = encoded_sql + str(varValue)
elif (varType == LIST):
flag_quotes = True
start = True
for v in varValue:
if (start == False):
encoded_sql = encoded_sql + ","
if (isinstance(v,int) == True): # Integer value
encoded_sql = encoded_sql + str(v)
elif (isinstance(v,float) == True):
encoded_sql = encoded_sql + str(v)
else:
try:
if (v.find('0x') == 0): # Just guessing this is a hex value
encoded_sql = encoded_sql + v
else:
encoded_sql = encoded_sql + addquotes(v,flag_quotes) # String
except:
encoded_sql = encoded_sql + addquotes(str(v),flag_quotes)
start = False
return sql_cmd, encoded_sql
###Output
_____no_output_____
###Markdown
Variable Contents FunctionThe calling format of this routine is:```value, var_type = getContents(varName, flag_quotes, name_space)```This code will take the name of a variable as input and return the contents of that variable along with an indicator of its type. If the variable is not found then the program will return None which is equivalent to empty or null. Note that this function looks at the global variable pool for Python so it is possible that the wrong version of a variable is returned if it is used in different functions. For this reason, any variables used in SQL statements should use a unique naming convention if possible.The other thing that this function does is replace single quotes with two quotes. The reason for doing this is that Db2 will convert two single quotes into one quote when dealing with strings. This avoids problems when dealing with text that contains multiple quotes within the string. Note that this substitution is done only for single quote characters since the double quote character is used by Db2 for naming columns that are case sensitive or contain special characters.If the flag_quotes value is True, the field will have quotes around it. The name_space contains the variables that are currently registered in Python.
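For illustration (a sketch assuming this cell and the addquotes cell below have been run; the variable names are made up):
```
deptno = "E21"
maxrows = 5
print(getContents("deptno", True, locals()))    # Expected: ("'E21'", 0) - quoted string
print(getContents("maxrows", True, locals()))   # Expected: (5, 1)       - number
```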
###Code
def getContents(varName,flag_quotes,local_ns):
#
# Get the contents of the variable name that is passed to the routine. Only simple
# variables are checked, i.e. arrays and lists are not parsed
#
STRING = 0
NUMBER = 1
LIST = 2
RAW = 3
DICT = 4
try:
value = eval(varName,None,local_ns) # globals()[varName] # eval(varName)
except:
return(None,STRING)
if (isinstance(value,dict) == True): # Check to see if this is JSON dictionary
return(addquotes(value,flag_quotes),STRING)
elif(isinstance(value,list) == True): # List - tricky
return(value,LIST)
elif (isinstance(value,int) == True): # Integer value
return(value,NUMBER)
elif (isinstance(value,float) == True): # Float value
return(value,NUMBER)
else:
try:
# The pattern needs to be in the first position (0 in Python terms)
if (value.find('0x') == 0): # Just guessing this is a hex value
return(value,RAW)
else:
return(addquotes(value,flag_quotes),STRING) # String
except:
return(addquotes(str(value),flag_quotes),RAW)
###Output
_____no_output_____
###Markdown
Add QuotesQuotes are a challenge when dealing with dictionaries and Db2. Db2 wants strings delimited with single quotes, while Dictionaries use double quotes. That wouldn't be a problem except that embedded single quotes within these dictionaries will cause things to fail. This routine doubles up the single quotes within the dictionary and wraps the result in single quotes.
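Two illustrative calls (a sketch assuming the cell below has been run; json is already imported earlier in this notebook):
```
print(addquotes("O'Connor", True))           # Expected: 'O''Connor'
print(addquotes({"empno": "000010"}, True))  # Expected: '{"empno": "000010"}'
```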
###Code
def addquotes(inString,flag_quotes):
if (isinstance(inString,dict) == True): # Check to see if this is JSON dictionary
serialized = json.dumps(inString)
else:
serialized = inString
# Replace single quotes with '' (two quotes) and wrap everything in single quotes
if (flag_quotes == False):
return(serialized)
else:
return("'"+serialized.replace("'","''")+"'") # Convert single quotes to two single quotes
###Output
_____no_output_____
###Markdown
Create the SAMPLE Database TablesThe calling format of this routine is:```db2_create_sample(quiet)```There are a lot of examples that depend on the data within the SAMPLE database. If you are running these examples and the connection is not to the SAMPLE database, then this code will create the two (EMPLOYEE, DEPARTMENT) tables that are used by most examples. If the function finds that these tables already exist, then nothing is done. If the tables are missing then they will be created with the same data as in the SAMPLE database.The quiet flag tells the program not to print any messages when the creation of the tables is complete.
###Code
def db2_create_sample(quiet):
create_department = """
BEGIN
DECLARE FOUND INTEGER;
SET FOUND = (SELECT COUNT(*) FROM SYSIBM.SYSTABLES WHERE NAME='DEPARTMENT' AND CREATOR=CURRENT USER);
IF FOUND = 0 THEN
EXECUTE IMMEDIATE('CREATE TABLE DEPARTMENT(DEPTNO CHAR(3) NOT NULL, DEPTNAME VARCHAR(36) NOT NULL,
MGRNO CHAR(6),ADMRDEPT CHAR(3) NOT NULL)');
EXECUTE IMMEDIATE('INSERT INTO DEPARTMENT VALUES
(''A00'',''SPIFFY COMPUTER SERVICE DIV.'',''000010'',''A00''),
(''B01'',''PLANNING'',''000020'',''A00''),
(''C01'',''INFORMATION CENTER'',''000030'',''A00''),
(''D01'',''DEVELOPMENT CENTER'',NULL,''A00''),
(''D11'',''MANUFACTURING SYSTEMS'',''000060'',''D01''),
(''D21'',''ADMINISTRATION SYSTEMS'',''000070'',''D01''),
(''E01'',''SUPPORT SERVICES'',''000050'',''A00''),
(''E11'',''OPERATIONS'',''000090'',''E01''),
(''E21'',''SOFTWARE SUPPORT'',''000100'',''E01''),
(''F22'',''BRANCH OFFICE F2'',NULL,''E01''),
(''G22'',''BRANCH OFFICE G2'',NULL,''E01''),
(''H22'',''BRANCH OFFICE H2'',NULL,''E01''),
(''I22'',''BRANCH OFFICE I2'',NULL,''E01''),
(''J22'',''BRANCH OFFICE J2'',NULL,''E01'')');
END IF;
END"""
%sql -d -q {create_department}
create_employee = """
BEGIN
DECLARE FOUND INTEGER;
SET FOUND = (SELECT COUNT(*) FROM SYSIBM.SYSTABLES WHERE NAME='EMPLOYEE' AND CREATOR=CURRENT USER);
IF FOUND = 0 THEN
EXECUTE IMMEDIATE('CREATE TABLE EMPLOYEE(
EMPNO CHAR(6) NOT NULL,
FIRSTNME VARCHAR(12) NOT NULL,
MIDINIT CHAR(1),
LASTNAME VARCHAR(15) NOT NULL,
WORKDEPT CHAR(3),
PHONENO CHAR(4),
HIREDATE DATE,
JOB CHAR(8),
EDLEVEL SMALLINT NOT NULL,
SEX CHAR(1),
BIRTHDATE DATE,
SALARY DECIMAL(9,2),
BONUS DECIMAL(9,2),
COMM DECIMAL(9,2)
)');
EXECUTE IMMEDIATE('INSERT INTO EMPLOYEE VALUES
(''000010'',''CHRISTINE'',''I'',''HAAS'' ,''A00'',''3978'',''1995-01-01'',''PRES '',18,''F'',''1963-08-24'',152750.00,1000.00,4220.00),
(''000020'',''MICHAEL'' ,''L'',''THOMPSON'' ,''B01'',''3476'',''2003-10-10'',''MANAGER '',18,''M'',''1978-02-02'',94250.00,800.00,3300.00),
(''000030'',''SALLY'' ,''A'',''KWAN'' ,''C01'',''4738'',''2005-04-05'',''MANAGER '',20,''F'',''1971-05-11'',98250.00,800.00,3060.00),
(''000050'',''JOHN'' ,''B'',''GEYER'' ,''E01'',''6789'',''1979-08-17'',''MANAGER '',16,''M'',''1955-09-15'',80175.00,800.00,3214.00),
(''000060'',''IRVING'' ,''F'',''STERN'' ,''D11'',''6423'',''2003-09-14'',''MANAGER '',16,''M'',''1975-07-07'',72250.00,500.00,2580.00),
(''000070'',''EVA'' ,''D'',''PULASKI'' ,''D21'',''7831'',''2005-09-30'',''MANAGER '',16,''F'',''2003-05-26'',96170.00,700.00,2893.00),
(''000090'',''EILEEN'' ,''W'',''HENDERSON'' ,''E11'',''5498'',''2000-08-15'',''MANAGER '',16,''F'',''1971-05-15'',89750.00,600.00,2380.00),
(''000100'',''THEODORE'' ,''Q'',''SPENSER'' ,''E21'',''0972'',''2000-06-19'',''MANAGER '',14,''M'',''1980-12-18'',86150.00,500.00,2092.00),
(''000110'',''VINCENZO'' ,''G'',''LUCCHESSI'' ,''A00'',''3490'',''1988-05-16'',''SALESREP'',19,''M'',''1959-11-05'',66500.00,900.00,3720.00),
(''000120'',''SEAN'' ,'' '',''O`CONNELL'' ,''A00'',''2167'',''1993-12-05'',''CLERK '',14,''M'',''1972-10-18'',49250.00,600.00,2340.00),
(''000130'',''DELORES'' ,''M'',''QUINTANA'' ,''C01'',''4578'',''2001-07-28'',''ANALYST '',16,''F'',''1955-09-15'',73800.00,500.00,1904.00),
(''000140'',''HEATHER'' ,''A'',''NICHOLLS'' ,''C01'',''1793'',''2006-12-15'',''ANALYST '',18,''F'',''1976-01-19'',68420.00,600.00,2274.00),
(''000150'',''BRUCE'' ,'' '',''ADAMSON'' ,''D11'',''4510'',''2002-02-12'',''DESIGNER'',16,''M'',''1977-05-17'',55280.00,500.00,2022.00),
(''000160'',''ELIZABETH'',''R'',''PIANKA'' ,''D11'',''3782'',''2006-10-11'',''DESIGNER'',17,''F'',''1980-04-12'',62250.00,400.00,1780.00),
(''000170'',''MASATOSHI'',''J'',''YOSHIMURA'' ,''D11'',''2890'',''1999-09-15'',''DESIGNER'',16,''M'',''1981-01-05'',44680.00,500.00,1974.00),
(''000180'',''MARILYN'' ,''S'',''SCOUTTEN'' ,''D11'',''1682'',''2003-07-07'',''DESIGNER'',17,''F'',''1979-02-21'',51340.00,500.00,1707.00),
(''000190'',''JAMES'' ,''H'',''WALKER'' ,''D11'',''2986'',''2004-07-26'',''DESIGNER'',16,''M'',''1982-06-25'',50450.00,400.00,1636.00),
(''000200'',''DAVID'' ,'' '',''BROWN'' ,''D11'',''4501'',''2002-03-03'',''DESIGNER'',16,''M'',''1971-05-29'',57740.00,600.00,2217.00),
(''000210'',''WILLIAM'' ,''T'',''JONES'' ,''D11'',''0942'',''1998-04-11'',''DESIGNER'',17,''M'',''2003-02-23'',68270.00,400.00,1462.00),
(''000220'',''JENNIFER'' ,''K'',''LUTZ'' ,''D11'',''0672'',''1998-08-29'',''DESIGNER'',18,''F'',''1978-03-19'',49840.00,600.00,2387.00),
(''000230'',''JAMES'' ,''J'',''JEFFERSON'' ,''D21'',''2094'',''1996-11-21'',''CLERK '',14,''M'',''1980-05-30'',42180.00,400.00,1774.00),
(''000240'',''SALVATORE'',''M'',''MARINO'' ,''D21'',''3780'',''2004-12-05'',''CLERK '',17,''M'',''2002-03-31'',48760.00,600.00,2301.00),
(''000250'',''DANIEL'' ,''S'',''SMITH'' ,''D21'',''0961'',''1999-10-30'',''CLERK '',15,''M'',''1969-11-12'',49180.00,400.00,1534.00),
(''000260'',''SYBIL'' ,''P'',''JOHNSON'' ,''D21'',''8953'',''2005-09-11'',''CLERK '',16,''F'',''1976-10-05'',47250.00,300.00,1380.00),
(''000270'',''MARIA'' ,''L'',''PEREZ'' ,''D21'',''9001'',''2006-09-30'',''CLERK '',15,''F'',''2003-05-26'',37380.00,500.00,2190.00),
(''000280'',''ETHEL'' ,''R'',''SCHNEIDER'' ,''E11'',''8997'',''1997-03-24'',''OPERATOR'',17,''F'',''1976-03-28'',36250.00,500.00,2100.00),
(''000290'',''JOHN'' ,''R'',''PARKER'' ,''E11'',''4502'',''2006-05-30'',''OPERATOR'',12,''M'',''1985-07-09'',35340.00,300.00,1227.00),
(''000300'',''PHILIP'' ,''X'',''SMITH'' ,''E11'',''2095'',''2002-06-19'',''OPERATOR'',14,''M'',''1976-10-27'',37750.00,400.00,1420.00),
(''000310'',''MAUDE'' ,''F'',''SETRIGHT'' ,''E11'',''3332'',''1994-09-12'',''OPERATOR'',12,''F'',''1961-04-21'',35900.00,300.00,1272.00),
(''000320'',''RAMLAL'' ,''V'',''MEHTA'' ,''E21'',''9990'',''1995-07-07'',''FIELDREP'',16,''M'',''1962-08-11'',39950.00,400.00,1596.00),
(''000330'',''WING'' ,'' '',''LEE'' ,''E21'',''2103'',''2006-02-23'',''FIELDREP'',14,''M'',''1971-07-18'',45370.00,500.00,2030.00),
(''000340'',''JASON'' ,''R'',''GOUNOT'' ,''E21'',''5698'',''1977-05-05'',''FIELDREP'',16,''M'',''1956-05-17'',43840.00,500.00,1907.00),
(''200010'',''DIAN'' ,''J'',''HEMMINGER'' ,''A00'',''3978'',''1995-01-01'',''SALESREP'',18,''F'',''1973-08-14'',46500.00,1000.00,4220.00),
(''200120'',''GREG'' ,'' '',''ORLANDO'' ,''A00'',''2167'',''2002-05-05'',''CLERK '',14,''M'',''1972-10-18'',39250.00,600.00,2340.00),
(''200140'',''KIM'' ,''N'',''NATZ'' ,''C01'',''1793'',''2006-12-15'',''ANALYST '',18,''F'',''1976-01-19'',68420.00,600.00,2274.00),
(''200170'',''KIYOSHI'' ,'' '',''YAMAMOTO'' ,''D11'',''2890'',''2005-09-15'',''DESIGNER'',16,''M'',''1981-01-05'',64680.00,500.00,1974.00),
(''200220'',''REBA'' ,''K'',''JOHN'' ,''D11'',''0672'',''2005-08-29'',''DESIGNER'',18,''F'',''1978-03-19'',69840.00,600.00,2387.00),
(''200240'',''ROBERT'' ,''M'',''MONTEVERDE'',''D21'',''3780'',''2004-12-05'',''CLERK '',17,''M'',''1984-03-31'',37760.00,600.00,2301.00),
(''200280'',''EILEEN'' ,''R'',''SCHWARTZ'' ,''E11'',''8997'',''1997-03-24'',''OPERATOR'',17,''F'',''1966-03-28'',46250.00,500.00,2100.00),
(''200310'',''MICHELLE'' ,''F'',''SPRINGER'' ,''E11'',''3332'',''1994-09-12'',''OPERATOR'',12,''F'',''1961-04-21'',35900.00,300.00,1272.00),
(''200330'',''HELENA'' ,'' '',''WONG'' ,''E21'',''2103'',''2006-02-23'',''FIELDREP'',14,''F'',''1971-07-18'',35370.00,500.00,2030.00),
(''200340'',''ROY'' ,''R'',''ALONZO'' ,''E21'',''5698'',''1997-07-05'',''FIELDREP'',16,''M'',''1956-05-17'',31840.00,500.00,1907.00)');
END IF;
END"""
%sql -d -q {create_employee}
if (quiet == False): success("Sample tables [EMPLOYEE, DEPARTMENT] created.")
###Output
_____no_output_____
###Markdown
Check optionThis function will return the original string with the option removed, and a flag of true or false indicating whether the option was found.```args, flag = checkOption(option_string, option, false_value, true_value)```Options are specified with a -x where x is the character that we are searching for. It may actually be more than one character long like -pb/-pp/etc... The false and true values are optional. By default these are the boolean values of T/F but for some options it could be a character string like ';' versus '@' for delimiters.
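For example (a sketch assuming the cell below has been run):
```
args, quiet = checkOption("-q DROP TABLE TEST", "-q")
print(args, quiet)                                      # Expected: DROP TABLE TEST True
args, delim = checkOption("CREATE TRIGGER ... -d", "-d", ";", "@")
print(delim)                                            # Expected: @
```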
###Code
def checkOption(args_in, option, vFalse=False, vTrue=True):
args_out = args_in.strip()
found = vFalse
if (args_out != ""):
if (args_out.find(option) >= 0):
args_out = args_out.replace(option," ")
args_out = args_out.strip()
found = vTrue
return args_out, found
###Output
_____no_output_____
###Markdown
Plot DataThis function will plot the data that is returned from the answer set. The plot flags supplied on the %sql line determine how we display the data: -pb/-bar for a bar chart, -pp/-pie for a pie chart, and -pl/-line for a line chart.```plotData(hdbi, sql)```The hdbi is the ibm_db_dbi handle that is used by pandas dataframes to run the sql.
###Code
def plotData(hdbi, sql):
try:
df = pandas.read_sql(sql,hdbi)
except Exception as err:
db2_error(False)
return
if df.empty:
errormsg("No results returned")
return
col_count = len(df.columns)
if flag(["-pb","-bar"]): # Plot 1 = bar chart
if (col_count in (1,2,3)):
if (col_count == 1):
df.index = df.index + 1
_ = df.plot(kind='bar');
_ = plt.plot();
elif (col_count == 2):
xlabel = df.columns.values[0]
ylabel = df.columns.values[1]
df.plot(kind='bar',x=xlabel,y=ylabel);
_ = plt.plot();
else:
values = df.columns.values[2]
columns = df.columns.values[0]
index = df.columns.values[1]
pivoted = pandas.pivot_table(df, values=values, columns=columns, index=index)
_ = pivoted.plot.bar();
else:
errormsg("Can't determine what columns to plot")
return
elif flag(["-pp","-pie"]): # Plot 2 = pie chart
if (col_count in (1,2)):
if (col_count == 1):
df.index = df.index + 1
yname = df.columns.values[0]
_ = df.plot(kind='pie',y=yname);
else:
xlabel = df.columns.values[0]
xname = df[xlabel].tolist()
yname = df.columns.values[1]
_ = df.plot(kind='pie',y=yname,labels=xname);
plt.show();
else:
errormsg("Can't determine what columns to plot")
return
elif flag(["-pl","-line"]): # Plot 3 = line chart
if (col_count in (1,2,3)):
if (col_count == 1):
df.index = df.index + 1
_ = df.plot(kind='line');
elif (col_count == 2):
xlabel = df.columns.values[0]
ylabel = df.columns.values[1]
_ = df.plot(kind='line',x=xlabel,y=ylabel) ;
else:
values = df.columns.values[2]
columns = df.columns.values[0]
index = df.columns.values[1]
pivoted = pandas.pivot_table(df, values=values, columns=columns, index=index)
_ = pivoted.plot();
plt.show();
else:
errormsg("Can't determine what columns to plot")
return
else:
return
###Output
_____no_output_____
###Markdown
Find a ProcedureThis routine will check to see if a procedure exists with the SCHEMA/NAME (or just NAME if no schema is supplied) and return the number of answer sets that it produces. Possible values are 0, 1, or None. If None is returned then we can't find the procedure anywhere.
###Code
def findProc(procname):
global _hdbc, _hdbi, _connected, _runtime
# Split the procedure name into schema.procname if appropriate
upper_procname = procname.upper()
schema, proc = split_string(upper_procname,".") # Expect schema.procname
if (proc == None):
proc = schema
# Call ibm_db.procedures to see if the procedure does exist
schema = "%"
try:
stmt = ibm_db.procedures(_hdbc, None, schema, proc)
if (stmt == False): # Error executing the code
errormsg("Procedure " + procname + " not found in the system catalog.")
return None
result = ibm_db.fetch_tuple(stmt)
resultsets = result[5]
if (resultsets >= 1): resultsets = 1
return resultsets
except Exception as err:
errormsg("Procedure " + procname + " not found in the system catalog.")
return None
###Output
_____no_output_____
###Markdown
Get ColumnsGiven a statement handle, determine the column names and their data types.
###Code
def getColumns(stmt):
columns = []
types = []
colcount = 0
try:
colname = ibm_db.field_name(stmt,colcount)
coltype = ibm_db.field_type(stmt,colcount)
while (colname != False):
columns.append(colname)
types.append(coltype)
colcount += 1
colname = ibm_db.field_name(stmt,colcount)
coltype = ibm_db.field_type(stmt,colcount)
return columns,types
except Exception as err:
db2_error(False)
return None
###Output
_____no_output_____
###Markdown
Call a ProcedureThe CALL statement is used for execution of a stored procedure. The format of the CALL statement is:```CALL PROC_NAME(x,y,z,...)```Procedures allow for the return of answer sets (cursors) as well as changing the contents of the parameters being passed to the procedure. In this implementation, the CALL function is limited to returning one answer set (or nothing). If you want to use more complex stored procedures then you will have to use the native python libraries.
###Code
def parseCall(hdbc, inSQL, local_ns):
global _hdbc, _hdbi, _connected, _runtime, _environment
# Check to see if we are connected first
if (_connected == False): # Check if you are connected
db2_doConnect()
if _connected == False: return None
remainder = inSQL.strip()
procName, procArgs = parseCallArgs(remainder[5:]) # Assume that CALL ... is the format
resultsets = findProc(procName)
if (resultsets == None): return None
argvalues = []
if (len(procArgs) > 0): # We have arguments to consider
for arg in procArgs:
varname = arg[1]
if (len(varname) > 0):
if (varname[0] == ":"):
checkvar = varname[1:]
varvalue, vartype = getContents(checkvar,False,local_ns) # getContents returns (value, type); use the unquoted value as the parameter
if (varvalue == None):
errormsg("Variable " + checkvar + " is not defined.")
return None
argvalues.append(varvalue)
else:
if (varname.upper() == "NULL"):
argvalues.append(None)
else:
argvalues.append(varname)
else:
if (varname.upper() == "NULL"):
argvalues.append(None)
else:
argvalues.append(varname)
try:
if (len(procArgs) > 0):
argtuple = tuple(argvalues)
result = ibm_db.callproc(_hdbc,procName,argtuple)
stmt = result[0]
else:
result = ibm_db.callproc(_hdbc,procName)
stmt = result
if (resultsets == 1 and stmt != None):
columns, types = getColumns(stmt)
if (columns == None): return None
rows = []
rowlist = ibm_db.fetch_tuple(stmt)
while ( rowlist ) :
row = []
colcount = 0
for col in rowlist:
try:
if (types[colcount] in ["int","bigint"]):
row.append(int(col))
elif (types[colcount] in ["decimal","real"]):
row.append(float(col))
elif (types[colcount] in ["date","time","timestamp"]):
row.append(str(col))
else:
row.append(col)
except:
row.append(col)
colcount += 1
rows.append(row)
rowlist = ibm_db.fetch_tuple(stmt)
if flag(["-r","-array"]):
rows.insert(0,columns)
if len(procArgs) > 0:
allresults = []
allresults.append(rows)
for x in result[1:]:
allresults.append(x)
return allresults # rows,returned_results
else:
return rows
else:
df = pandas.DataFrame.from_records(rows,columns=columns)
if flag("-grid") or _settings['display'] == 'GRID':
if (_environment['qgrid'] == False):
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
pdisplay(df)
else:
try:
pdisplay(qgrid.show_grid(df))
except:
errormsg("Grid cannot be used to display data with duplicate column names. Use option -a or %sql OPTION DISPLAY PANDAS instead.")
return
else:
if flag(["-a","-all"]) or _settings["maxrows"] == -1 : # All of the rows
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
pdisplay(df)
else:
return df
else:
if len(procArgs) > 0:
allresults = []
for x in result[1:]:
allresults.append(x)
return allresults # rows,returned_results
else:
return None
except Exception as err:
db2_error(False)
return None
###Output
_____no_output_____
###Markdown
Parse Prepare/ExecuteThe PREPARE statement is used for repeated execution of a SQL statement. The PREPARE statement has the format:```stmt = PREPARE SELECT EMPNO FROM EMPLOYEE WHERE WORKDEPT=? AND SALARY<?```The SQL statement that you want executed is placed after the PREPARE statement with the location of variables marked with ? (parameter) markers. The variable stmt contains the prepared statement that needs to be passed to the EXECUTE statement. The EXECUTE statement has the format:```EXECUTE :x USING z, y, s ```The first variable (:x) is the name of the variable that you assigned the results of the prepare statement to. The values after the USING clause are substituted into the prepare statement where the ? markers are found. If the values in the USING clause are variable names (z, y, s), a **link** is created to these variables as part of the execute statement. If you use the variable substitution form of variable name (:z, :y, :s), the **contents** of the variable are placed into the USING clause. Normally this would not make much of a difference except when you are dealing with binary strings or JSON strings where the quote characters may cause some problems when substituted into the statement.
###Code
def parsePExec(hdbc, inSQL):
import ibm_db
global _stmt, _stmtID, _stmtSQL, sqlcode
cParms = inSQL.split()
parmCount = len(cParms)
if (parmCount == 0): return(None) # Nothing to do but this shouldn't happen
keyword = cParms[0].upper() # Upper case the keyword
if (keyword == "PREPARE"): # Prepare the following SQL
uSQL = inSQL.upper()
found = uSQL.find("PREPARE")
sql = inSQL[found+7:].strip()
try:
pattern = r"\?\*[0-9]+"
findparm = re.search(pattern,sql)
while findparm != None:
found = findparm.group(0)
count = int(found[2:])
markers = ('?,' * count)[:-1]
sql = sql.replace(found,markers)
findparm = re.search(pattern,sql)
stmt = ibm_db.prepare(hdbc,sql) # Check error code here
if (stmt == False):
db2_error(False)
return(False)
stmttext = str(stmt).strip()
stmtID = stmttext[33:48].strip()
if (stmtID in _stmtID) == False:
_stmt.append(stmt) # Prepare and return STMT to caller
_stmtID.append(stmtID)
else:
stmtIX = _stmtID.index(stmtID)
_stmt[stmtIX] = stmt
return(stmtID)
except Exception as err:
print(err)
db2_error(False)
return(False)
if (keyword == "EXECUTE"): # Execute the prepare statement
if (parmCount < 2): return(False) # No stmtID available
stmtID = cParms[1].strip()
if (stmtID in _stmtID) == False:
errormsg("Prepared statement not found or invalid.")
return(False)
stmtIX = _stmtID.index(stmtID)
stmt = _stmt[stmtIX]
try:
if (parmCount == 2): # Only the statement handle available
result = ibm_db.execute(stmt) # Run it
elif (parmCount == 3): # Not quite enough arguments
errormsg("Missing or invalid USING clause on EXECUTE statement.")
sqlcode = -99999
return(False)
else:
using = cParms[2].upper()
if (using != "USING"): # Bad syntax again
errormsg("Missing USING clause on EXECUTE statement.")
sqlcode = -99999
return(False)
uSQL = inSQL.upper()
found = uSQL.find("USING")
parmString = inSQL[found+5:].strip()
parmset = splitargs(parmString)
if (len(parmset) == 0):
errormsg("Missing parameters after the USING clause.")
sqlcode = -99999
return(False)
parms = []
parm_count = 0
CONSTANT = 0
VARIABLE = 1
const = [0]
const_cnt = 0
for v in parmset:
parm_count = parm_count + 1
if (v[1] == True or v[2] == True): # v[1] true if string, v[2] true if num
parm_type = CONSTANT
const_cnt = const_cnt + 1
if (v[2] == True):
if (isinstance(v[0],int) == True): # Integer value
sql_type = ibm_db.SQL_INTEGER
elif (isinstance(v[0],float) == True): # Float value
sql_type = ibm_db.SQL_DOUBLE
else:
sql_type = ibm_db.SQL_INTEGER
else:
sql_type = ibm_db.SQL_CHAR
const.append(v[0])
else:
parm_type = VARIABLE
# See if the variable has a type associated with it varname@type
varset = v[0].split("@")
parm_name = varset[0]
parm_datatype = "char"
# Does the variable exist?
if (parm_name not in globals()):
errormsg("SQL Execute parameter " + parm_name + " not found")
sqlcode = -99999
return(False)
if (len(varset) > 1): # Type provided
parm_datatype = varset[1]
if (parm_datatype == "dec" or parm_datatype == "decimal"):
sql_type = ibm_db.SQL_DOUBLE
elif (parm_datatype == "bin" or parm_datatype == "binary"):
sql_type = ibm_db.SQL_BINARY
elif (parm_datatype == "int" or parm_datatype == "integer"):
sql_type = ibm_db.SQL_INTEGER
else:
sql_type = ibm_db.SQL_CHAR
try:
if (parm_type == VARIABLE):
result = ibm_db.bind_param(stmt, parm_count, globals()[parm_name], ibm_db.SQL_PARAM_INPUT, sql_type)
else:
result = ibm_db.bind_param(stmt, parm_count, const[const_cnt], ibm_db.SQL_PARAM_INPUT, sql_type)
except:
result = False
if (result == False):
errormsg("SQL Bind on variable " + parm_name + " failed.")
sqlcode = -99999
return(False)
result = ibm_db.execute(stmt) # ,tuple(parms))
if (result == False):
errormsg("SQL Execute failed.")
return(False)
if (ibm_db.num_fields(stmt) == 0): return(True) # Command successfully completed
return(fetchResults(stmt))
except Exception as err:
db2_error(False)
return(False)
return(False)
return(False)
###Output
_____no_output_____
###Markdown
Fetch Result SetThis code will take the stmt handle and then produce a result set of rows as either an array (`-r`,`-array`) or as an array of json records (`-json`).
###Code
def fetchResults(stmt):
global sqlcode
rows = []
columns, types = getColumns(stmt)
# By default we assume that the data will be an array
is_array = True
# Check what type of data we want returned - array or json
if (flag(["-r","-array"]) == False):
# See if we want it in JSON format, if not it remains as an array
if (flag("-json") == True):
is_array = False
# Set column names to lowercase for JSON records
if (is_array == False):
columns = [col.lower() for col in columns] # Convert to lowercase for ease of access
# First row of an array has the column names in it
if (is_array == True):
rows.append(columns)
result = ibm_db.fetch_tuple(stmt)
rowcount = 0
while (result):
rowcount += 1
if (is_array == True):
row = []
else:
row = {}
colcount = 0
for col in result:
try:
if (types[colcount] in ["int","bigint"]):
if (is_array == True):
row.append(int(col))
else:
row[columns[colcount]] = int(col)
elif (types[colcount] in ["decimal","real"]):
if (is_array == True):
row.append(float(col))
else:
row[columns[colcount]] = float(col)
elif (types[colcount] in ["date","time","timestamp"]):
if (is_array == True):
row.append(str(col))
else:
row[columns[colcount]] = str(col)
else:
if (is_array == True):
row.append(col)
else:
row[columns[colcount]] = col
except:
if (is_array == True):
row.append(col)
else:
row[columns[colcount]] = col
colcount += 1
rows.append(row)
result = ibm_db.fetch_tuple(stmt)
if (rowcount == 0):
sqlcode = 100
else:
sqlcode = 0
return rows
###Output
_____no_output_____
###Markdown
Parse CommitThere are three possible COMMIT verbs that can be used:- COMMIT [WORK] - Commit the work in progress - The WORK keyword is not checked for- ROLLBACK - Roll back the unit of work- AUTOCOMMIT ON/OFF - Are statements committed on or off?The statement is passed to this routine and then checked.
###Code
def parseCommit(sql):
global _hdbc, _hdbi, _connected, _runtime, _stmt, _stmtID, _stmtSQL
if (_connected == False): return # Nothing to do if we are not connected
cParms = sql.split()
if (len(cParms) == 0): return # Nothing to do but this shouldn't happen
keyword = cParms[0].upper() # Upper case the keyword
if (keyword == "COMMIT"): # Commit the work that was done
try:
result = ibm_db.commit (_hdbc) # Commit the connection
if (len(cParms) > 1):
keyword = cParms[1].upper()
if (keyword == "HOLD"):
return
del _stmt[:]
del _stmtID[:]
except Exception as err:
db2_error(False)
return
if (keyword == "ROLLBACK"): # Rollback the work that was done
try:
result = ibm_db.rollback(_hdbc) # Rollback the connection
del _stmt[:]
del _stmtID[:]
except Exception as err:
db2_error(False)
return
if (keyword == "AUTOCOMMIT"): # Is autocommit on or off
if (len(cParms) > 1):
op = cParms[1].upper() # Need ON or OFF value
else:
return
try:
if (op == "OFF"):
ibm_db.autocommit(_hdbc, False)
elif (op == "ON"):
ibm_db.autocommit (_hdbc, True)
return
except Exception as err:
db2_error(False)
return
return
###Output
_____no_output_____
###Markdown
Set FlagsThis code will take the input SQL block and update the global flag list. The global flag list is just a list of options that are set at the beginning of a code block. The absence of a flag means it is false. If it exists it is true.
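An illustrative call (a sketch assuming the cell below has been run):
```
remaining = setFlags("-a -grid SELECT * FROM EMPLOYEE")
print(remaining)   # Expected: SELECT * FROM EMPLOYEE
print(_flags)      # Expected: ['-a', '-grid']
```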
###Code
def setFlags(inSQL):
global _flags
_flags = [] # Delete all of the current flag settings
pos = 0
end = len(inSQL)-1
inFlag = False
ignore = False
outSQL = ""
flag = ""
while (pos <= end):
ch = inSQL[pos]
if (ignore == True):
outSQL = outSQL + ch
else:
if (inFlag == True):
if (ch != " "):
flag = flag + ch
else:
_flags.append(flag)
inFlag = False
else:
if (ch == "-"):
flag = "-"
inFlag = True
elif (ch == ' '):
outSQL = outSQL + ch
else:
outSQL = outSQL + ch
ignore = True
pos += 1
if (inFlag == True):
_flags.append(flag)
return outSQL
###Output
_____no_output_____
###Markdown
Check to see if flag ExistsThis function determines whether or not a flag exists in the global flag array. Absence of a value means it is false. The parameter can be a single value, or an array of values.
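Continuing the sketch from the Set Flags section above (assuming setFlags("-a -grid SELECT * FROM EMPLOYEE") was just run):
```
print(flag("-grid"))           # Expected: True
print(flag(["-q","-quiet"]))   # Expected: False - neither option was supplied
```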
###Code
def flag(inflag):
global _flags
if isinstance(inflag,list):
for x in inflag:
if (x in _flags):
return True
return False
else:
if (inflag in _flags):
return True
else:
return False
###Output
_____no_output_____
###Markdown
Generate a list of SQL lines based on a delimiterNote that this function will make sure that quotes are properly maintained so that delimiters inside of quoted strings do not cause errors.
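For example (a sketch assuming the cell below has been run; the delimiter inside the quoted string is left alone):
```
stmts = splitSQL("DROP TABLE TEST; CREATE TABLE TEST(I INT); INSERT INTO TEST VALUES('a;b')", ";")
print([s.strip() for s in stmts])
# Expected: ['DROP TABLE TEST', 'CREATE TABLE TEST(I INT)', "INSERT INTO TEST VALUES('a;b')"]
```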
###Code
def splitSQL(inputString, delimiter):
pos = 0
arg = ""
results = []
quoteCH = ""
inSQL = inputString.strip()
if (len(inSQL) == 0): return(results) # Not much to do here - no args found
while pos < len(inSQL):
ch = inSQL[pos]
pos += 1
if (ch in ('"',"'")): # Is this a quote character?
arg = arg + ch # Keep appending the characters to the current arg
if (ch == quoteCH): # Is this the quote character we are in?
quoteCH = ""
elif (quoteCH == ""): # Create the quote
quoteCH = ch
else:
None
elif (quoteCH != ""): # Still in a quote
arg = arg + ch
elif (ch == delimiter): # Is there a delimiter?
results.append(arg)
arg = ""
else:
arg = arg + ch
if (arg != ""):
results.append(arg)
return(results)
###Output
_____no_output_____
###Markdown
Main %sql Magic DefinitionThe main %sql Magic logic is found in this section of code. This code will register the Magic command and allow Jupyter notebooks to interact with Db2 by using this extension.
###Code
@magics_class
class DB2(Magics):
@needs_local_scope
@line_cell_magic
def sql(self, line, cell=None, local_ns=None):
# Before we even get started, check to see if you have connected yet. Without a connection we
# can't do anything. You may have a connection request in the code, so if that is true, we run those,
# otherwise we connect immediately
# If your statement is not a connect, and you haven't connected, we need to do it for you
global _settings, _environment
global _hdbc, _hdbi, _connected, _runtime, sqlstate, sqlerror, sqlcode, sqlelapsed
# If you use %sql (line) we just run the SQL. If you use %%SQL the entire cell is run.
flag_cell = False
flag_output = False
sqlstate = "0"
sqlerror = ""
sqlcode = 0
sqlelapsed = 0
start_time = time.time()
end_time = time.time()
# Macros gets expanded before anything is done
SQL1 = setFlags(line.strip())
SQL1 = checkMacro(SQL1) # Update the SQL if any macros are in there
SQL2 = cell
if flag("-sampledata"): # Check if you only want sample data loaded
if (_connected == False):
if (db2_doConnect() == False):
errormsg('A CONNECT statement must be issued before issuing SQL statements.')
return
db2_create_sample(flag(["-q","-quiet"]))
return
if SQL1 == "?" or flag(["-h","-help"]): # Are you asking for help
sqlhelp()
return
if len(SQL1) == 0 and SQL2 == None: return # Nothing to do here
# Check for help
if SQL1.upper() == "? CONNECT": # Are you asking for help on CONNECT
connected_help()
return
sqlType,remainder = sqlParser(SQL1,local_ns) # What type of command do you have?
if (sqlType == "CONNECT"): # A connect request
parseConnect(SQL1,local_ns)
return
elif (sqlType == "DEFINE"): # Create a macro from the body
result = setMacro(SQL2,remainder)
return
elif (sqlType == "OPTION"):
setOptions(SQL1)
return
elif (sqlType == 'COMMIT' or sqlType == 'ROLLBACK' or sqlType == 'AUTOCOMMIT'):
parseCommit(remainder)
return
elif (sqlType == "PREPARE"):
pstmt = parsePExec(_hdbc, remainder)
return(pstmt)
elif (sqlType == "EXECUTE"):
result = parsePExec(_hdbc, remainder)
return(result)
elif (sqlType == "CALL"):
result = parseCall(_hdbc, remainder, local_ns)
return(result)
else:
pass
sql = SQL1
if (sql == ""): sql = SQL2
if (sql == ""): return # Nothing to do here
if (_connected == False):
if (db2_doConnect() == False):
errormsg('A CONNECT statement must be issued before issuing SQL statements.')
return
if _settings["maxrows"] == -1: # Set the return result size
pandas.reset_option('display.max_rows')
else:
pandas.options.display.max_rows = _settings["maxrows"]
runSQL = re.sub('.*?--.*$',"",sql,flags=re.M)
remainder = runSQL.replace("\n"," ")
if flag(["-d","-delim"]):
sqlLines = splitSQL(remainder,"@")
else:
sqlLines = splitSQL(remainder,";")
flag_cell = True
# For each line figure out if you run it as a command (db2) or select (sql)
for sqlin in sqlLines: # Run each command
sqlin = checkMacro(sqlin) # Update based on any macros
sqlType, sql = sqlParser(sqlin,local_ns) # Parse the SQL
if (sql.strip() == ""): continue
if flag(["-e","-echo"]): debug(sql,False)
if flag("-t"):
cnt = sqlTimer(_hdbc, _settings["runtime"], sql) # Given the sql and parameters, clock the time
if (cnt >= 0): print("Total iterations in %s second(s): %s" % (_settings["runtime"],cnt))
return(cnt)
elif flag(["-pb","-bar","-pp","-pie","-pl","-line"]): # We are plotting some results
plotData(_hdbi, sql) # Plot the data and return
return
else:
try: # See if we have an answer set
stmt = ibm_db.prepare(_hdbc,sql)
if (ibm_db.num_fields(stmt) == 0): # No, so we just execute the code
result = ibm_db.execute(stmt) # Run it
if (result == False): # Error executing the code
db2_error(flag(["-q","-quiet"]))
continue
rowcount = ibm_db.num_rows(stmt)
if (rowcount == 0 and flag(["-q","-quiet"]) == False):
errormsg("No rows found.")
continue # Continue running
elif flag(["-r","-array","-j","-json"]): # raw, json, format json
row_count = 0
resultSet = []
try:
result = ibm_db.execute(stmt) # Run it
if (result == False): # Error executing the code
db2_error(flag(["-q","-quiet"]))
return
if flag("-j"): # JSON single output
row_count = 0
json_results = []
while( ibm_db.fetch_row(stmt) ):
row_count = row_count + 1
jsonVal = ibm_db.result(stmt,0)
jsonDict = json.loads(jsonVal)
json_results.append(jsonDict)
flag_output = True
if (row_count == 0): sqlcode = 100
return(json_results)
else:
return(fetchResults(stmt))
except Exception as err:
db2_error(flag(["-q","-quiet"]))
return
else:
try:
df = pandas.read_sql(sql,_hdbi)
except Exception as err:
db2_error(False)
return
if (len(df) == 0):
sqlcode = 100
if (flag(["-q","-quiet"]) == False):
errormsg("No rows found")
continue
flag_output = True
if flag("-grid") or _settings['display'] == 'GRID': # Check to see if we can display the results
if (_environment['qgrid'] == False):
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
print(df.to_string())
else:
try:
pdisplay(qgrid.show_grid(df))
except:
errormsg("Grid cannot be used to display data with duplicate column names. Use option -a or %sql OPTION DISPLAY PANDAS instead.")
return
else:
if flag(["-a","-all"]) or _settings["maxrows"] == -1 : # All of the rows
pandas.options.display.max_rows = None
pandas.options.display.max_columns = None
return df # print(df.to_string())
else:
pandas.options.display.max_rows = _settings["maxrows"]
pandas.options.display.max_columns = None
return df # pdisplay(df) # print(df.to_string())
except:
db2_error(flag(["-q","-quiet"]))
continue # return
end_time = time.time()
sqlelapsed = end_time - start_time
if (flag_output == False and flag(["-q","-quiet"]) == False): print("Command completed.")
# Register the Magic extension in Jupyter
ip = get_ipython()
ip.register_magics(DB2)
load_settings()
success("Db2 Extensions Loaded.")
###Output
_____no_output_____
###Markdown
Pre-defined MacrosThese macros are used to simulate the LIST TABLES and DESCRIBE commands that are available from within the Db2 command line.
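The following is a hedged usage sketch (not part of the original notebook): once the macro cells below have been defined and a connection exists, these line-magic calls should expand into the catalog query contained in the LIST macro body. The schema name DB2INST1 is only an example.
```python
# Usage sketch - assumes the "%%sql define LIST" cell below has been run
# and that a database connection has already been established.
%sql LIST TABLES
%sql LIST TABLES FOR ALL
%sql LIST TABLES FOR SCHEMA DB2INST1
```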
###Code
%%sql define LIST
#
# The LIST macro is used to list all of the tables in the current schema or for all schemas
#
var syntax Syntax: LIST TABLES [FOR ALL | FOR SCHEMA name]
#
# Only LIST TABLES is supported by this macro
#
if {^1} <> 'TABLES'
exit {syntax}
endif
#
# This SQL is a temporary table that contains the description of the different table types
#
WITH TYPES(TYPE,DESCRIPTION) AS (
VALUES
('A','Alias'),
('G','Created temporary table'),
('H','Hierarchy table'),
('L','Detached table'),
('N','Nickname'),
('S','Materialized query table'),
('T','Table'),
('U','Typed table'),
('V','View'),
('W','Typed view')
)
SELECT TABNAME, TABSCHEMA, T.DESCRIPTION FROM SYSCAT.TABLES S, TYPES T
WHERE T.TYPE = S.TYPE
#
# Case 1: No arguments - LIST TABLES
#
if {argc} == 1
AND OWNER = CURRENT USER
ORDER BY TABNAME, TABSCHEMA
return
endif
#
# Case 2: Need 3 arguments - LIST TABLES FOR ALL
#
if {argc} == 3
if {^2}&{^3} == 'FOR&ALL'
ORDER BY TABNAME, TABSCHEMA
return
endif
exit {syntax}
endif
#
# Case 3: Need FOR SCHEMA something here
#
if {argc} == 4
if {^2}&{^3} == 'FOR&SCHEMA'
AND TABSCHEMA = '{^4}'
ORDER BY TABNAME, TABSCHEMA
return
else
exit {syntax}
endif
endif
#
# Nothing matched - Error
#
exit {syntax}
%%sql define describe
#
# The DESCRIBE command can either use the syntax DESCRIBE TABLE <name> or DESCRIBE TABLE SELECT ...
#
var syntax Syntax: DESCRIBE [TABLE name | SELECT statement]
#
# Check to see what count of variables is... Must be at least 2 items DESCRIBE TABLE x or SELECT x
#
if {argc} < 2
exit {syntax}
endif
CALL ADMIN_CMD('{*0}');
###Output
_____no_output_____
###Markdown
Set the table formatting to left align a table in a cell. By default, tables are centered in a cell. Remove this cell if you don't want to change Jupyter notebook formatting for tables. In addition, we skip this code if you are running in a shell environment rather than a Jupyter notebook
###Code
#%%html
#<style>
# table {margin-left: 0 !important; text-align: left;}
#</style>
###Output
_____no_output_____
###Markdown
DB2 Jupyter Notebook ExtensionsVersion: 2019-10-22 This code is imported as a Jupyter notebook extension in any notebooks you create with DB2 code in it. Place the following line of code in any notebook that you want to use these commands with:%run db2.ipynbThis code defines a Jupyter/Python magic command called `%sql` which allows you to execute DB2 specific calls to the database. There are other packages available for manipulating databases, but this one has been specifically designed for demonstrating a number of the SQL features available in DB2.There are two ways of executing the `%sql` command. A single line SQL statement would use the line format of the magic command:%sql SELECT * FROM EMPLOYEEIf you have a large block of sql then you would place the %%sql command at the beginning of the block and then place the SQL statements into the remainder of the block. Using this form of the `%%sql` statement means that the notebook cell can only contain SQL and no other statements.%%sqlSELECT * FROM EMPLOYEEORDER BY LASTNAMEYou can have multiple lines in the SQL block (`%%sql`). The default SQL delimiter is the semi-colon (`;`).If you have scripts (triggers, procedures, functions) that use the semi-colon as part of the script, you will need to use the `-d` option to change the delimiter to an at "`@`" sign. %%sql -dSELECT * FROM EMPLOYEE@CREATE PROCEDURE ...@The `%sql` command allows most DB2 commands to execute and has a special version of the CONNECT statement. A CONNECT by itself will attempt to reconnect to the database using previously used settings. If it cannot connect, it will prompt the user for additional information. The CONNECT command has the following format:%sql CONNECT TO <database> USER <userid> USING <password | ?> HOST <ip address> PORT <port number>If you use a "`?`" for the password field, the system will prompt you for a password. This avoids typing the password as clear text on the screen. If a connection is not successful, the system will print the error message associated with the connect request.If the connection is successful, the parameters are saved on your system and will be used the next time you run a SQL statement, or when you issue the %sql CONNECT command with no parameters. In addition to the -d option, there are a number of different options that you can specify at the beginning of the SQL: - `-d, -delim` - Change SQL delimiter to "`@`" from "`;`" - `-q, -quiet` - Quiet results - no messages returned from the function - `-r, -array` - Return the result set as an array of values instead of a dataframe - `-t, -time` - Time the following SQL statement and return the number of times it executes in 1 second - `-j` - Format the first character column of the result set as a JSON record - `-json` - Return result set as an array of json records - `-a, -all` - Return all rows in answer set and do not limit display - `-grid` - Display the results in a scrollable grid - `-pb, -bar` - Plot the results as a bar chart - `-pl, -line` - Plot the results as a line chart - `-pp, -pie` - Plot the results as a pie chart - `-e, -echo` - Any macro expansions are displayed in an output box - `-sampledata` - Create and load the EMPLOYEE and DEPARTMENT tablesYou can pass python variables to the `%sql` command by using the `{}` braces with the name of the variable in between. Note that you will need to place proper punctuation around the variable in the event the SQL command requires it.
For instance, the following example will find employee '000010' in the EMPLOYEE table.empno = '000010'%sql SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO='{empno}'The other option is to use parameter markers. What you would need to do is use the name of the variable with a colon in front of it and the program will prepare the statement and then pass the variable to Db2 when the statement is executed. This allows you to create complex strings that might contain quote characters and other special characters and not have to worry about enclosing the string with the correct quotes. Note that you do not place the quotes around the variable even though it is a string.empno = '000020'%sql SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO=:empno Development SQLThe previous set of `%sql` and `%%sql` commands deals with SQL statements and commands that are run in an interactive manner. There is a class of SQL commands that are more suited to a development environment where code is iterated or requires changing input. The commands that are associated with this form of SQL are:- AUTOCOMMIT- COMMIT/ROLLBACK- PREPARE - EXECUTEAutocommit is the default manner in which SQL statements are executed. At the end of the successful completion of a statement, the results are committed to the database. There is no concept of a transaction where multiple DML/DDL statements are considered one transaction. The `AUTOCOMMIT` command allows you to turn autocommit `OFF` or `ON`. This means that the SQL commands run after the `AUTOCOMMIT OFF` command are not committed to the database until a `COMMIT` or `ROLLBACK` command is issued.`COMMIT` (`WORK`) will finalize all of the transactions (`COMMIT`) to the database and `ROLLBACK` will undo all of the changes. If you issue a `SELECT` statement during the execution of your block, the results will reflect all of your changes. If you `ROLLBACK` the transaction, the changes will be lost.`PREPARE` is typically used in a situation where you want to repeatedly execute a SQL statement with different variables without incurring the SQL compilation overhead. For instance:```x = %sql PREPARE SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO=?for y in ['000010','000020','000030']: %sql execute :x using :y````EXECUTE` is used to execute a previously compiled statement. To retrieve the error codes that might be associated with any SQL call, the following variables are updated after every call:* SQLCODE* SQLSTATE* SQLERROR - Full error message retrieved from Db2 Install Db2 Python DriverIf the ibm_db driver is not installed on your system, the subsequent Db2 commands will fail. In order to install the Db2 driver, issue the following command from a Jupyter notebook cell:```!pip install --user ibm_db``` Db2 Jupyter ExtensionsThis section of code has the import statements and global variables defined for the remainder of the functions.
###Code
#
# Set up Jupyter MAGIC commands "sql".
# %sql will return results from a DB2 select statement or execute a DB2 command
#
# IBM 2019: George Baklarz
# Version 2019-10-03
#
from __future__ import print_function
from IPython.display import HTML as pHTML, Image as pImage, display as pdisplay, Javascript as Javascript
from IPython.core.magic import (Magics, magics_class, line_magic,
cell_magic, line_cell_magic, needs_local_scope)
import ibm_db
import pandas
import ibm_db_dbi
import json
import matplotlib
import matplotlib.pyplot as plt
import getpass
import os
import pickle
import time
import sys
import re
import warnings
warnings.filterwarnings("ignore")
# Python Hack for Input between 2 and 3
try:
input = raw_input
except NameError:
pass
_settings = {
"maxrows" : 10,
"maxgrid" : 5,
"runtime" : 1,
"display" : "PANDAS",
"database" : "",
"hostname" : "localhost",
"port" : "50000",
"protocol" : "TCPIP",
"uid" : "DB2INST1",
"pwd" : "password"
}
_environment = {
"jupyter" : True,
"qgrid" : True
}
_display = {
'fullWidthRows': True,
'syncColumnCellResize': True,
'forceFitColumns': False,
'defaultColumnWidth': 150,
'rowHeight': 28,
'enableColumnReorder': False,
'enableTextSelectionOnCells': True,
'editable': False,
'autoEdit': False,
'explicitInitialization': True,
'maxVisibleRows': 5,
'minVisibleRows': 5,
'sortable': True,
'filterable': False,
'highlightSelectedCell': False,
'highlightSelectedRow': True
}
# Connection settings for statements
_connected = False
_hdbc = None
_hdbi = None
_stmt = []
_stmtID = []
_stmtSQL = []
_vars = {}
_macros = {}
_flags = []
_debug = False
# Db2 Error Messages and Codes
sqlcode = 0
sqlstate = "0"
sqlerror = ""
sqlelapsed = 0
# Check to see if QGrid is installed
try:
import qgrid
qgrid.set_defaults(grid_options=_display)
except:
_environment['qgrid'] = False
# Check if we are running in iPython or Jupyter
try:
if (get_ipython().config == {}):
_environment['jupyter'] = False
_environment['qgrid'] = False
else:
_environment['jupyter'] = True
except:
_environment['jupyter'] = False
_environment['qgrid'] = False
###Output
_____no_output_____
###Markdown
OptionsThere are four options that can be set with the **`%sql`** command. These options are shown below with the default value shown in parenthesis.- **`MAXROWS n (10)`** - The maximum number of rows that will be displayed before summary information is shown. If the answer set is less than this number of rows, it will be completely shown on the screen. If the answer set is larger than this amount, only the first 5 rows and last 5 rows of the answer set will be displayed. If you want to display a very large answer set, you may want to consider using the grid option `-grid` to display the results in a scrollable table. If you really want to show all results then setting MAXROWS to -1 will return all output.- **`MAXGRID n (5)`** - The maximum size of a grid display. When displaying a result set in a grid `-grid`, the default size of the display window is 5 rows. You can set this to a larger size so that more rows are shown on the screen. Note that the minimum size always remains at 5 which means that if the system is unable to display your maximum row size it will reduce the table display until it fits.- **`DISPLAY PANDAS | GRID (PANDAS)`** - Display the results as a PANDAS dataframe (default) or as a scrollable GRID- **`RUNTIME n (1)`** - When using the timer option on a SQL statement, the statement will execute for **`n`** number of seconds. The result that is returned is the number of times the SQL statement executed rather than the execution time of the statement. The default value for runtime is one second, so if the SQL is very complex you will need to increase the run time.- **`LIST`** - Display the current settings. To set an option use the following syntax:```%sql option option_name value option_name value ....```The following example sets all options:```%sql option maxrows 100 runtime 2 display grid maxgrid 10```The values will **not** be saved between Jupyter notebook sessions. If you need to retrieve the current option values, use the LIST command as the only argument:```%sql option list```
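As a hedged illustration of the syntax above, the same options can also be set by calling setOptions() (defined in the next cell) directly with the option string that would normally follow `%sql option`.
```python
# Illustrative only - assumes the cell defining setOptions() below has been run.
setOptions("MAXROWS 100 RUNTIME 2 DISPLAY GRID MAXGRID 10")
setOptions("LIST")   # prints the current option values
```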
###Code
def setOptions(inSQL):
global _settings, _display
cParms = inSQL.split()
cnt = 0
while cnt < len(cParms):
if cParms[cnt].upper() == 'MAXROWS':
if cnt+1 < len(cParms):
try:
_settings["maxrows"] = int(cParms[cnt+1])
except Exception as err:
errormsg("Invalid MAXROWS value provided.")
pass
cnt = cnt + 1
else:
errormsg("No maximum rows specified for the MAXROWS option.")
return
elif cParms[cnt].upper() == 'MAXGRID':
if cnt+1 < len(cParms):
try:
maxgrid = int(cParms[cnt+1])
if (maxgrid <= 5): # Minimum window size is 5
maxgrid = 5
_display["maxVisibleRows"] = int(cParms[cnt+1])
try:
import qgrid
qgrid.set_defaults(grid_options=_display)
except:
_environment['qgrid'] = False
except Exception as err:
errormsg("Invalid MAXGRID value provided.")
pass
cnt = cnt + 1
else:
errormsg("No maximum rows specified for the MAXROWS option.")
return
elif cParms[cnt].upper() == 'RUNTIME':
if cnt+1 < len(cParms):
try:
_settings["runtime"] = int(cParms[cnt+1])
except Exception as err:
errormsg("Invalid RUNTIME value provided.")
pass
cnt = cnt + 1
else:
errormsg("No value provided for the RUNTIME option.")
return
elif cParms[cnt].upper() == 'DISPLAY':
if cnt+1 < len(cParms):
if (cParms[cnt+1].upper() == 'GRID'):
_settings["display"] = 'GRID'
elif (cParms[cnt+1].upper() == 'PANDAS'):
_settings["display"] = 'PANDAS'
else:
errormsg("Invalid DISPLAY value provided.")
cnt = cnt + 1
else:
errormsg("No value provided for the DISPLAY option.")
return
elif (cParms[cnt].upper() == 'LIST'):
print("(MAXROWS) Maximum number of rows displayed: " + str(_settings["maxrows"]))
print("(MAXGRID) Maximum grid display size: " + str(_settings["maxgrid"]))
print("(RUNTIME) How many seconds to a run a statement for performance testing: " + str(_settings["runtime"]))
print("(DISPLAY) Use PANDAS or GRID display format for output: " + _settings["display"])
return
else:
cnt = cnt + 1
save_settings()
###Output
_____no_output_____
###Markdown
SQL HelpThe calling format of this routine is:```sqlhelp()```This code displays help related to the %sql magic command. This help is displayed when you issue a %sql or %%sql command by itself, or use the %sql -h flag.
###Code
def sqlhelp():
global _environment
if (_environment["jupyter"] == True):
sd = '<td style="text-align:left;">'
ed1 = '</td>'
ed2 = '</td>'
sh = '<th style="text-align:left;">'
eh1 = '</th>'
eh2 = '</th>'
sr = '<tr>'
er = '</tr>'
helpSQL = """
<h3>SQL Options</h3>
<p>The following options are available as part of a SQL statement. The options are always preceded with a
minus sign (i.e. -q).
<table>
{sr}
{sh}Option{eh1}{sh}Description{eh2}
{er}
{sr}
{sd}a, all{ed1}{sd}Return all rows in answer set and do not limit display{ed2}
{er}
{sr}
{sd}d{ed1}{sd}Change SQL delimiter to "@" from ";"{ed2}
{er}
{sr}
{sd}e, echo{ed1}{sd}Echo the SQL command that was generated after macro and variable substitution.{ed2}
{er}
{sr}
{sd}h, help{ed1}{sd}Display %sql help information.{ed2}
{er}
{sr}
{sd}j{ed1}{sd}Create a pretty JSON representation. Only the first column is formatted{ed2}
{er}
{sr}
{sd}json{ed1}{sd}Retrieve the result set as a JSON record{ed2}
{er}
{sr}
{sd}pb, bar{ed1}{sd}Plot the results as a bar chart{ed2}
{er}
{sr}
{sd}pl, line{ed1}{sd}Plot the results as a line chart{ed2}
{er}
{sr}
{sd}pp, pie{ed1}{sd}Plot Pie: Plot the results as a pie chart{ed2}
{er}
{sr}
{sd}q, quiet{ed1}{sd}Quiet results - no answer set or messages returned from the function{ed2}
{er}
{sr}
{sd}r, array{ed1}{sd}Return the result set as an array of values{ed2}
{er}
{sr}
{sd}sampledata{ed1}{sd}Create and load the EMPLOYEE and DEPARTMENT tables{ed2}
{er}
{sr}
{sd}t,time{ed1}{sd}Time the following SQL statement and return the number of times it executes in 1 second{ed2}
{er}
{sr}
{sd}grid{ed1}{sd}Display the results in a scrollable grid{ed2}
{er}
</table>
"""
else:
helpSQL = """
SQL Options
The following options are available as part of a SQL statement. Options are always
preceded with a minus sign (i.e. -q).
Option Description
a, all Return all rows in answer set and do not limit display
d Change SQL delimiter to "@" from ";"
e, echo Echo the SQL command that was generated after substitution
h, help Display %sql help information
j Create a pretty JSON representation. Only the first column is formatted
json Retrieve the result set as a JSON record
pb, bar Plot the results as a bar chart
pl, line Plot the results as a line chart
pp, pie Plot Pie: Plot the results as a pie chart
q, quiet Quiet results - no answer set or messages returned from the function
r, array Return the result set as an array of values
sampledata Create and load the EMPLOYEE and DEPARTMENT tables
t,time Time the SQL statement and return the execution count per second
grid Display the results in a scrollable grid
"""
helpSQL = helpSQL.format(**locals())
if (_environment["jupyter"] == True):
pdisplay(pHTML(helpSQL))
else:
print(helpSQL)
###Output
_____no_output_____
###Markdown
Connection HelpThe calling format of this routine is:```connected_help()```This code displays help related to the CONNECT command. This code is displayed when you issue a %sql CONNECT command with no arguments or you are running a SQL statement and there isn't any connection to a database yet.
###Code
def connected_help():
sd = '<td style="text-align:left;">'
ed = '</td>'
sh = '<th style="text-align:left;">'
eh = '</th>'
sr = '<tr>'
er = '</tr>'
if (_environment['jupyter'] == True):
helpConnect = """
<h3>Connecting to Db2</h3>
<p>The CONNECT command has the following format:
<p>
<pre>
%sql CONNECT TO <database> USER <userid> USING <password|?> HOST <ip address> PORT <port number>
%sql CONNECT CREDENTIALS <varname>
%sql CONNECT CLOSE
%sql CONNECT RESET
%sql CONNECT PROMPT - use this to be prompted for values
</pre>
<p>
If you use a "?" for the password field, the system will prompt you for a password. This avoids typing the
password as clear text on the screen. If a connection is not successful, the system will print the error
message associated with the connect request.
<p>
The <b>CREDENTIALS</b> option allows you to use credentials that are supplied by Db2 on Cloud instances.
The credentials can be supplied as a variable and if successful, the variable will be saved to disk
for future use. If you create another notebook and use the identical syntax, if the variable
is not defined, the contents on disk will be used as the credentials. You should assign the
credentials to a variable that represents the database (or schema) that you are communicating with.
Using familiar names makes it easier to remember the credentials when connecting.
<p>
<b>CONNECT CLOSE</b> will close the current connection, but will not reset the database parameters. This means that
if you issue the CONNECT command again, the system should be able to reconnect you to the database.
<p>
<b>CONNECT RESET</b> will close the current connection and remove any information on the connection. You will need
to issue a new CONNECT statement with all of the connection information.
<p>
If the connection is successful, the parameters are saved on your system and will be used the next time you
run an SQL statement, or when you issue the %sql CONNECT command with no parameters.
<p>If you issue CONNECT RESET, all of the current values will be deleted and you will need to
issue a new CONNECT statement.
<p>A CONNECT command without any parameters will attempt to re-connect to the previous database you
were using. If the connection could not be established, the program will prompt you for
the values. To cancel the connection attempt, enter a blank value for any of the values. The connection
panel will request the following values in order to connect to Db2:
<table>
{sr}
{sh}Setting{eh}
{sh}Description{eh}
{er}
{sr}
{sd}Database{ed}{sd}Database name you want to connect to.{ed}
{er}
{sr}
{sd}Hostname{ed}
{sd}Use localhost if Db2 is running on your own machine, but this can be an IP address or host name.
{er}
{sr}
{sd}PORT{ed}
{sd}The port to use for connecting to Db2. This is usually 50000.{ed}
{er}
{sr}
{sd}Userid{ed}
{sd}The userid to use when connecting (usually DB2INST1){ed}
{er}
{sr}
{sd}Password{ed}
{sd}No password is provided so you have to enter a value{ed}
{er}
</table>
"""
else:
helpConnect = """\
Connecting to Db2
The CONNECT command has the following format:
%sql CONNECT TO database USER userid USING password | ?
HOST ip address PORT port number
%sql CONNECT CREDENTIALS varname
%sql CONNECT CLOSE
%sql CONNECT RESET
If you use a "?" for the password field, the system will prompt you for a password.
This avoids typing the password as clear text on the screen. If a connection is
not successful, the system will print the error message associated with the connect
request.
The CREDENTIALS option allows you to use credentials that are supplied by Db2 on
Cloud instances. The credentials can be supplied as a variable and if successful,
the variable will be saved to disk for future use. If you create another notebook
and use the identical syntax, if the variable is not defined, the contents on disk
will be used as the credentials. You should assign the credentials to a variable
that represents the database (or schema) that you are communicating with. Using
familiar names makes it easier to remember the credentials when connecting.
CONNECT CLOSE will close the current connection, but will not reset the database
parameters. This means that if you issue the CONNECT command again, the system
should be able to reconnect you to the database.
CONNECT RESET will close the current connection and remove any information on the
connection. You will need to issue a new CONNECT statement with all of the connection
information.
If the connection is successful, the parameters are saved on your system and will be
used the next time you run an SQL statement, or when you issue the %sql CONNECT
command with no parameters. If you issue CONNECT RESET, all of the current values
will be deleted and you will need to issue a new CONNECT statement.
A CONNECT command without any parameters will attempt to re-connect to the previous
database you were using. If the connection could not be established, the program will
prompt you for the values. To cancel the connection attempt, enter a blank value for
any of the values. The connection panel will request the following values in order
to connect to Db2:
Setting Description
Database Database name you want to connect to
Hostname Use localhost if Db2 is running on your own machine, but this can
be an IP address or host name.
PORT The port to use for connecting to Db2. This is usually 50000.
Userid The userid to use when connecting (usually DB2INST1)
Password No password is provided so you have to enter a value
"""
helpConnect = helpConnect.format(**locals())
if (_environment['jupyter'] == True):
pdisplay(pHTML(helpConnect))
else:
print(helpConnect)
###Output
_____no_output_____
###Markdown
Prompt for Connection InformationIf you are running an SQL statement and have not yet connected to a database, the %sql command will prompt you for connection information. In order to connect to a database, you must supply:- Database name - Host name (IP address or name)- Port number- Userid- Password- Maximum number of rowsThe routine is called without any parameters:```connected_prompt()```
###Code
# Prompt for Connection information
def connected_prompt():
global _settings
_database = ''
_hostname = ''
_port = ''
_uid = ''
_pwd = ''
print("Enter the database connection details (Any empty value will cancel the connection)")
_database = input("Enter the database name: ");
if (_database.strip() == ""): return False
_hostname = input("Enter the HOST IP address or symbolic name: ");
if (_hostname.strip() == ""): return False
_port = input("Enter the PORT number: ");
if (_port.strip() == ""): return False
_uid = input("Enter Userid on the DB2 system: ").upper();
if (_uid.strip() == ""): return False
_pwd = getpass.getpass("Password [password]: ");
if (_pwd.strip() == ""): return False
_settings["database"] = _database.strip()
_settings["hostname"] = _hostname.strip()
_settings["port"] = _port.strip()
_settings["uid"] = _uid.strip()
_settings["pwd"] = _pwd.strip()
_settings["maxrows"] = 10
_settings["maxgrid"] = 5
_settings["runtime"] = 1
return True
# Split port and IP addresses
def split_string(in_port,splitter=":"):
# Split input into an IP address and Port number
global _settings
checkports = in_port.split(splitter)
ip = checkports[0]
if (len(checkports) > 1):
port = checkports[1]
else:
port = None
return ip, port
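# Illustrative examples of split_string() (comments only, not executed):
#   split_string("192.168.1.100:50001") -> ("192.168.1.100", "50001")
#   split_string("localhost")           -> ("localhost", None)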
###Output
_____no_output_____
###Markdown
Connect Syntax ParserThe parseConnect routine is used to parse the CONNECT command that the user issued within the %sql command. The format of the command is:```parseConnect(inSQL)```The inSQL string contains the CONNECT keyword with some additional parameters. The format of the CONNECT command is one of:```CONNECT RESETCONNECT CLOSECONNECT CREDENTIALS CONNECT TO database USER userid USING password HOST hostname PORT portnumber```If you have credentials available from Db2 on Cloud, place the contents of the credentials into a variable and then use the `CONNECT CREDENTIALS ` syntax to connect to the database.In addition, supplying a question mark (?) for password will result in the program prompting you for the password rather than having it as clear text in your scripts.When all of the information is checked in the command, the db2_doConnect function is called to actually do the connection to the database.
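The following is a hedged usage sketch of the CONNECT forms that parseConnect() handles; the database name, userid and credentials variable are placeholders only.
```python
# Placeholders only - replace SAMPLE, DB2INST1 and my_db_credentials with your own values.
%sql CONNECT TO SAMPLE USER DB2INST1 USING ? HOST localhost PORT 50000
%sql CONNECT CREDENTIALS my_db_credentials
%sql CONNECT CLOSE
%sql CONNECT RESET
```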
###Code
# Parse the CONNECT statement and execute if possible
def parseConnect(inSQL,local_ns):
global _settings, _connected
_connected = False
cParms = inSQL.split()
cnt = 0
while cnt < len(cParms):
if cParms[cnt].upper() == 'TO':
if cnt+1 < len(cParms):
_settings["database"] = cParms[cnt+1].upper()
cnt = cnt + 1
else:
errormsg("No database specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'CREDENTIALS':
if cnt+1 < len(cParms):
credentials = cParms[cnt+1]
tempid = getContents(credentials,local_ns)
if (tempid == None):
fname = credentials + ".pickle"
try:
with open(fname,'rb') as f:
_id = pickle.load(f)
except:
errormsg("Unable to find credential variable or file.")
return
else:
_id = json.loads(tempid)
try:
_settings["database"] = _id["db"]
_settings["hostname"] = _id["hostname"]
_settings["port"] = _id["port"]
_settings["uid"] = _id["username"]
_settings["pwd"] = _id["password"]
try:
fname = credentials + ".pickle"
with open(fname,'wb') as f:
pickle.dump(_id,f)
except:
errormsg("Failed trying to write Db2 Credentials.")
return
except:
errormsg("Credentials file is missing information. db/hostname/port/username/password required.")
return
else:
errormsg("No Credentials name supplied")
return
cnt = cnt + 1
elif cParms[cnt].upper() == 'USER':
if cnt+1 < len(cParms):
_settings["uid"] = cParms[cnt+1].upper()
cnt = cnt + 1
else:
errormsg("No userid specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'USING':
if cnt+1 < len(cParms):
_settings["pwd"] = cParms[cnt+1]
if (_settings["pwd"] == '?'):
_settings["pwd"] = getpass.getpass("Password [password]: ") or "password"
cnt = cnt + 1
else:
errormsg("No password specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'HOST':
if cnt+1 < len(cParms):
hostport = cParms[cnt+1].upper()
ip, port = split_string(hostport)
if (port == None): _settings["port"] = "50000"
_settings["hostname"] = ip
cnt = cnt + 1
else:
errormsg("No hostname specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'PORT':
if cnt+1 < len(cParms):
_settings["port"] = cParms[cnt+1].upper()
cnt = cnt + 1
else:
errormsg("No port specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'PROMPT':
if (connected_prompt() == False):
print("Connection canceled.")
return
else:
cnt = cnt + 1
elif cParms[cnt].upper() in ('CLOSE','RESET') :
try:
result = ibm_db.close(_hdbc)
_hdbi.close()
except:
pass
success("Connection closed.")
if cParms[cnt].upper() == 'RESET':
_settings["database"] = ''
return
else:
cnt = cnt + 1
_ = db2_doConnect()
###Output
_____no_output_____
###Markdown
Connect to Db2The db2_doConnect routine is called when a connection needs to be established to a Db2 database. The command does not require any parameters since it relies on the settings variable which contains all of the information it needs to connect to a Db2 database.```db2_doConnect()```There are 4 additional variables that are used throughout the routines to stay connected with the Db2 database. These variables are:- hdbc - The connection handle to the database- hstmt - A statement handle used for executing SQL statements- connected - A flag that tells the program whether or not we are currently connected to a database- runtime - Used to tell %sql the length of time (default 1 second) to run a statement when timing itThe only database driver that is used in this program is the IBM DB2 ODBC DRIVER. This driver needs to be loaded on the system that is connecting to Db2. The Jupyter notebook that is built by this system installs the driver for you so you shouldn't have to do anything other than build the container.If the connection is successful, the connected flag is set to True. Any subsequent %sql call will check to see if you are connected and initiate another prompted connection if you do not have a connection to a database.
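The connection itself is made through an ODBC connection string built from the values in the _settings structure. A minimal sketch of that string, with placeholder values, looks like this:
```python
import ibm_db

# Placeholder values - db2_doConnect() builds the same string from _settings.
dsn = (
    "DRIVER={IBM DB2 ODBC DRIVER};"
    "DATABASE=SAMPLE;"
    "HOSTNAME=localhost;"
    "PORT=50000;"
    "PROTOCOL=TCPIP;"
    "UID=DB2INST1;"
    "PWD=password;"
)
# hdbc = ibm_db.connect(dsn, "", "")   # uncomment when a Db2 instance is reachable
```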
###Code
def db2_doConnect():
global _hdbc, _hdbi, _connected, _runtime
global _settings
if _connected == False:
if len(_settings["database"]) == 0:
return False
dsn = (
"DRIVER={{IBM DB2 ODBC DRIVER}};"
"DATABASE={0};"
"HOSTNAME={1};"
"PORT={2};"
"PROTOCOL=TCPIP;"
"UID={3};"
"PWD={4};").format(_settings["database"], _settings["hostname"], _settings["port"], _settings["uid"], _settings["pwd"])
# Get a database handle (hdbc) and a statement handle (hstmt) for subsequent access to DB2
try:
_hdbc = ibm_db.connect(dsn, "", "")
except Exception as err:
errormsg(str(err))
_connected = False
_settings["database"] = ''
return False
try:
_hdbi = ibm_db_dbi.Connection(_hdbc)
except Exception as err:
errormsg(str(err))
_connected = False
_settings["database"] = ''
return False
_connected = True
# Save the values for future use
save_settings()
success("Connection successful.")
return True
###Output
_____no_output_____
###Markdown
Load/Save SettingsThere are two routines that load and save settings between Jupyter notebooks. These routines are called without any parameters.```load_settings() save_settings()```There is a global structure called settings which contains the following fields:```_settings = { "maxrows" : 10, "maxgrid" : 5, "runtime" : 1, "display" : "PANDAS", "database" : "", "hostname" : "localhost", "port" : "50000", "protocol" : "TCPIP", "uid" : "DB2INST1", "pwd" : "password"}```The information in the settings structure is used for re-connecting to a database when you start up a Jupyter notebook. When the session is established for the first time, the load_settings() function is called to get the contents of the pickle file (db2connect.pickle, a Jupyter session file) that will be used for the first connection to the database. Whenever a new connection is made, the file is updated with the save_settings() function.
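A minimal sketch of the pickle round-trip these two routines perform follows; it uses a throwaway file name so the real db2connect.pickle is not touched.
```python
import pickle

demo_settings = {"maxrows": 10, "display": "PANDAS", "database": "SAMPLE"}
with open("demo_settings.pickle", "wb") as f:   # save_settings() writes db2connect.pickle
    pickle.dump(demo_settings, f)
with open("demo_settings.pickle", "rb") as f:   # load_settings() reads it back at startup
    print(pickle.load(f))
```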
###Code
def load_settings():
# This routine will load the settings from the previous session if they exist
global _settings
fname = "db2connect.pickle"
try:
with open(fname,'rb') as f:
_settings = pickle.load(f)
# Reset runtime to 1 since it would be unexpected to keep the same value between connections
_settings["runtime"] = 1
_settings["maxgrid"] = 5
except:
pass
return
def save_settings():
# This routine will save the current settings if they exist
global _settings
fname = "db2connect.pickle"
try:
with open(fname,'wb') as f:
pickle.dump(_settings,f)
except:
errormsg("Failed trying to write Db2 Configuration Information.")
return
###Output
_____no_output_____
###Markdown
Error and Message FunctionsThere are three types of messages that are thrown by the %db2 magic command. The first routine will print out a success message with no special formatting:```success(message)```The second message is used for displaying an error message that is not associated with a SQL error. This type of error message is surrounded with a red box to highlight the problem. Note that the success message has code that has been commented out that could also show a successful return code with a green box. ```errormsg(message)```The final error message is based on an error occurring in the SQL code that was executed. This code will parse the message returned from the ibm_db interface and return only the error message portion (and not all of the wrapper code from the driver).```db2_error(quiet)```The quiet flag is passed to the db2_error routine so that messages can be suppressed if the user wishes to ignore them with the -q flag. A good example of this is dropping a table that does not exist. We know that an error will be thrown so we can ignore it. The information that the db2_error routine gets is from the stmt_errormsg() function from within the ibm_db driver. The db2_error function should only be called after a SQL failure otherwise there will be no diagnostic information returned from stmt_errormsg().
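A minimal sketch of how the SQLCODE and SQLSTATE values are scraped out of the driver's message text; the message below is a made-up example, not real driver output.
```python
# Made-up driver message for illustration only.
errmsg = '[IBM][CLI Driver][DB2/LINUXX8664] SQL0204N  "X.Y" is an undefined name.  SQLSTATE=42704 SQLCODE=-204'
errmsg = errmsg[errmsg.rfind("]") + 1:].strip()   # drop the driver prefix, as db2_error() does

start = errmsg.find("SQLSTATE=")
end = errmsg.find(" ", start)
sqlstate = errmsg[start + 9:end if end != -1 else len(errmsg)]

start = errmsg.find("SQLCODE=")
end = errmsg.find(" ", start)
sqlcode = int(errmsg[start + 8:len(errmsg) if end == -1 else end])

print(sqlcode, sqlstate)   # -204 42704
```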
###Code
def db2_error(quiet):
global sqlerror, sqlcode, sqlstate, _environment
try:
errmsg = ibm_db.stmt_errormsg().replace('\r',' ')
errmsg = errmsg[errmsg.rfind("]")+1:].strip()
sqlerror = errmsg
msg_start = errmsg.find("SQLSTATE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlstate = errmsg[msg_start+9:msg_end]
else:
sqlstate = "0"
msg_start = errmsg.find("SQLCODE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlcode = errmsg[msg_start+8:msg_end]
try:
sqlcode = int(sqlcode)
except:
pass
else:
sqlcode = 0
except:
errmsg = "Unknown error."
sqlcode = -99999
sqlstate = "-99999"
sqlerror = errmsg
return
msg_start = errmsg.find("SQLSTATE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlstate = errmsg[msg_start+9:msg_end]
else:
sqlstate = "0"
msg_start = errmsg.find("SQLCODE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlcode = errmsg[msg_start+8:msg_end]
try:
sqlcode = int(sqlcode)
except:
pass
else:
sqlcode = 0
if quiet == True: return
if (errmsg == ""): return
html = '<p><p style="border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html+errmsg+"</p>"))
else:
print(errmsg)
# Print out an error message
def errormsg(message):
global _environment
if (message != ""):
html = '<p><p style="border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html + message + "</p>"))
else:
print(message)
def success(message):
if (message != ""):
print(message)
return
def debug(message,error=False):
global _environment
if (_environment["jupyter"] == True):
spacer = "<br>" + " "
else:
spacer = "\n "
if (message != ""):
lines = message.split('\n')
msg = ""
indent = 0
for line in lines:
delta = line.count("(") - line.count(")")
if (msg == ""):
msg = line
indent = indent + delta
else:
if (delta < 0): indent = indent + delta
msg = msg + spacer * (indent*2) + line
if (delta > 0): indent = indent + delta
if (indent < 0): indent = 0
if (error == True):
html = '<p><pre style="font-family: monospace; border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
else:
html = '<p><pre style="font-family: monospace; border:2px; border-style:solid; border-color:#008000; background-color:#e6ffe6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html + msg + "</pre></p>"))
else:
print(msg)
return
###Output
_____no_output_____
###Markdown
Macro ProcessorA macro is used to generate SQL to be executed by overriding or creating a new keyword. For instance, the base `%sql` command does not understand the `LIST TABLES` command which is usually used in conjunction with the `CLP` processor. Rather than specifically code this in the base `db2.ipynb` file, we can create a macro that can execute this code for us.There are four routines that deal with macros. - checkMacro is used to find the macro calls in a string. If the first keyword matches a cataloged macro, the call is sent to runMacro for execution.- runMacro will evaluate the macro body and return the generated string to the caller.- subvars is used to substitute the variables used as part of a macro call.- setMacro is used to catalog a macro Set MacroThis code will catalog a macro call.
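A hedged end-to-end sketch follows; the COUNTROWS macro is invented purely for illustration. It catalogs a one-line macro with setMacro() and then lets checkMacro() expand a call to it.
```python
# Assumes the setMacro/checkMacro/runMacro/subvars cells below have been run.
setMacro("SELECT COUNT(*) FROM {^1}", "define COUNTROWS")
print(checkMacro("COUNTROWS employee"))   # -> SELECT COUNT(*) FROM EMPLOYEE
```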
###Code
def setMacro(inSQL,parms):
global _macros
names = parms.split()
if (len(names) < 2):
errormsg("No command name supplied.")
return None
macroName = names[1].upper()
_macros[macroName] = inSQL
return
###Output
_____no_output_____
###Markdown
Check MacroThis code will check to see if there is a macro command in the SQL. It will take the SQL that is supplied and strip out three values: the first and second keywords, and the remainder of the parameters.For instance, consider the following statement:```CREATE DATABASE GEORGE options....```The name of the macro that we want to run is called `CREATE`. We know that there is a SQL command called `CREATE` but this code will call the macro first to see if needs to run any special code. For instance, `CREATE DATABASE` is not part of the `db2.ipynb` syntax, but we can add it in by using a macro.The check macro logic will strip out the subcommand (`DATABASE`) and place the remainder of the string after `DATABASE` in options.
###Code
def checkMacro(in_sql):
global _macros
if (len(in_sql) == 0): return(in_sql) # Nothing to do
tokens = parseArgs(in_sql,None) # Take the string and reduce into tokens
macro_name = tokens[0].upper() # Uppercase the name of the token
if (macro_name not in _macros):
return(in_sql) # No macro by this name so just return the string
result = runMacro(_macros[macro_name],in_sql,tokens) # Execute the macro using the tokens we found
return(result) # Runmacro will either return the original SQL or the new one
###Output
_____no_output_____
###Markdown
Parse Call ArgumentsThis code will parse a SQL call name(parm1,...) and return the name and the parameters in the call.
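A hedged example of the name/parameter split this routine produces; the function name and parameters here are invented for illustration.
```python
# Assumes the parseCallArgs cell below has been run.
print(parseCallArgs("employees(dept='A00',limit=5)"))
# -> ('employees', [['dept', 'A00'], ['limit', '5']])
```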
###Code
def parseCallArgs(macro):
quoteChar = ""
inQuote = False
inParm = False
name = ""
parms = []
parm = ''
sql = macro
for ch in macro:
if (inParm == False):
if (ch in ["("," ","\n"]):
inParm = True
else:
name = name + ch
else:
if (inQuote == True):
if (ch == quoteChar):
inQuote = False
#if (quoteChar == "]"):
# parm = parm + "'"
else:
parm = parm + ch
elif (ch in ("\"","\'","[")): # Do we have a quote
if (ch == "["):
# parm = parm + "'"
quoteChar = "]"
else:
quoteChar = ch
inQuote = True
elif (ch == ")"):
if (parm != ""):
parm_name, parm_value = splitassign(parm)
parms.append([parm_name,parm_value])
parm = ""
break
elif (ch == ","):
if (parm != ""):
parm_name, parm_value = splitassign(parm)
parms.append([parm_name,parm_value])
else:
parms.append(["null","null"])
parm = ""
else:
parm = parm + ch
if (inParm == True):
if (parm != ""):
parm_name, parm_value = splitassign(parm)
parms.append([parm_name,parm_value])
return(name,parms)
###Output
_____no_output_____
###Markdown
Split AssignmentThis routine will return the name of a variable and its value when the format is x=y. If y is enclosed in quotes, the quotes are removed.
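A few hedged examples of the behaviour described above:
```python
# Assumes the splitassign cell below has been run.
print(splitassign("name='George'"))   # ('name', 'George')      - quotes removed
print(splitassign("count=10"))        # ('count', '10')
print(splitassign("standalone"))      # ('null', 'standalone')  - no '=' present
```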
###Code
def splitassign(arg):
var_name = "null"
var_value = "null"
arg = arg.strip()
eq = arg.find("=")
if (eq != -1):
var_name = arg[:eq].strip()
temp_value = arg[eq+1:].strip()
if (temp_value != ""):
ch = temp_value[0]
if (ch in ["'",'"']):
if (temp_value[-1:] == ch):
var_value = temp_value[1:-1]
else:
var_value = temp_value
else:
var_value = temp_value
else:
var_value = arg
return var_name, var_value
###Output
_____no_output_____
###Markdown
Parse Args The commands that are used in the macros need to be parsed into their separate tokens. The tokens are separated by blanks, and strings that are enclosed in quotes are kept together.
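A hedged example of the tokenizing described above, showing that a quoted string stays together as a single token:
```python
# Assumes the parseArgs cell below has been run.
print(parseArgs("LIST TABLES FOR SCHEMA 'MY SCHEMA'", None))
# -> ['LIST', 'TABLES', 'FOR', 'SCHEMA', "'MY SCHEMA'"]
```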
###Code
def parseArgs(argin,_vars):
quoteChar = ""
inQuote = False
inArg = True
args = []
arg = ''
for ch in argin.lstrip():
if (inQuote == True):
if (ch == quoteChar):
inQuote = False
arg = arg + ch #z
else:
arg = arg + ch
elif (ch == "\"" or ch == "\'"): # Do we have a quote
quoteChar = ch
arg = arg + ch #z
inQuote = True
elif (ch == " "):
if (arg != ""):
arg = subvars(arg,_vars)
args.append(arg)
else:
args.append("null")
arg = ""
else:
arg = arg + ch
if (arg != ""):
arg = subvars(arg,_vars)
args.append(arg)
return(args)
###Output
_____no_output_____
###Markdown
Run MacroThis code will execute the body of the macro and return the results for that macro call.
###Code
def runMacro(script,in_sql,tokens):
result = ""
runIT = True
code = script.split("\n")
level = 0
runlevel = [True,False,False,False,False,False,False,False,False,False]
ifcount = 0
_vars = {}
for i in range(0,len(tokens)):
vstr = str(i)
_vars[vstr] = tokens[i]
if (len(tokens) == 0):
_vars["argc"] = "0"
else:
_vars["argc"] = str(len(tokens)-1)
for line in code:
line = line.strip()
if (line == "" or line == "\n"): continue
if (line[0] == "#"): continue # A comment line starts with a # in the first position of the line
args = parseArgs(line,_vars) # Get all of the arguments
if (args[0] == "if"):
ifcount = ifcount + 1
if (runlevel[level] == False): # You can't execute this statement
continue
level = level + 1
if (len(args) < 4):
print("Macro: Incorrect number of arguments for the if clause.")
return in_sql
arg1 = args[1]
arg2 = args[3]
if (len(arg2) > 2):
ch1 = arg2[0]
ch2 = arg2[-1:]
if (ch1 in ['"',"'"] and ch1 == ch2):
arg2 = arg2[1:-1].strip()
op = args[2]
if (op in ["=","=="]):
if (arg1 == arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<=","=<"]):
if (arg1 <= arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in [">=","=>"]):
if (arg1 >= arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<>","!="]):
if (arg1 != arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<"]):
if (arg1 < arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in [">"]):
if (arg1 > arg2):
runlevel[level] = True
else:
runlevel[level] = False
else:
print("Macro: Unknown comparison operator in the if statement:" + op)
continue
elif (args[0] in ["exit","echo"] and runlevel[level] == True):
msg = ""
for msgline in args[1:]:
if (msg == ""):
msg = subvars(msgline,_vars)
else:
msg = msg + " " + subvars(msgline,_vars)
if (msg != ""):
if (args[0] == "echo"):
debug(msg,error=False)
else:
debug(msg,error=True)
if (args[0] == "exit"): return ''
elif (args[0] == "pass" and runlevel[level] == True):
pass
elif (args[0] == "var" and runlevel[level] == True):
value = ""
for val in args[2:]:
if (value == ""):
value = subvars(val,_vars)
else:
value = value + " " + subvars(val,_vars)
value.strip()
_vars[args[1]] = value
elif (args[0] == 'else'):
if (ifcount == level):
runlevel[level] = not runlevel[level]
elif (args[0] == 'return' and runlevel[level] == True):
return(result)
elif (args[0] == "endif"):
ifcount = ifcount - 1
if (ifcount < level):
level = level - 1
if (level < 0):
print("Macro: Unmatched if/endif pairs.")
return ''
else:
if (runlevel[level] == True):
if (result == ""):
result = subvars(line,_vars)
else:
result = result + "\n" + subvars(line,_vars)
return(result)
###Output
_____no_output_____
###Markdown
Substitute VarsThis routine is used by the runMacro program to track variables that are used within Macros. These are kept separate from the rest of the code.
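A hedged example of the substitution markers handled below: {n} inserts a token, {^n} uppercases it, and {*n} inserts that token plus everything after it. The token dictionary mimics what runMacro builds from the macro call.
```python
# Assumes the subvars cell below has been run.
tokens = {"0": "list", "1": "tables", "2": "for", "3": "all", "argc": "3"}
print(subvars("Command was: {^1} {*2}", tokens))   # -> Command was: TABLES for all
```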
###Code
def subvars(script,_vars):
if (_vars == None): return script
remainder = script
result = ""
done = False
while done == False:
bv = remainder.find("{")
if (bv == -1):
done = True
continue
ev = remainder.find("}")
if (ev == -1):
done = True
continue
result = result + remainder[:bv]
vvar = remainder[bv+1:ev]
remainder = remainder[ev+1:]
upper = False
allvars = False
if (vvar[0] == "^"):
upper = True
vvar = vvar[1:]
elif (vvar[0] == "*"):
vvar = vvar[1:]
allvars = True
else:
pass
if (vvar in _vars):
if (upper == True):
items = _vars[vvar].upper()
elif (allvars == True):
try:
iVar = int(vvar)
except:
return(script)
items = ""
sVar = str(iVar)
while sVar in _vars:
if (items == ""):
items = _vars[sVar]
else:
items = items + " " + _vars[sVar]
iVar = iVar + 1
sVar = str(iVar)
else:
items = _vars[vvar]
else:
if (allvars == True):
items = ""
else:
items = "null"
result = result + items
if (remainder != ""):
result = result + remainder
return(result)
###Output
_____no_output_____
###Markdown
SQL TimerThe calling format of this routine is:```count = sqlTimer(hdbc, runtime, inSQL)```This code runs the SQL string multiple times for one second (by default). The accuracy of the clock is not that great when you are running just one statement, so instead this routine will run the code multiple times for a second to give you an execution count. If you need to run the code for more than one second, the runtime value needs to be set to the number of seconds you want the code to run.The return result is always the number of times that the code executed. Note that the program will skip reading the data if it is a SELECT statement so it doesn't include fetch time for the answer set.
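A hedged usage sketch follows; it requires an open connection handle in _hdbc (i.e. a successful CONNECT) and uses a convenient one-row catalog view as the statement to time.
```python
# Assumes the sqlTimer cell below has been run and _hdbc is connected.
count = sqlTimer(_hdbc, 2, "SELECT COUNT(*) FROM SYSIBM.SYSDUMMY1")
if count >= 0:
    print("Executions in 2 seconds:", count)
```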
###Code
def sqlTimer(hdbc, runtime, inSQL):
count = 0
t_end = time.time() + runtime
while time.time() < t_end:
try:
stmt = ibm_db.exec_immediate(hdbc,inSQL)
if (stmt == False):
db2_error(flag(["-q","-quiet"]))
return(-1)
ibm_db.free_result(stmt)
except Exception as err:
db2_error(False)
return(-1)
count = count + 1
return(count)
###Output
_____no_output_____
###Markdown
Split ArgsThis routine takes as an argument a string and then splits the arguments according to the following logic:* If the string starts with a `(` character, it will check the last character in the string and see if it is a `)` and then remove those characters* Every parameter is separated by a comma `,` and commas within quotes are ignored* Each parameter returned will have three values returned - one for the value itself, an indicator which will be either True if it was quoted, or False if not, and True or False if it is numeric.Example:``` "abcdef",abcdef,456,"856"```Three values would be returned:```[abcdef,True,False],[abcdef,False,False],[456,False,True],[856,True,False]```Any quoted string will be False for numeric. The way that the parameters are handled are up to the calling program. However, in the case of Db2, the quoted strings must be in single quotes so any quoted parameter using the double quotes `"` must be wrapped with single quotes. There is always a possibility that a string contains single quotes (i.e. O'Connor) so any substituted text should use `''` so that Db2 can properly interpret the string. This routine does not adjust the strings with quotes, and depends on the variable subtitution routine to do that.
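The example from the description above, run through the routine (assuming the cell below has been executed):
```python
print(splitargs('"abcdef",abcdef,456,"856"'))
# -> [['abcdef', True, False], ['abcdef', False, False], [456, False, True], ['856', True, False]]
```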
###Code
def splitargs(arguments):
import types
# Strip the string and remove the ( and ) characters if they are at the beginning and end of the string
results = []
step1 = arguments.strip()
if (len(step1) == 0): return(results) # Not much to do here - no args found
if (step1[0] == '('):
if (step1[-1:] == ')'):
step2 = step1[1:-1]
step2 = step2.strip()
else:
step2 = step1
else:
step2 = step1
# Now we have a string without brackets. Start scanning for commas
quoteCH = ""
pos = 0
arg = ""
args = []
while pos < len(step2):
ch = step2[pos]
if (quoteCH == ""): # Are we in a quote?
if (ch in ('"',"'")): # Check to see if we are starting a quote
quoteCH = ch
arg = arg + ch
pos += 1
elif (ch == ","): # Are we at the end of a parameter?
arg = arg.strip()
args.append(arg)
arg = ""
inarg = False
pos += 1
else: # Continue collecting the string
arg = arg + ch
pos += 1
else:
if (ch == quoteCH): # Are we at the end of a quote?
arg = arg + ch # Add the quote to the string
pos += 1 # Increment past the quote
quoteCH = "" # Stop quote checking (maybe!)
else:
pos += 1
arg = arg + ch
if (quoteCH != ""): # So we didn't end our string
arg = arg.strip()
args.append(arg)
elif (arg != ""): # Something left over as an argument
arg = arg.strip()
args.append(arg)
else:
pass
results = []
for arg in args:
result = []
if (len(arg) > 0):
if (arg[0] in ('"',"'")):
value = arg[1:-1]
isString = True
isNumber = False
else:
isString = False
isNumber = False
try:
value = eval(arg)
if (type(value) == int):
isNumber = True
elif (isinstance(value,float) == True):
isNumber = True
else:
value = arg
except:
value = arg
else:
value = ""
isString = False
isNumber = False
result = [value,isString,isNumber]
results.append(result)
return results
###Output
_____no_output_____
###Markdown
SQL ParserThe calling format of this routine is:```sql_cmd, encoded_sql = sqlParser(sql_input, local_ns)```This code will look at the SQL string that has been passed to it and parse it into two values:- sql_cmd: First command in the statement, folded to uppercase (so this may not be the actual SQL command)- encoded_sql: the SQL text with any :variable references replaced by the contents of the corresponding Python variables (quoted or converted as required)
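A hedged example of the :variable substitution performed below; the empno variable is defined only for the example.
```python
# Assumes the sqlParser cell below has been run.
empno = "000010"
cmd, encoded = sqlParser("SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO=:empno", locals())
print(cmd)       # SELECT
print(encoded)   # SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO='000010'
```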
###Code
def sqlParser(sqlin,local_ns):
sql_cmd = ""
encoded_sql = sqlin
firstCommand = "(?:^\s*)([a-zA-Z]+)(?:\s+.*|$)"
findFirst = re.match(firstCommand,sqlin)
if (findFirst == None): # We did not find a match so we just return the empty string
return sql_cmd, encoded_sql
cmd = findFirst.group(1)
sql_cmd = cmd.upper()
#
# Scan the input string looking for variables in the format :var. If no : is found just return.
# Var must be alpha+number+_ to be valid
#
if (':' not in sqlin): # A quick check to see if parameters are in here, but not fool-proof!
return sql_cmd, encoded_sql
inVar = False
inQuote = ""
varName = ""
encoded_sql = ""
STRING = 0
NUMBER = 1
LIST = 2
RAW = 3
for ch in sqlin:
if (inVar == True): # We are collecting the name of a variable
if (ch.upper() in "@_ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789[]"):
varName = varName + ch
continue
else:
if (varName == ""):
encoded_sql = encoded_sql + ":"
elif (varName[0] in ('[',']')):
encoded_sql = encoded_sql + ":" + varName
else:
if (ch == '.'): # If the variable name is stopped by a period, assume no quotes are used
flag_quotes = False
else:
flag_quotes = True
varValue, varType = getContents(varName,flag_quotes,local_ns)
if (varValue == None):
encoded_sql = encoded_sql + ":" + varName
else:
if (varType == STRING):
encoded_sql = encoded_sql + varValue
elif (varType == NUMBER):
encoded_sql = encoded_sql + str(varValue)
elif (varType == RAW):
encoded_sql = encoded_sql + varValue
elif (varType == LIST):
start = True
for v in varValue:
if (start == False):
encoded_sql = encoded_sql + ","
if (isinstance(v,int) == True): # Integer value
encoded_sql = encoded_sql + str(v)
elif (isinstance(v,float) == True):
encoded_sql = encoded_sql + str(v)
else:
flag_quotes = True
try:
if (v.find('0x') == 0): # Just guessing this is a hex value at beginning
encoded_sql = encoded_sql + v
else:
encoded_sql = encoded_sql + addquotes(v,flag_quotes) # String
except:
encoded_sql = encoded_sql + addquotes(str(v),flag_quotes)
start = False
encoded_sql = encoded_sql + ch
varName = ""
inVar = False
elif (inQuote != ""):
encoded_sql = encoded_sql + ch
if (ch == inQuote): inQuote = ""
elif (ch in ("'",'"')):
encoded_sql = encoded_sql + ch
inQuote = ch
elif (ch == ":"): # This might be a variable
varName = ""
inVar = True
else:
encoded_sql = encoded_sql + ch
if (inVar == True):
varValue, varType = getContents(varName,True,local_ns) # We assume the end of a line is quoted
if (varValue == None):
encoded_sql = encoded_sql + ":" + varName
else:
if (varType == STRING):
encoded_sql = encoded_sql + varValue
elif (varType == NUMBER):
encoded_sql = encoded_sql + str(varValue)
elif (varType == LIST):
flag_quotes = True
start = True
for v in varValue:
if (start == False):
encoded_sql = encoded_sql + ","
if (isinstance(v,int) == True): # Integer value
encoded_sql = encoded_sql + str(v)
elif (isinstance(v,float) == True):
encoded_sql = encoded_sql + str(v)
else:
try:
if (v.find('0x') == 0): # Just guessing this is a hex value
encoded_sql = encoded_sql + v
else:
encoded_sql = encoded_sql + addquotes(v,flag_quotes) # String
except:
encoded_sql = encoded_sql + addquotes(str(v),flag_quotes)
start = False
return sql_cmd, encoded_sql
###Output
_____no_output_____
###Markdown
Variable Contents FunctionThe calling format of this routine is:```value, var_type = getContents(varName, flag_quotes, local_ns)```This code will take the name of a variable as input and return the contents of that variable along with a type indicator (string, number, list, or raw). If the variable is not found then the program will return None, which is the equivalent to empty or null. Note that this function looks at the variable pool that is passed to it, so it is possible that the wrong version of the variable is returned if it is used in different functions. For this reason, any variables used in SQL statements should use a unique naming convention if possible.The other thing that this function does is replace single quotes with two quotes. The reason for doing this is that Db2 will convert two single quotes into one quote when dealing with strings. This avoids problems when dealing with text that contains multiple quotes within the string. Note that this substitution is done only for single quote characters since the double quote character is used by Db2 for naming columns that are case sensitive or contain special characters.If the flag_quotes value is True, the field will have quotes around it. The local_ns argument contains the variables currently registered in Python.
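Two hedged examples showing the (value, type) pair that comes back and the doubling of single quotes; the variables are defined only for the example.
```python
# Assumes the getContents cell below has been run. Type codes: 0=STRING, 1=NUMBER, 2=LIST, 3=RAW.
hired = 2005
name = "O'Connor"
print(getContents("hired", True, locals()))   # (2005, 1)
print(getContents("name", True, locals()))    # ("'O''Connor'", 0)
```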
###Code
def getContents(varName,flag_quotes,local_ns):
#
# Get the contents of the variable name that is passed to the routine. Only simple
# variables are checked, i.e. arrays and lists are not parsed
#
STRING = 0
NUMBER = 1
LIST = 2
RAW = 3
DICT = 4
try:
value = eval(varName,None,local_ns) # globals()[varName] # eval(varName)
except:
return(None,STRING)
if (isinstance(value,dict) == True): # Check to see if this is JSON dictionary
return(addquotes(value,flag_quotes),STRING)
elif(isinstance(value,list) == True): # List - tricky
return(value,LIST)
elif (isinstance(value,int) == True): # Integer value
return(value,NUMBER)
elif (isinstance(value,float) == True): # Float value
return(value,NUMBER)
else:
try:
# The pattern needs to be in the first position (0 in Python terms)
if (value.find('0x') == 0): # Just guessing this is a hex value
return(value,RAW)
else:
return(addquotes(value,flag_quotes),STRING) # String
except:
return(addquotes(str(value),flag_quotes),RAW)
###Output
_____no_output_____
###Markdown
Add QuotesQuotes are a challenge when dealing with dictionaries and Db2. Db2 wants strings delimited with single quotes, while dictionaries use double quotes. That wouldn't be a problem except embedded single quotes within these dictionaries will cause things to fail. This routine attempts to double up the single quotes within the dictionary.
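A few hedged examples of the quoting behaviour:
```python
# Assumes the addquotes cell below has been run.
print(addquotes("O'Connor", True))        # 'O''Connor'
print(addquotes({"id": 10}, True))        # '{"id": 10}'
print(addquotes("leave me alone", False)) # leave me alone  (returned unchanged)
```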
###Code
def addquotes(inString,flag_quotes):
if (isinstance(inString,dict) == True): # Check to see if this is JSON dictionary
serialized = json.dumps(inString)
else:
serialized = inString
# Replace single quotes with '' (two quotes) and wrap everything in single quotes
if (flag_quotes == False):
return(serialized)
else:
return("'"+serialized.replace("'","''")+"'") # Convert single quotes to two single quotes
###Output
_____no_output_____
###Markdown
Create the SAMPLE Database TablesThe calling format of this routine is:```db2_create_sample(quiet)```There are a lot of examples that depend on the data within the SAMPLE database. If you are running these examples and the connection is not to the SAMPLE database, then this code will create the two (EMPLOYEE, DEPARTMENT) tables that are used by most examples. If the function finds that these tables already exist, then nothing is done. If the tables are missing then they will be created with the same data as in the SAMPLE database.The quiet flag tells the program not to print any messages when the creation of the tables is complete.
###Code
def db2_create_sample(quiet):
create_department = """
BEGIN
DECLARE FOUND INTEGER;
SET FOUND = (SELECT COUNT(*) FROM SYSIBM.SYSTABLES WHERE NAME='DEPARTMENT' AND CREATOR=CURRENT USER);
IF FOUND = 0 THEN
EXECUTE IMMEDIATE('CREATE TABLE DEPARTMENT(DEPTNO CHAR(3) NOT NULL, DEPTNAME VARCHAR(36) NOT NULL,
MGRNO CHAR(6),ADMRDEPT CHAR(3) NOT NULL)');
EXECUTE IMMEDIATE('INSERT INTO DEPARTMENT VALUES
(''A00'',''SPIFFY COMPUTER SERVICE DIV.'',''000010'',''A00''),
(''B01'',''PLANNING'',''000020'',''A00''),
(''C01'',''INFORMATION CENTER'',''000030'',''A00''),
(''D01'',''DEVELOPMENT CENTER'',NULL,''A00''),
(''D11'',''MANUFACTURING SYSTEMS'',''000060'',''D01''),
(''D21'',''ADMINISTRATION SYSTEMS'',''000070'',''D01''),
(''E01'',''SUPPORT SERVICES'',''000050'',''A00''),
(''E11'',''OPERATIONS'',''000090'',''E01''),
(''E21'',''SOFTWARE SUPPORT'',''000100'',''E01''),
(''F22'',''BRANCH OFFICE F2'',NULL,''E01''),
(''G22'',''BRANCH OFFICE G2'',NULL,''E01''),
(''H22'',''BRANCH OFFICE H2'',NULL,''E01''),
(''I22'',''BRANCH OFFICE I2'',NULL,''E01''),
(''J22'',''BRANCH OFFICE J2'',NULL,''E01'')');
END IF;
END"""
%sql -d -q {create_department}
create_employee = """
BEGIN
DECLARE FOUND INTEGER;
SET FOUND = (SELECT COUNT(*) FROM SYSIBM.SYSTABLES WHERE NAME='EMPLOYEE' AND CREATOR=CURRENT USER);
IF FOUND = 0 THEN
EXECUTE IMMEDIATE('CREATE TABLE EMPLOYEE(
EMPNO CHAR(6) NOT NULL,
FIRSTNME VARCHAR(12) NOT NULL,
MIDINIT CHAR(1),
LASTNAME VARCHAR(15) NOT NULL,
WORKDEPT CHAR(3),
PHONENO CHAR(4),
HIREDATE DATE,
JOB CHAR(8),
EDLEVEL SMALLINT NOT NULL,
SEX CHAR(1),
BIRTHDATE DATE,
SALARY DECIMAL(9,2),
BONUS DECIMAL(9,2),
COMM DECIMAL(9,2)
)');
EXECUTE IMMEDIATE('INSERT INTO EMPLOYEE VALUES
(''000010'',''CHRISTINE'',''I'',''HAAS'' ,''A00'',''3978'',''1995-01-01'',''PRES '',18,''F'',''1963-08-24'',152750.00,1000.00,4220.00),
(''000020'',''MICHAEL'' ,''L'',''THOMPSON'' ,''B01'',''3476'',''2003-10-10'',''MANAGER '',18,''M'',''1978-02-02'',94250.00,800.00,3300.00),
(''000030'',''SALLY'' ,''A'',''KWAN'' ,''C01'',''4738'',''2005-04-05'',''MANAGER '',20,''F'',''1971-05-11'',98250.00,800.00,3060.00),
(''000050'',''JOHN'' ,''B'',''GEYER'' ,''E01'',''6789'',''1979-08-17'',''MANAGER '',16,''M'',''1955-09-15'',80175.00,800.00,3214.00),
(''000060'',''IRVING'' ,''F'',''STERN'' ,''D11'',''6423'',''2003-09-14'',''MANAGER '',16,''M'',''1975-07-07'',72250.00,500.00,2580.00),
(''000070'',''EVA'' ,''D'',''PULASKI'' ,''D21'',''7831'',''2005-09-30'',''MANAGER '',16,''F'',''2003-05-26'',96170.00,700.00,2893.00),
(''000090'',''EILEEN'' ,''W'',''HENDERSON'' ,''E11'',''5498'',''2000-08-15'',''MANAGER '',16,''F'',''1971-05-15'',89750.00,600.00,2380.00),
(''000100'',''THEODORE'' ,''Q'',''SPENSER'' ,''E21'',''0972'',''2000-06-19'',''MANAGER '',14,''M'',''1980-12-18'',86150.00,500.00,2092.00),
(''000110'',''VINCENZO'' ,''G'',''LUCCHESSI'' ,''A00'',''3490'',''1988-05-16'',''SALESREP'',19,''M'',''1959-11-05'',66500.00,900.00,3720.00),
(''000120'',''SEAN'' ,'' '',''O`CONNELL'' ,''A00'',''2167'',''1993-12-05'',''CLERK '',14,''M'',''1972-10-18'',49250.00,600.00,2340.00),
(''000130'',''DELORES'' ,''M'',''QUINTANA'' ,''C01'',''4578'',''2001-07-28'',''ANALYST '',16,''F'',''1955-09-15'',73800.00,500.00,1904.00),
(''000140'',''HEATHER'' ,''A'',''NICHOLLS'' ,''C01'',''1793'',''2006-12-15'',''ANALYST '',18,''F'',''1976-01-19'',68420.00,600.00,2274.00),
(''000150'',''BRUCE'' ,'' '',''ADAMSON'' ,''D11'',''4510'',''2002-02-12'',''DESIGNER'',16,''M'',''1977-05-17'',55280.00,500.00,2022.00),
(''000160'',''ELIZABETH'',''R'',''PIANKA'' ,''D11'',''3782'',''2006-10-11'',''DESIGNER'',17,''F'',''1980-04-12'',62250.00,400.00,1780.00),
(''000170'',''MASATOSHI'',''J'',''YOSHIMURA'' ,''D11'',''2890'',''1999-09-15'',''DESIGNER'',16,''M'',''1981-01-05'',44680.00,500.00,1974.00),
(''000180'',''MARILYN'' ,''S'',''SCOUTTEN'' ,''D11'',''1682'',''2003-07-07'',''DESIGNER'',17,''F'',''1979-02-21'',51340.00,500.00,1707.00),
(''000190'',''JAMES'' ,''H'',''WALKER'' ,''D11'',''2986'',''2004-07-26'',''DESIGNER'',16,''M'',''1982-06-25'',50450.00,400.00,1636.00),
(''000200'',''DAVID'' ,'' '',''BROWN'' ,''D11'',''4501'',''2002-03-03'',''DESIGNER'',16,''M'',''1971-05-29'',57740.00,600.00,2217.00),
(''000210'',''WILLIAM'' ,''T'',''JONES'' ,''D11'',''0942'',''1998-04-11'',''DESIGNER'',17,''M'',''2003-02-23'',68270.00,400.00,1462.00),
(''000220'',''JENNIFER'' ,''K'',''LUTZ'' ,''D11'',''0672'',''1998-08-29'',''DESIGNER'',18,''F'',''1978-03-19'',49840.00,600.00,2387.00),
(''000230'',''JAMES'' ,''J'',''JEFFERSON'' ,''D21'',''2094'',''1996-11-21'',''CLERK '',14,''M'',''1980-05-30'',42180.00,400.00,1774.00),
(''000240'',''SALVATORE'',''M'',''MARINO'' ,''D21'',''3780'',''2004-12-05'',''CLERK '',17,''M'',''2002-03-31'',48760.00,600.00,2301.00),
(''000250'',''DANIEL'' ,''S'',''SMITH'' ,''D21'',''0961'',''1999-10-30'',''CLERK '',15,''M'',''1969-11-12'',49180.00,400.00,1534.00),
(''000260'',''SYBIL'' ,''P'',''JOHNSON'' ,''D21'',''8953'',''2005-09-11'',''CLERK '',16,''F'',''1976-10-05'',47250.00,300.00,1380.00),
(''000270'',''MARIA'' ,''L'',''PEREZ'' ,''D21'',''9001'',''2006-09-30'',''CLERK '',15,''F'',''2003-05-26'',37380.00,500.00,2190.00),
(''000280'',''ETHEL'' ,''R'',''SCHNEIDER'' ,''E11'',''8997'',''1997-03-24'',''OPERATOR'',17,''F'',''1976-03-28'',36250.00,500.00,2100.00),
(''000290'',''JOHN'' ,''R'',''PARKER'' ,''E11'',''4502'',''2006-05-30'',''OPERATOR'',12,''M'',''1985-07-09'',35340.00,300.00,1227.00),
(''000300'',''PHILIP'' ,''X'',''SMITH'' ,''E11'',''2095'',''2002-06-19'',''OPERATOR'',14,''M'',''1976-10-27'',37750.00,400.00,1420.00),
(''000310'',''MAUDE'' ,''F'',''SETRIGHT'' ,''E11'',''3332'',''1994-09-12'',''OPERATOR'',12,''F'',''1961-04-21'',35900.00,300.00,1272.00),
(''000320'',''RAMLAL'' ,''V'',''MEHTA'' ,''E21'',''9990'',''1995-07-07'',''FIELDREP'',16,''M'',''1962-08-11'',39950.00,400.00,1596.00),
(''000330'',''WING'' ,'' '',''LEE'' ,''E21'',''2103'',''2006-02-23'',''FIELDREP'',14,''M'',''1971-07-18'',45370.00,500.00,2030.00),
(''000340'',''JASON'' ,''R'',''GOUNOT'' ,''E21'',''5698'',''1977-05-05'',''FIELDREP'',16,''M'',''1956-05-17'',43840.00,500.00,1907.00),
(''200010'',''DIAN'' ,''J'',''HEMMINGER'' ,''A00'',''3978'',''1995-01-01'',''SALESREP'',18,''F'',''1973-08-14'',46500.00,1000.00,4220.00),
(''200120'',''GREG'' ,'' '',''ORLANDO'' ,''A00'',''2167'',''2002-05-05'',''CLERK '',14,''M'',''1972-10-18'',39250.00,600.00,2340.00),
(''200140'',''KIM'' ,''N'',''NATZ'' ,''C01'',''1793'',''2006-12-15'',''ANALYST '',18,''F'',''1976-01-19'',68420.00,600.00,2274.00),
(''200170'',''KIYOSHI'' ,'' '',''YAMAMOTO'' ,''D11'',''2890'',''2005-09-15'',''DESIGNER'',16,''M'',''1981-01-05'',64680.00,500.00,1974.00),
(''200220'',''REBA'' ,''K'',''JOHN'' ,''D11'',''0672'',''2005-08-29'',''DESIGNER'',18,''F'',''1978-03-19'',69840.00,600.00,2387.00),
(''200240'',''ROBERT'' ,''M'',''MONTEVERDE'',''D21'',''3780'',''2004-12-05'',''CLERK '',17,''M'',''1984-03-31'',37760.00,600.00,2301.00),
(''200280'',''EILEEN'' ,''R'',''SCHWARTZ'' ,''E11'',''8997'',''1997-03-24'',''OPERATOR'',17,''F'',''1966-03-28'',46250.00,500.00,2100.00),
(''200310'',''MICHELLE'' ,''F'',''SPRINGER'' ,''E11'',''3332'',''1994-09-12'',''OPERATOR'',12,''F'',''1961-04-21'',35900.00,300.00,1272.00),
(''200330'',''HELENA'' ,'' '',''WONG'' ,''E21'',''2103'',''2006-02-23'',''FIELDREP'',14,''F'',''1971-07-18'',35370.00,500.00,2030.00),
(''200340'',''ROY'' ,''R'',''ALONZO'' ,''E21'',''5698'',''1997-07-05'',''FIELDREP'',16,''M'',''1956-05-17'',31840.00,500.00,1907.00)');
END IF;
END"""
%sql -d -q {create_employee}
if (quiet == False): success("Sample tables [EMPLOYEE, DEPARTMENT] created.")
###Output
_____no_output_____
###Markdown
Check optionThis function will return the original string with the option removed, and a flag of true or false indicating whether the option was found.```args, flag = checkOption(option_string, option, false_value, true_value)```Options are specified with a -x where x is the character that we are searching for. It may actually be more than one character long like -pb/-pi/etc... The false and true values are optional. By default these are the boolean values of T/F, but for some options they could be character strings like ';' versus '@' for delimiters.
###Code
def checkOption(args_in, option, vFalse=False, vTrue=True):
args_out = args_in.strip()
found = vFalse
if (args_out != ""):
if (args_out.find(option) >= 0):
args_out = args_out.replace(option," ")
args_out = args_out.strip()
found = vTrue
return args_out, found
###Output
_____no_output_____
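A short usage sketch (assuming the cell above has been run) shows how an option is stripped out of the argument string and how the optional false/true values work:
```
args, quiet = checkOption("-q SELECT * FROM EMPLOYEE", "-q")
print(args)    # SELECT * FROM EMPLOYEE
print(quiet)   # True

args, delim = checkOption("SELECT * FROM EMPLOYEE", "-d", ";", "@")
print(delim)   # ; (the option was not found, so the false value is returned)
```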
###Markdown
Plot DataThis function will plot the data that is returned from the answer set. The type of plot is determined by the flags that were supplied on the %sql command: -pb/-bar for a bar chart, -pp/-pie for a pie chart, and -pl/-line for a line chart.```plotData(hdbi, sql)```The hdbi is the ibm_db_sa handle that is used by pandas dataframes to run the SQL.
###Code
def plotData(hdbi, sql):
try:
df = pandas.read_sql(sql,hdbi)
except Exception as err:
db2_error(False)
return
if df.empty:
errormsg("No results returned")
return
col_count = len(df.columns)
if flag(["-pb","-bar"]): # Plot 1 = bar chart
if (col_count in (1,2,3)):
if (col_count == 1):
df.index = df.index + 1
_ = df.plot(kind='bar');
_ = plt.plot();
elif (col_count == 2):
xlabel = df.columns.values[0]
ylabel = df.columns.values[1]
df.plot(kind='bar',x=xlabel,y=ylabel);
_ = plt.plot();
else:
values = df.columns.values[2]
columns = df.columns.values[0]
index = df.columns.values[1]
pivoted = pandas.pivot_table(df, values=values, columns=columns, index=index)
_ = pivoted.plot.bar();
else:
errormsg("Can't determine what columns to plot")
return
elif flag(["-pp","-pie"]): # Plot 2 = pie chart
if (col_count in (1,2)):
if (col_count == 1):
df.index = df.index + 1
yname = df.columns.values[0]
_ = df.plot(kind='pie',y=yname);
else:
xlabel = df.columns.values[0]
xname = df[xlabel].tolist()
yname = df.columns.values[1]
_ = df.plot(kind='pie',y=yname,labels=xname);
plt.show();
else:
errormsg("Can't determine what columns to plot")
return
elif flag(["-pl","-line"]): # Plot 3 = line chart
if (col_count in (1,2,3)):
if (col_count == 1):
df.index = df.index + 1
_ = df.plot(kind='line');
elif (col_count == 2):
xlabel = df.columns.values[0]
ylabel = df.columns.values[1]
_ = df.plot(kind='line',x=xlabel,y=ylabel) ;
else:
values = df.columns.values[2]
columns = df.columns.values[0]
index = df.columns.values[1]
pivoted = pandas.pivot_table(df, values=values, columns=columns, index=index)
_ = pivoted.plot();
plt.show();
else:
errormsg("Can't determine what columns to plot")
return
else:
return
###Output
_____no_output_____
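Since the plot type is selected by the flags on the %sql command rather than by an argument to this routine, a typical way to exercise it (assuming a connection exists and the sample tables have been created) would be:
```
%sql -bar SELECT WORKDEPT, COUNT(*) AS EMPLOYEES FROM EMPLOYEE GROUP BY WORKDEPT
%sql -pie SELECT SEX, COUNT(*) AS EMPLOYEES FROM EMPLOYEE GROUP BY SEX
%sql -line SELECT EDLEVEL, AVG(SALARY) AS AVG_SALARY FROM EMPLOYEE GROUP BY EDLEVEL
```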
###Markdown
Find a ProcedureThis routine will check to see if a procedure exists with the SCHEMA/NAME (or just NAME if no schema is supplied) and returns the number of answer sets returned. Possible values are 0, 1 (or greater) or None. If None is returned then we can't find the procedure anywhere.
###Code
def findProc(procname):
global _hdbc, _hdbi, _connected, _runtime
# Split the procedure name into schema.procname if appropriate
upper_procname = procname.upper()
schema, proc = split_string(upper_procname,".") # Expect schema.procname
if (proc == None):
proc = schema
# Call ibm_db.procedures to see if the procedure does exist
schema = "%"
try:
stmt = ibm_db.procedures(_hdbc, None, schema, proc)
if (stmt == False): # Error executing the code
errormsg("Procedure " + procname + " not found in the system catalog.")
return None
result = ibm_db.fetch_tuple(stmt)
resultsets = result[5]
if (resultsets >= 1): resultsets = 1
return resultsets
except Exception as err:
errormsg("Procedure " + procname + " not found in the system catalog.")
return None
###Output
_____no_output_____
###Markdown
Get ColumnsGiven a statement handle, determine the column names and data types for the result set.
###Code
def getColumns(stmt):
columns = []
types = []
colcount = 0
try:
colname = ibm_db.field_name(stmt,colcount)
coltype = ibm_db.field_type(stmt,colcount)
while (colname != False):
columns.append(colname)
types.append(coltype)
colcount += 1
colname = ibm_db.field_name(stmt,colcount)
coltype = ibm_db.field_type(stmt,colcount)
return columns,types
except Exception as err:
db2_error(False)
return None
###Output
_____no_output_____
###Markdown
Call a ProcedureThe CALL statement is used for execution of a stored procedure. The format of the CALL statement is:```CALL PROC_NAME(x,y,z,...)```Procedures allow for the return of answer sets (cursors) as well as changing the contents of the parameters being passed to the procedure. In this implementation, the CALL function is limited to returning one answer set (or nothing). If you want to use more complex stored procedures then you will have to use the native python libraries.
###Code
def parseCall(hdbc, inSQL):
global _hdbc, _hdbi, _connected, _runtime, _environment
# Check to see if we are connected first
if (_connected == False): # Check if you are connected
db2_doConnect()
if _connected == False: return None
remainder = inSQL.strip()
procName, procArgs = parseCallArgs(remainder[5:]) # Assume that CALL ... is the format
resultsets = findProc(procName)
if (resultsets == None): return None
argvalues = []
if (len(procArgs) > 0): # We have arguments to consider
for arg in procArgs:
varname = arg[1]
if (len(varname) > 0):
if (varname[0] == ":"):
checkvar = varname[1:]
                    varvalue, vartype = getContents(checkvar, False, globals())   # Look up the variable contents (no added quotes for CALL parameters)
if (varvalue == None):
errormsg("Variable " + checkvar + " is not defined.")
return None
argvalues.append(varvalue)
else:
if (varname.upper() == "NULL"):
argvalues.append(None)
else:
argvalues.append(varname)
else:
if (varname.upper() == "NULL"):
argvalues.append(None)
else:
argvalues.append(varname)
try:
if (len(procArgs) > 0):
argtuple = tuple(argvalues)
result = ibm_db.callproc(_hdbc,procName,argtuple)
stmt = result[0]
else:
result = ibm_db.callproc(_hdbc,procName)
stmt = result
if (resultsets == 1 and stmt != None):
columns, types = getColumns(stmt)
if (columns == None): return None
rows = []
rowlist = ibm_db.fetch_tuple(stmt)
while ( rowlist ) :
row = []
colcount = 0
for col in rowlist:
try:
if (types[colcount] in ["int","bigint"]):
row.append(int(col))
elif (types[colcount] in ["decimal","real"]):
row.append(float(col))
elif (types[colcount] in ["date","time","timestamp"]):
row.append(str(col))
else:
row.append(col)
except:
row.append(col)
colcount += 1
rows.append(row)
rowlist = ibm_db.fetch_tuple(stmt)
if flag(["-r","-array"]):
rows.insert(0,columns)
if len(procArgs) > 0:
allresults = []
allresults.append(rows)
for x in result[1:]:
allresults.append(x)
return allresults # rows,returned_results
else:
return rows
else:
df = pandas.DataFrame.from_records(rows,columns=columns)
if flag("-grid") or _settings['display'] == 'GRID':
if (_environment['qgrid'] == False):
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
pdisplay(df)
else:
try:
pdisplay(qgrid.show_grid(df))
except:
errormsg("Grid cannot be used to display data with duplicate column names. Use option -a or %sql OPTION DISPLAY PANDAS instead.")
return
else:
if flag(["-a","-all"]) or _settings["maxrows"] == -1 : # All of the rows
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
pdisplay(df)
else:
return df
else:
if len(procArgs) > 0:
allresults = []
for x in result[1:]:
allresults.append(x)
return allresults # rows,returned_results
else:
return None
except Exception as err:
db2_error(False)
return None
###Output
_____no_output_____
###Markdown
Parse Prepare/ExecuteThe PREPARE statement is used for repeated execution of a SQL statement. The PREPARE statement has the format:```stmt = PREPARE SELECT EMPNO FROM EMPLOYEE WHERE WORKDEPT=? AND SALARY<?```The SQL statement that you want executed is placed after the PREPARE statement with the location of variables marked with ? (parameter) markers. The variable stmt contains the prepared statement that needs to be passed to the EXECUTE statement. The EXECUTE statement has the format:```EXECUTE :x USING z, y, s ```The first variable (:x) is the name of the variable that you assigned the results of the prepare statement to. The values after the USING clause are substituted into the prepared statement where the ? markers are found. If the values in the USING clause are variable names (z, y, s), a **link** is created to these variables as part of the execute statement. If you use the variable substitution form of variable name (:z, :y, :s), the **contents** of the variable are placed into the USING clause. Normally this would not make much of a difference except when you are dealing with binary strings or JSON strings where the quote characters may cause some problems when substituted into the statement.
###Code
def parsePExec(hdbc, inSQL):
import ibm_db
global _stmt, _stmtID, _stmtSQL, sqlcode
cParms = inSQL.split()
parmCount = len(cParms)
if (parmCount == 0): return(None) # Nothing to do but this shouldn't happen
keyword = cParms[0].upper() # Upper case the keyword
if (keyword == "PREPARE"): # Prepare the following SQL
uSQL = inSQL.upper()
found = uSQL.find("PREPARE")
sql = inSQL[found+7:].strip()
try:
pattern = "\?\*[0-9]+"
findparm = re.search(pattern,sql)
while findparm != None:
found = findparm.group(0)
count = int(found[2:])
markers = ('?,' * count)[:-1]
sql = sql.replace(found,markers)
findparm = re.search(pattern,sql)
stmt = ibm_db.prepare(hdbc,sql) # Check error code here
if (stmt == False):
db2_error(False)
return(False)
stmttext = str(stmt).strip()
stmtID = stmttext[33:48].strip()
if (stmtID in _stmtID) == False:
_stmt.append(stmt) # Prepare and return STMT to caller
_stmtID.append(stmtID)
else:
stmtIX = _stmtID.index(stmtID)
                _stmt[stmtIX] = stmt
return(stmtID)
except Exception as err:
print(err)
db2_error(False)
return(False)
if (keyword == "EXECUTE"): # Execute the prepare statement
if (parmCount < 2): return(False) # No stmtID available
stmtID = cParms[1].strip()
if (stmtID in _stmtID) == False:
errormsg("Prepared statement not found or invalid.")
return(False)
stmtIX = _stmtID.index(stmtID)
stmt = _stmt[stmtIX]
try:
if (parmCount == 2): # Only the statement handle available
result = ibm_db.execute(stmt) # Run it
elif (parmCount == 3): # Not quite enough arguments
errormsg("Missing or invalid USING clause on EXECUTE statement.")
sqlcode = -99999
return(False)
else:
using = cParms[2].upper()
if (using != "USING"): # Bad syntax again
errormsg("Missing USING clause on EXECUTE statement.")
sqlcode = -99999
return(False)
uSQL = inSQL.upper()
found = uSQL.find("USING")
parmString = inSQL[found+5:].strip()
parmset = splitargs(parmString)
if (len(parmset) == 0):
errormsg("Missing parameters after the USING clause.")
sqlcode = -99999
return(False)
parms = []
parm_count = 0
CONSTANT = 0
VARIABLE = 1
const = [0]
const_cnt = 0
for v in parmset:
parm_count = parm_count + 1
if (v[1] == True or v[2] == True): # v[1] true if string, v[2] true if num
parm_type = CONSTANT
const_cnt = const_cnt + 1
if (v[2] == True):
if (isinstance(v[0],int) == True): # Integer value
sql_type = ibm_db.SQL_INTEGER
elif (isinstance(v[0],float) == True): # Float value
sql_type = ibm_db.SQL_DOUBLE
else:
sql_type = ibm_db.SQL_INTEGER
else:
sql_type = ibm_db.SQL_CHAR
const.append(v[0])
else:
parm_type = VARIABLE
# See if the variable has a type associated with it varname@type
varset = v[0].split("@")
parm_name = varset[0]
parm_datatype = "char"
# Does the variable exist?
if (parm_name not in globals()):
errormsg("SQL Execute parameter " + parm_name + " not found")
sqlcode = -99999
                            return(False)
if (len(varset) > 1): # Type provided
parm_datatype = varset[1]
if (parm_datatype == "dec" or parm_datatype == "decimal"):
sql_type = ibm_db.SQL_DOUBLE
elif (parm_datatype == "bin" or parm_datatype == "binary"):
sql_type = ibm_db.SQL_BINARY
elif (parm_datatype == "int" or parm_datatype == "integer"):
sql_type = ibm_db.SQL_INTEGER
else:
sql_type = ibm_db.SQL_CHAR
try:
if (parm_type == VARIABLE):
result = ibm_db.bind_param(stmt, parm_count, globals()[parm_name], ibm_db.SQL_PARAM_INPUT, sql_type)
else:
result = ibm_db.bind_param(stmt, parm_count, const[const_cnt], ibm_db.SQL_PARAM_INPUT, sql_type)
except:
result = False
if (result == False):
errormsg("SQL Bind on variable " + parm_name + " failed.")
sqlcode = -99999
                        return(False)
result = ibm_db.execute(stmt) # ,tuple(parms))
if (result == False):
errormsg("SQL Execute failed.")
return(False)
if (ibm_db.num_fields(stmt) == 0): return(True) # Command successfully completed
return(fetchResults(stmt))
except Exception as err:
db2_error(False)
return(False)
return(False)
return(False)
###Output
_____no_output_____
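As a usage sketch (based on the syntax described above, and assuming a connection to a database that contains the EMPLOYEE table), a statement can be prepared once and then executed with different values:
```
stmt = %sql PREPARE SELECT LASTNAME FROM EMPLOYEE WHERE WORKDEPT=? AND SALARY < ?
for dept in ['A00', 'B01', 'C01']:
    result = %sql EXECUTE :stmt USING dept, 80000
    print(dept, result)   # result is an array of rows with the column names in the first row
```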
###Markdown
Fetch Result SetThis code will take the stmt handle and then produce a result set of rows as either an array (`-r`,`-array`) or as an array of json records (`-json`).
###Code
def fetchResults(stmt):
global sqlcode
rows = []
columns, types = getColumns(stmt)
# By default we assume that the data will be an array
is_array = True
# Check what type of data we want returned - array or json
if (flag(["-r","-array"]) == False):
# See if we want it in JSON format, if not it remains as an array
if (flag("-json") == True):
is_array = False
# Set column names to lowercase for JSON records
if (is_array == False):
        columns = [col.lower() for col in columns] # Convert to lowercase for ease of access
# First row of an array has the column names in it
if (is_array == True):
rows.append(columns)
result = ibm_db.fetch_tuple(stmt)
rowcount = 0
while (result):
rowcount += 1
if (is_array == True):
row = []
else:
row = {}
colcount = 0
for col in result:
try:
if (types[colcount] in ["int","bigint"]):
if (is_array == True):
row.append(int(col))
else:
row[columns[colcount]] = int(col)
elif (types[colcount] in ["decimal","real"]):
if (is_array == True):
row.append(float(col))
else:
row[columns[colcount]] = float(col)
elif (types[colcount] in ["date","time","timestamp"]):
if (is_array == True):
row.append(str(col))
else:
row[columns[colcount]] = str(col)
else:
if (is_array == True):
row.append(col)
else:
row[columns[colcount]] = col
except:
if (is_array == True):
row.append(col)
else:
row[columns[colcount]] = col
colcount += 1
rows.append(row)
result = ibm_db.fetch_tuple(stmt)
if (rowcount == 0):
sqlcode = 100
else:
sqlcode = 0
return rows
###Output
_____no_output_____
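For example (assuming a connection exists and the sample tables have been created), the same query can be returned in either shape:
```
rows = %sql -r SELECT EMPNO, LASTNAME FROM EMPLOYEE FETCH FIRST 2 ROWS ONLY
# rows[0] is the list of column names and rows[1:] are the data rows

records = %sql -json SELECT EMPNO, LASTNAME FROM EMPLOYEE FETCH FIRST 2 ROWS ONLY
# records is a list of dictionaries keyed by the lowercase column names
```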
###Markdown
Parse CommitThere are three possible COMMIT verbs that can be used: COMMIT [WORK], which commits the work in progress (the WORK keyword is not checked for); ROLLBACK, which rolls back the unit of work; and AUTOCOMMIT ON/OFF, which controls whether statements are committed automatically. The statement is passed to this routine and then checked.
###Code
def parseCommit(sql):
global _hdbc, _hdbi, _connected, _runtime, _stmt, _stmtID, _stmtSQL
if (_connected == False): return # Nothing to do if we are not connected
cParms = sql.split()
if (len(cParms) == 0): return # Nothing to do but this shouldn't happen
keyword = cParms[0].upper() # Upper case the keyword
if (keyword == "COMMIT"): # Commit the work that was done
try:
result = ibm_db.commit (_hdbc) # Commit the connection
if (len(cParms) > 1):
keyword = cParms[1].upper()
if (keyword == "HOLD"):
return
del _stmt[:]
del _stmtID[:]
except Exception as err:
db2_error(False)
return
if (keyword == "ROLLBACK"): # Rollback the work that was done
try:
result = ibm_db.rollback(_hdbc) # Rollback the connection
del _stmt[:]
del _stmtID[:]
except Exception as err:
db2_error(False)
return
if (keyword == "AUTOCOMMIT"): # Is autocommit on or off
if (len(cParms) > 1):
op = cParms[1].upper() # Need ON or OFF value
else:
return
try:
if (op == "OFF"):
ibm_db.autocommit(_hdbc, False)
elif (op == "ON"):
ibm_db.autocommit (_hdbc, True)
return
except Exception as err:
db2_error(False)
return
return
###Output
_____no_output_____
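A usage sketch of these commands (assuming a connection exists and the DEPARTMENT sample table has been created) looks like this:
```
%sql AUTOCOMMIT OFF
%sql INSERT INTO DEPARTMENT VALUES ('Z99','TEST DEPARTMENT',NULL,'A00')
%sql ROLLBACK
%sql AUTOCOMMIT ON
```
The INSERT is undone by the ROLLBACK because autocommit was turned off before it ran.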
###Markdown
Set FlagsThis code will take the input SQL block and update the global flag list. The global flag list is just a list of options that are set at the beginning of a code block. The absence of a flag means it is false. If it exists it is true.
###Code
def setFlags(inSQL):
global _flags
_flags = [] # Delete all of the current flag settings
pos = 0
end = len(inSQL)-1
inFlag = False
ignore = False
outSQL = ""
flag = ""
while (pos <= end):
ch = inSQL[pos]
if (ignore == True):
outSQL = outSQL + ch
else:
if (inFlag == True):
if (ch != " "):
flag = flag + ch
else:
_flags.append(flag)
inFlag = False
else:
if (ch == "-"):
flag = "-"
inFlag = True
elif (ch == ' '):
outSQL = outSQL + ch
else:
outSQL = outSQL + ch
ignore = True
pos += 1
if (inFlag == True):
_flags.append(flag)
return outSQL
###Output
_____no_output_____
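A short sketch (assuming the cell above has been run) of how the flags are separated from the SQL text:
```
sql = setFlags("-q -grid SELECT * FROM EMPLOYEE")
print(sql)      # SELECT * FROM EMPLOYEE
print(_flags)   # ['-q', '-grid']
```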
###Markdown
Check to see if flag ExistsThis function determines whether or not a flag exists in the global flag array. Absence of a value means it is false. The parameter can be a single value, or an array of values.
###Code
def flag(inflag):
global _flags
if isinstance(inflag,list):
for x in inflag:
if (x in _flags):
return True
return False
else:
if (inflag in _flags):
return True
else:
return False
###Output
_____no_output_____
###Markdown
Generate a list of SQL lines based on a delimiterNote that this function will make sure that quotes are properly maintained so that delimiters inside of quoted strings do not cause errors.
###Code
def splitSQL(inputString, delimiter):
pos = 0
arg = ""
results = []
quoteCH = ""
inSQL = inputString.strip()
if (len(inSQL) == 0): return(results) # Not much to do here - no args found
while pos < len(inSQL):
ch = inSQL[pos]
pos += 1
if (ch in ('"',"'")): # Is this a quote characters?
arg = arg + ch # Keep appending the characters to the current arg
if (ch == quoteCH): # Is this quote character we are in
quoteCH = ""
elif (quoteCH == ""): # Create the quote
quoteCH = ch
else:
None
elif (quoteCH != ""): # Still in a quote
arg = arg + ch
elif (ch == delimiter): # Is there a delimiter?
results.append(arg)
arg = ""
else:
arg = arg + ch
if (arg != ""):
results.append(arg)
return(results)
###Output
_____no_output_____
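For instance (assuming the cell above has been run), a semi-colon inside a quoted string is not treated as a statement delimiter:
```
statements = splitSQL("SELECT 'a;b' FROM SYSIBM.SYSDUMMY1; VALUES 1", ";")
print(statements)
# ["SELECT 'a;b' FROM SYSIBM.SYSDUMMY1", ' VALUES 1']
```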
###Markdown
Main %sql Magic DefinitionThe main %sql Magic logic is found in this section of code. This code will register the Magic command and allow Jupyter notebooks to interact with Db2 by using this extension.
###Code
@magics_class
class DB2(Magics):
@needs_local_scope
@line_cell_magic
def sql(self, line, cell=None, local_ns=None):
        # Before we even get started, check to see if you have connected yet. Without a connection we
# can't do anything. You may have a connection request in the code, so if that is true, we run those,
# otherwise we connect immediately
# If your statement is not a connect, and you haven't connected, we need to do it for you
global _settings, _environment
global _hdbc, _hdbi, _connected, _runtime, sqlstate, sqlerror, sqlcode, sqlelapsed
# If you use %sql (line) we just run the SQL. If you use %%SQL the entire cell is run.
flag_cell = False
flag_output = False
sqlstate = "0"
sqlerror = ""
sqlcode = 0
sqlelapsed = 0
start_time = time.time()
end_time = time.time()
# Macros gets expanded before anything is done
SQL1 = setFlags(line.strip())
SQL1 = checkMacro(SQL1) # Update the SQL if any macros are in there
SQL2 = cell
if flag("-sampledata"): # Check if you only want sample data loaded
if (_connected == False):
if (db2_doConnect() == False):
errormsg('A CONNECT statement must be issued before issuing SQL statements.')
return
db2_create_sample(flag(["-q","-quiet"]))
return
if SQL1 == "?" or flag(["-h","-help"]): # Are you asking for help
sqlhelp()
return
if len(SQL1) == 0 and SQL2 == None: return # Nothing to do here
# Check for help
if SQL1.upper() == "? CONNECT": # Are you asking for help on CONNECT
connected_help()
return
sqlType,remainder = sqlParser(SQL1,local_ns) # What type of command do you have?
if (sqlType == "CONNECT"): # A connect request
parseConnect(SQL1,local_ns)
return
elif (sqlType == "DEFINE"): # Create a macro from the body
result = setMacro(SQL2,remainder)
return
elif (sqlType == "OPTION"):
setOptions(SQL1)
return
elif (sqlType == 'COMMIT' or sqlType == 'ROLLBACK' or sqlType == 'AUTOCOMMIT'):
parseCommit(remainder)
return
elif (sqlType == "PREPARE"):
pstmt = parsePExec(_hdbc, remainder)
return(pstmt)
elif (sqlType == "EXECUTE"):
result = parsePExec(_hdbc, remainder)
return(result)
elif (sqlType == "CALL"):
result = parseCall(_hdbc, remainder)
return(result)
else:
pass
sql = SQL1
if (sql == ""): sql = SQL2
if (sql == ""): return # Nothing to do here
if (_connected == False):
if (db2_doConnect() == False):
errormsg('A CONNECT statement must be issued before issuing SQL statements.')
return
if _settings["maxrows"] == -1: # Set the return result size
pandas.reset_option('display.max_rows')
else:
pandas.options.display.max_rows = _settings["maxrows"]
runSQL = re.sub('.*?--.*$',"",sql,flags=re.M)
remainder = runSQL.replace("\n"," ")
if flag(["-d","-delim"]):
sqlLines = splitSQL(remainder,"@")
else:
sqlLines = splitSQL(remainder,";")
flag_cell = True
# For each line figure out if you run it as a command (db2) or select (sql)
for sqlin in sqlLines: # Run each command
sqlin = checkMacro(sqlin) # Update based on any macros
sqlType, sql = sqlParser(sqlin,local_ns) # Parse the SQL
if (sql.strip() == ""): continue
if flag(["-e","-echo"]): debug(sql,False)
if flag("-t"):
cnt = sqlTimer(_hdbc, _settings["runtime"], sql) # Given the sql and parameters, clock the time
if (cnt >= 0): print("Total iterations in %s second(s): %s" % (_settings["runtime"],cnt))
return(cnt)
elif flag(["-pb","-bar","-pp","-pie","-pl","-line"]): # We are plotting some results
plotData(_hdbi, sql) # Plot the data and return
return
else:
try: # See if we have an answer set
stmt = ibm_db.prepare(_hdbc,sql)
if (ibm_db.num_fields(stmt) == 0): # No, so we just execute the code
result = ibm_db.execute(stmt) # Run it
if (result == False): # Error executing the code
db2_error(flag(["-q","-quiet"]))
continue
rowcount = ibm_db.num_rows(stmt)
if (rowcount == 0 and flag(["-q","-quiet"]) == False):
errormsg("No rows found.")
continue # Continue running
elif flag(["-r","-array","-j","-json"]): # raw, json, format json
row_count = 0
resultSet = []
try:
result = ibm_db.execute(stmt) # Run it
if (result == False): # Error executing the code
db2_error(flag(["-q","-quiet"]))
return
if flag("-j"): # JSON single output
row_count = 0
json_results = []
while( ibm_db.fetch_row(stmt) ):
row_count = row_count + 1
jsonVal = ibm_db.result(stmt,0)
jsonDict = json.loads(jsonVal)
json_results.append(jsonDict)
flag_output = True
if (row_count == 0): sqlcode = 100
return(json_results)
else:
return(fetchResults(stmt))
except Exception as err:
db2_error(flag(["-q","-quiet"]))
return
else:
try:
df = pandas.read_sql(sql,_hdbi)
except Exception as err:
db2_error(False)
return
if (len(df) == 0):
sqlcode = 100
if (flag(["-q","-quiet"]) == False):
errormsg("No rows found")
continue
flag_output = True
if flag("-grid") or _settings['display'] == 'GRID': # Check to see if we can display the results
if (_environment['qgrid'] == False):
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
print(df.to_string())
else:
try:
pdisplay(qgrid.show_grid(df))
except:
errormsg("Grid cannot be used to display data with duplicate column names. Use option -a or %sql OPTION DISPLAY PANDAS instead.")
return
else:
if flag(["-a","-all"]) or _settings["maxrows"] == -1 : # All of the rows
pandas.options.display.max_rows = None
pandas.options.display.max_columns = None
return df # print(df.to_string())
else:
pandas.options.display.max_rows = _settings["maxrows"]
pandas.options.display.max_columns = None
return df # pdisplay(df) # print(df.to_string())
except:
db2_error(flag(["-q","-quiet"]))
continue # return
end_time = time.time()
sqlelapsed = end_time - start_time
if (flag_output == False and flag(["-q","-quiet"]) == False): print("Command completed.")
# Register the Magic extension in Jupyter
ip = get_ipython()
ip.register_magics(DB2)
load_settings()
success("Db2 Extensions Loaded.")
###Output
_____no_output_____
###Markdown
Pre-defined MacrosThese macros are used to simulate the LIST TABLES and DESCRIBE commands that are available from within the Db2 command line.
###Code
%%sql define LIST
#
# The LIST macro is used to list all of the tables in the current schema or for all schemas
#
var syntax Syntax: LIST TABLES [FOR ALL | FOR SCHEMA name]
#
# Only LIST TABLES is supported by this macro
#
if {^1} <> 'TABLES'
exit {syntax}
endif
#
# This SQL is a temporary table that contains the description of the different table types
#
WITH TYPES(TYPE,DESCRIPTION) AS (
VALUES
('A','Alias'),
('G','Created temporary table'),
('H','Hierarchy table'),
('L','Detached table'),
('N','Nickname'),
('S','Materialized query table'),
('T','Table'),
('U','Typed table'),
('V','View'),
('W','Typed view')
)
SELECT TABNAME, TABSCHEMA, T.DESCRIPTION FROM SYSCAT.TABLES S, TYPES T
WHERE T.TYPE = S.TYPE
#
# Case 1: No arguments - LIST TABLES
#
if {argc} == 1
AND OWNER = CURRENT USER
ORDER BY TABNAME, TABSCHEMA
return
endif
#
# Case 2: Need 3 arguments - LIST TABLES FOR ALL
#
if {argc} == 3
if {^2}&{^3} == 'FOR&ALL'
ORDER BY TABNAME, TABSCHEMA
return
endif
exit {syntax}
endif
#
# Case 3: Need FOR SCHEMA something here
#
if {argc} == 4
if {^2}&{^3} == 'FOR&SCHEMA'
AND TABSCHEMA = '{^4}'
ORDER BY TABNAME, TABSCHEMA
return
else
exit {syntax}
endif
endif
#
# Nothing matched - Error
#
exit {syntax}
%%sql define describe
#
# The DESCRIBE command can either use the syntax DESCRIBE TABLE <name> or DESCRIBE TABLE SELECT ...
#
var syntax Syntax: DESCRIBE [TABLE name | SELECT statement]
#
# Check to see what count of variables is... Must be at least 2 items DESCRIBE TABLE x or SELECT x
#
if {argc} < 2
exit {syntax}
endif
CALL ADMIN_CMD('{*0}');
###Output
_____no_output_____
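With the macros defined, the commands can be used in much the same way as the Db2 command line (assuming a connection exists; the schema name below is only an example):
```
%sql LIST TABLES
%sql LIST TABLES FOR SCHEMA DB2INST1
%sql DESCRIBE TABLE EMPLOYEE
```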
###Markdown
Set the table formatting to left align a table in a cell. By default, tables are centered in a cell. Remove this cell if you don't want to change Jupyter notebook formatting for tables. In addition, we skip this code if you are running in a shell environment rather than a Jupyter notebook
###Code
#%%html
#<style>
# table {margin-left: 0 !important; text-align: left;}
#</style>
###Output
_____no_output_____
###Markdown
DB2 Jupyter Notebook ExtensionsVersion: 2021-08-23 This code is imported as a Jupyter notebook extension in any notebooks you create with DB2 code in it. Place the following line of code in any notebook that you want to use these commands with:```%run db2.ipynb```This code defines a Jupyter/Python magic command called `%sql` which allows you to execute DB2 specific calls to the database. There are other packages available for manipulating databases, but this one has been specifically designed for demonstrating a number of the SQL features available in DB2.There are two ways of executing the `%sql` command. A single line SQL statement would use the line format of the magic command:```%sql SELECT * FROM EMPLOYEE```If you have a large block of sql then you would place the %%sql command at the beginning of the block and then place the SQL statements into the remainder of the block. Using this form of the `%%sql` statement means that the notebook cell can only contain SQL and no other statements.
```
%%sql
SELECT * FROM EMPLOYEE
ORDER BY LASTNAME
```
You can have multiple lines in the SQL block (`%%sql`). The default SQL delimiter is the semi-colon (`;`). If you have scripts (triggers, procedures, functions) that use the semi-colon as part of the script, you will need to use the `-d` option to change the delimiter to an at "`@`" sign.
```
%%sql -d
SELECT * FROM EMPLOYEE
@
CREATE PROCEDURE ...
@
```
The `%sql` command allows most DB2 commands to execute and has a special version of the CONNECT statement. A CONNECT by itself will attempt to reconnect to the database using previously used settings. If it cannot connect, it will prompt the user for additional information. The CONNECT command has the following format:```%sql CONNECT TO <database> USER <userid> USING <password | ?> HOST <ip address> PORT <port number>```If you use a "`?`" for the password field, the system will prompt you for a password. This avoids typing the password as clear text on the screen. If a connection is not successful, the system will print the error message associated with the connect request. If the connection is successful, the parameters are saved on your system and will be used the next time you run a SQL statement, or when you issue the %sql CONNECT command with no parameters. In addition to the -d option, there are a number of different options that you can specify at the beginning of the SQL: - `-d, -delim` - Change SQL delimiter to "`@`" from "`;`" - `-q, -quiet` - Quiet results - no messages returned from the function - `-r, -array` - Return the result set as an array of values instead of a dataframe - `-t, -time` - Time the following SQL statement and return the number of times it executes in 1 second - `-j` - Format the first character column of the result set as a JSON record - `-json` - Return result set as an array of json records - `-a, -all` - Return all rows in answer set and do not limit display - `-grid` - Display the results in a scrollable grid - `-pb, -bar` - Plot the results as a bar chart - `-pl, -line` - Plot the results as a line chart - `-pp, -pie` - Plot the results as a pie chart - `-e, -echo` - Any macro expansions are displayed in an output box - `-sampledata` - Create and load the EMPLOYEE and DEPARTMENT tables. You can pass python variables to the `%sql` command by using the `{}` braces with the name of the variable in between. Note that you will need to place proper punctuation around the variable in the event the SQL command requires it.
For instance, the following example will find employee '000010' in the EMPLOYEE table.
```
empno = '000010'
%sql SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO='{empno}'
```
The other option is to use parameter markers. What you would need to do is use the name of the variable with a colon in front of it and the program will prepare the statement and then pass the variable to Db2 when the statement is executed. This allows you to create complex strings that might contain quote characters and other special characters and not have to worry about enclosing the string with the correct quotes. Note that you do not place the quotes around the variable even though it is a string.
```
empno = '000020'
%sql SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO=:empno
```
Development SQLThe previous set of `%sql` and `%%sql` commands deals with SQL statements and commands that are run in an interactive manner. There is a class of SQL commands that are more suited to a development environment where code is iterated or requires changing input. The commands that are associated with this form of SQL are AUTOCOMMIT, COMMIT/ROLLBACK, PREPARE, and EXECUTE. Autocommit is the default manner in which SQL statements are executed. At the end of the successful completion of a statement, the results are committed to the database. There is no concept of a transaction where multiple DML/DDL statements are considered one transaction. The `AUTOCOMMIT` command allows you to turn autocommit `OFF` or `ON`. This means that the set of SQL commands run after the `AUTOCOMMIT OFF` command are not committed to the database until a `COMMIT` or `ROLLBACK` command is issued. `COMMIT` (`WORK`) will finalize all of the transactions (`COMMIT`) to the database and `ROLLBACK` will undo all of the changes. If you issue a `SELECT` statement during the execution of your block, the results will reflect all of your changes. If you `ROLLBACK` the transaction, the changes will be lost. `PREPARE` is typically used in a situation where you want to repeatedly execute a SQL statement with different variables without incurring the SQL compilation overhead. For instance:
```
x = %sql PREPARE SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO=?
for y in ['000010','000020','000030']:
    %sql execute :x using :y
```
`EXECUTE` is used to execute a previously compiled statement. To retrieve the error codes that might be associated with any SQL call, the following variables are updated after every call: SQLCODE, SQLSTATE, and SQLERROR (the full error message retrieved from Db2). Install Db2 Python DriverIf the ibm_db driver is not installed on your system, the subsequent Db2 commands will fail. In order to install the Db2 driver, issue the following command from a Jupyter notebook cell:```!pip install --user ibm_db``` Db2 Jupyter ExtensionsThis section of code has the import statements and global variables defined for the remainder of the functions.
###Code
#
# Set up Jupyter MAGIC commands "sql".
# %sql will return results from a DB2 select statement or execute a DB2 command
#
# IBM 2021: George Baklarz
# Version 2021-07-13
#
from __future__ import print_function
from IPython.display import HTML as pHTML, Image as pImage, display as pdisplay, Javascript as Javascript
from IPython.core.magic import (Magics, magics_class, line_magic,
cell_magic, line_cell_magic, needs_local_scope)
import ibm_db
import pandas
import ibm_db_dbi
import json
import matplotlib
import matplotlib.pyplot as plt
import getpass
import os
import pickle
import time
import sys
import re
import warnings
warnings.filterwarnings("ignore")
# Python Hack for Input between 2 and 3
try:
input = raw_input
except NameError:
pass
_settings = {
"maxrows" : 10,
"maxgrid" : 5,
"runtime" : 1,
"display" : "PANDAS",
"database" : "",
"hostname" : "localhost",
"port" : "50000",
"protocol" : "TCPIP",
"uid" : "DB2INST1",
"pwd" : "password",
"ssl" : ""
}
_environment = {
"jupyter" : True,
"qgrid" : True
}
_display = {
'fullWidthRows': True,
'syncColumnCellResize': True,
'forceFitColumns': False,
'defaultColumnWidth': 150,
'rowHeight': 28,
'enableColumnReorder': False,
'enableTextSelectionOnCells': True,
'editable': False,
'autoEdit': False,
'explicitInitialization': True,
'maxVisibleRows': 5,
'minVisibleRows': 5,
'sortable': True,
'filterable': False,
'highlightSelectedCell': False,
'highlightSelectedRow': True
}
# Connection settings for statements
_connected = False
_hdbc = None
_hdbi = None
_stmt = []
_stmtID = []
_stmtSQL = []
_vars = {}
_macros = {}
_flags = []
_debug = False
# Db2 Error Messages and Codes
sqlcode = 0
sqlstate = "0"
sqlerror = ""
sqlelapsed = 0
# Check to see if QGrid is installed
try:
import qgrid
qgrid.set_defaults(grid_options=_display)
except:
_environment['qgrid'] = False
# Check if we are running in iPython or Jupyter
try:
if (get_ipython().config == {}):
_environment['jupyter'] = False
_environment['qgrid'] = False
else:
_environment['jupyter'] = True
except:
_environment['jupyter'] = False
_environment['qgrid'] = False
###Output
_____no_output_____
###Markdown
OptionsThere are four options that can be set with the **`%sql`** command. These options are shown below with the default value shown in parenthesis.- **`MAXROWS n (10)`** - The maximum number of rows that will be displayed before summary information is shown. If the answer set is less than this number of rows, it will be completely shown on the screen. If the answer set is larger than this amount, only the first 5 rows and last 5 rows of the answer set will be displayed. If you want to display a very large answer set, you may want to consider using the grid option `-grid` to display the results in a scrollable table. If you really want to show all results then setting MAXROWS to -1 will return all output.- **`MAXGRID n (5)`** - The maximum size of a grid display. When displaying a result set in a grid (`-grid`), the default size of the display window is 5 rows. You can set this to a larger size so that more rows are shown on the screen. Note that the minimum size always remains at 5 which means that if the system is unable to display your maximum row size it will reduce the table display until it fits.- **`DISPLAY PANDAS | GRID (PANDAS)`** - Display the results as a PANDAS dataframe (default) or as a scrollable GRID.- **`RUNTIME n (1)`** - When using the timer option on a SQL statement, the statement will execute for **`n`** number of seconds. The result that is returned is the number of times the SQL statement executed rather than the execution time of the statement. The default value for runtime is one second, so if the SQL is very complex you will need to increase the run time.- **`LIST`** - Display the current settings.To set an option use the following syntax:```%sql option option_name value option_name value ....```The following example sets all options:```%sql option maxrows 100 runtime 2 display grid maxgrid 10```The values will **not** be saved between Jupyter notebook sessions. If you need to retrieve the current option values, use the LIST command as the only argument:```%sql option list```
###Code
def setOptions(inSQL):
global _settings, _display
cParms = inSQL.split()
cnt = 0
while cnt < len(cParms):
if cParms[cnt].upper() == 'MAXROWS':
if cnt+1 < len(cParms):
try:
_settings["maxrows"] = int(cParms[cnt+1])
except Exception as err:
errormsg("Invalid MAXROWS value provided.")
pass
cnt = cnt + 1
else:
errormsg("No maximum rows specified for the MAXROWS option.")
return
elif cParms[cnt].upper() == 'MAXGRID':
if cnt+1 < len(cParms):
try:
maxgrid = int(cParms[cnt+1])
if (maxgrid <= 5): # Minimum window size is 5
maxgrid = 5
_display["maxVisibleRows"] = int(cParms[cnt+1])
try:
import qgrid
qgrid.set_defaults(grid_options=_display)
except:
_environment['qgrid'] = False
except Exception as err:
errormsg("Invalid MAXGRID value provided.")
pass
cnt = cnt + 1
else:
errormsg("No maximum rows specified for the MAXROWS option.")
return
elif cParms[cnt].upper() == 'RUNTIME':
if cnt+1 < len(cParms):
try:
_settings["runtime"] = int(cParms[cnt+1])
except Exception as err:
errormsg("Invalid RUNTIME value provided.")
pass
cnt = cnt + 1
else:
errormsg("No value provided for the RUNTIME option.")
return
elif cParms[cnt].upper() == 'DISPLAY':
if cnt+1 < len(cParms):
if (cParms[cnt+1].upper() == 'GRID'):
_settings["display"] = 'GRID'
elif (cParms[cnt+1].upper() == 'PANDAS'):
_settings["display"] = 'PANDAS'
else:
errormsg("Invalid DISPLAY value provided.")
cnt = cnt + 1
else:
errormsg("No value provided for the DISPLAY option.")
return
elif (cParms[cnt].upper() == 'LIST'):
print("(MAXROWS) Maximum number of rows displayed: " + str(_settings["maxrows"]))
print("(MAXGRID) Maximum grid display size: " + str(_settings["maxgrid"]))
print("(RUNTIME) How many seconds to a run a statement for performance testing: " + str(_settings["runtime"]))
print("(DISPLAY) Use PANDAS or GRID display format for output: " + _settings["display"])
return
else:
cnt = cnt + 1
save_settings()
###Output
_____no_output_____
###Markdown
SQL HelpThe calling format of this routine is:```sqlhelp()```This code displays help related to the %sql magic command. This help is displayed when you issue a %sql or %%sql command by itself, or use the %sql -h flag.
###Code
def sqlhelp():
global _environment
if (_environment["jupyter"] == True):
sd = '<td style="text-align:left;">'
ed1 = '</td>'
ed2 = '</td>'
sh = '<th style="text-align:left;">'
eh1 = '</th>'
eh2 = '</th>'
sr = '<tr>'
er = '</tr>'
helpSQL = """
<h3>SQL Options</h3>
<p>The following options are available as part of a SQL statement. The options are always preceded with a
minus sign (i.e. -q).
<table>
{sr}
{sh}Option{eh1}{sh}Description{eh2}
{er}
{sr}
{sd}a, all{ed1}{sd}Return all rows in answer set and do not limit display{ed2}
{er}
{sr}
{sd}d{ed1}{sd}Change SQL delimiter to "@" from ";"{ed2}
{er}
{sr}
{sd}e, echo{ed1}{sd}Echo the SQL command that was generated after macro and variable substituion.{ed2}
{er}
{sr}
{sd}h, help{ed1}{sd}Display %sql help information.{ed2}
{er}
{sr}
{sd}j{ed1}{sd}Create a pretty JSON representation. Only the first column is formatted{ed2}
{er}
{sr}
{sd}json{ed1}{sd}Retrieve the result set as a JSON record{ed2}
{er}
{sr}
{sd}pb, bar{ed1}{sd}Plot the results as a bar chart{ed2}
{er}
{sr}
{sd}pl, line{ed1}{sd}Plot the results as a line chart{ed2}
{er}
{sr}
{sd}pp, pie{ed1}{sd}Plot Pie: Plot the results as a pie chart{ed2}
{er}
{sr}
{sd}q, quiet{ed1}{sd}Quiet results - no answer set or messages returned from the function{ed2}
{er}
{sr}
{sd}r, array{ed1}{sd}Return the result set as an array of values{ed2}
{er}
{sr}
{sd}sampledata{ed1}{sd}Create and load the EMPLOYEE and DEPARTMENT tables{ed2}
{er}
{sr}
{sd}t,time{ed1}{sd}Time the following SQL statement and return the number of times it executes in 1 second{ed2}
{er}
{sr}
{sd}grid{ed1}{sd}Display the results in a scrollable grid{ed2}
{er}
</table>
"""
else:
helpSQL = """
SQL Options
The following options are available as part of a SQL statement. Options are always
preceded with a minus sign (i.e. -q).
Option Description
a, all Return all rows in answer set and do not limit display
d Change SQL delimiter to "@" from ";"
e, echo Echo the SQL command that was generated after substitution
h, help Display %sql help information
j Create a pretty JSON representation. Only the first column is formatted
json Retrieve the result set as a JSON record
pb, bar Plot the results as a bar chart
pl, line Plot the results as a line chart
pp, pie Plot Pie: Plot the results as a pie chart
q, quiet Quiet results - no answer set or messages returned from the function
r, array Return the result set as an array of values
sampledata Create and load the EMPLOYEE and DEPARTMENT tables
t,time Time the SQL statement and return the execution count per second
grid Display the results in a scrollable grid
"""
helpSQL = helpSQL.format(**locals())
if (_environment["jupyter"] == True):
pdisplay(pHTML(helpSQL))
else:
print(helpSQL)
###Output
_____no_output_____
###Markdown
Connection HelpThe calling format of this routine is:```connected_help()```This code displays help related to the CONNECT command. This code is displayed when you issue a %sql CONNECT command with no arguments or you are running a SQL statement and there isn't any connection to a database yet.
###Code
def connected_help():
sd = '<td style="text-align:left;">'
ed = '</td>'
sh = '<th style="text-align:left;">'
eh = '</th>'
sr = '<tr>'
er = '</tr>'
if (_environment['jupyter'] == True):
helpConnect = """
<h3>Connecting to Db2</h3>
<p>The CONNECT command has the following format:
<p>
<pre>
%sql CONNECT TO <database> USER <userid> USING <password|?> HOST <ip address> PORT <port number> <SSL>
%sql CONNECT CREDENTIALS <varname>
%sql CONNECT CLOSE
%sql CONNECT RESET
%sql CONNECT PROMPT - use this to be prompted for values
</pre>
<p>
If you use a "?" for the password field, the system will prompt you for a password. This avoids typing the
password as clear text on the screen. If a connection is not successful, the system will print the error
message associated with the connect request.
<p>
The <b>CREDENTIALS</b> option allows you to use credentials that are supplied by Db2 on Cloud instances.
The credentials can be supplied as a variable and if successful, the variable will be saved to disk
for future use. If you create another notebook and use the identical syntax, if the variable
is not defined, the contents on disk will be used as the credentials. You should assign the
credentials to a variable that represents the database (or schema) that you are communicating with.
Using familiar names makes it easier to remember the credentials when connecting.
<p>
<b>CONNECT CLOSE</b> will close the current connection, but will not reset the database parameters. This means that
if you issue the CONNECT command again, the system should be able to reconnect you to the database.
<p>
<b>CONNECT RESET</b> will close the current connection and remove any information on the connection. You will need
to issue a new CONNECT statement with all of the connection information.
<p>
If the connection is successful, the parameters are saved on your system and will be used the next time you
run an SQL statement, or when you issue the %sql CONNECT command with no parameters.
<p>If you issue CONNECT RESET, all of the current values will be deleted and you will need to
issue a new CONNECT statement.
<p>A CONNECT command without any parameters will attempt to re-connect to the previous database you
were using. If the connection could not be established, the program will prompt you for
the values. To cancel the connection attempt, enter a blank value for any of the values. The connection
panel will request the following values in order to connect to Db2:
<table>
{sr}
{sh}Setting{eh}
{sh}Description{eh}
{er}
{sr}
{sd}Database{ed}{sd}Database name you want to connect to.{ed}
{er}
{sr}
{sd}Hostname{ed}
{sd}Use localhost if Db2 is running on your own machine, but this can be an IP address or host name.{ed}
{er}
{sr}
{sd}PORT{ed}
{sd}The port to use for connecting to Db2. This is usually 50000.{ed}
{er}
{sr}
{sd}SSL{ed}
{sd}If you are connecting to a secure port (50001) with SSL then you must include this keyword in the connect string.{ed}
{er}
{sr}
{sd}Userid{ed}
{sd}The userid to use when connecting (usually DB2INST1){ed}
{er}
{sr}
{sd}Password{ed}
{sd}No password is provided so you have to enter a value{ed}
{er}
</table>
"""
else:
helpConnect = """\
Connecting to Db2
The CONNECT command has the following format:
%sql CONNECT TO database USER userid USING password | ?
HOST ip address PORT port number SSL
%sql CONNECT CREDENTIALS varname
%sql CONNECT CLOSE
%sql CONNECT RESET
If you use a "?" for the password field, the system will prompt you for a password.
This avoids typing the password as clear text on the screen. If a connection is
not successful, the system will print the error message associated with the connect
request.
The CREDENTIALS option allows you to use credentials that are supplied by Db2 on
Cloud instances. The credentials can be supplied as a variable and if successful,
the variable will be saved to disk for future use. If you create another notebook
and use the identical syntax, if the variable is not defined, the contents on disk
will be used as the credentials. You should assign the credentials to a variable
that represents the database (or schema) that you are communicating with. Using
familiar names makes it easier to remember the credentials when connecting.
CONNECT CLOSE will close the current connection, but will not reset the database
parameters. This means that if you issue the CONNECT command again, the system
should be able to reconnect you to the database.
CONNECT RESET will close the current connection and remove any information on the
connection. You will need to issue a new CONNECT statement with all of the connection
information.
If the connection is successful, the parameters are saved on your system and will be
used the next time you run an SQL statement, or when you issue the %sql CONNECT
command with no parameters. If you issue CONNECT RESET, all of the current values
will be deleted and you will need to issue a new CONNECT statement.
A CONNECT command without any parameters will attempt to re-connect to the previous
database you were using. If the connection could not be established, the program will
prompt you for the values. To cancel the connection attempt, enter a blank value for
any of the values. The connection panel will request the following values in order
to connect to Db2:
Setting Description
Database Database name you want to connect to
Hostname Use localhost if Db2 is running on your own machine, but this can
be an IP address or host name.
PORT The port to use for connecting to Db2. This is usually 50000.
Userid The userid to use when connecting (usually DB2INST1)
Password No password is provided so you have to enter a value
SSL Include this keyword to indicate you are connecting via SSL (usually port 50001)
"""
helpConnect = helpConnect.format(**locals())
if (_environment['jupyter'] == True):
pdisplay(pHTML(helpConnect))
else:
print(helpConnect)
###Output
_____no_output_____
###Markdown
Prompt for Connection InformationIf you are running an SQL statement and have not yet connected to a database, the %sql command will prompt you for connection information. In order to connect to a database, you must supply the database name, the host name (IP address or name), the port number, the userid, the password, and whether the port is secure (SSL). The routine is called without any parameters:```connected_prompt()```
###Code
# Prompt for Connection information
def connected_prompt():
global _settings
_database = ''
_hostname = ''
_port = ''
_uid = ''
_pwd = ''
_ssl = ''
print("Enter the database connection details (Any empty value will cancel the connection)")
_database = input("Enter the database name: ");
if (_database.strip() == ""): return False
_hostname = input("Enter the HOST IP address or symbolic name: ");
if (_hostname.strip() == ""): return False
_port = input("Enter the PORT number: ");
if (_port.strip() == ""): return False
_ssl = input("Is this a secure (SSL) port (y or n)");
if (_ssl.strip() == ""): return False
if (_ssl == "n"):
_ssl = ""
else:
_ssl = "Security=SSL;"
_uid = input("Enter Userid on the DB2 system: ").upper();
if (_uid.strip() == ""): return False
_pwd = getpass.getpass("Password [password]: ");
if (_pwd.strip() == ""): return False
_settings["database"] = _database.strip()
_settings["hostname"] = _hostname.strip()
_settings["port"] = _port.strip()
_settings["uid"] = _uid.strip()
_settings["pwd"] = _pwd.strip()
_settings["ssl"] = _ssl.strip()
_settings["maxrows"] = 10
_settings["maxgrid"] = 5
_settings["runtime"] = 1
return True
# Split port and IP addresses
def split_string(in_port,splitter=":"):
# Split input into an IP address and Port number
global _settings
checkports = in_port.split(splitter)
ip = checkports[0]
if (len(checkports) > 1):
port = checkports[1]
else:
port = None
return ip, port
###Output
_____no_output_____
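###Markdown
A quick sketch of how split_string behaves (the host values are hypothetical):
```python
ip, port = split_string("192.168.1.10:50001")  # ip -> "192.168.1.10", port -> "50001"
ip, port = split_string("localhost")           # ip -> "localhost", port -> None
```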
###Markdown
Connect Syntax Parser

The parseConnect routine is used to parse the CONNECT command that the user issued within the %sql command. The format of the command is:```parseConnect(inSQL)```The inSQL string contains the CONNECT keyword with some additional parameters. The format of the CONNECT command is one of:
```
CONNECT RESET
CONNECT CLOSE
CONNECT CREDENTIALS <varname>
CONNECT TO database USER userid USING password HOST hostname PORT portnumber
```
If you have credentials available from Db2 on Cloud, place the contents of the credentials into a variable and then use the `CONNECT CREDENTIALS <varname>` syntax to connect to the database. In addition, supplying a question mark (?) for the password will result in the program prompting you for the password rather than having it as clear text in your scripts. When all of the information in the command has been checked, the db2_doConnect function is called to actually connect to the database.
###Code
# Parse the CONNECT statement and execute if possible
def parseConnect(inSQL,local_ns):
global _settings, _connected
_connected = False
cParms = inSQL.split()
cnt = 0
_settings["ssl"] = ""
while cnt < len(cParms):
if cParms[cnt].upper() == 'TO':
if cnt+1 < len(cParms):
_settings["database"] = cParms[cnt+1].upper()
cnt = cnt + 1
else:
errormsg("No database specified in the CONNECT statement")
return
elif cParms[cnt].upper() == "SSL":
_settings["ssl"] = "Security=SSL;"
cnt = cnt + 1
elif cParms[cnt].upper() == 'CREDENTIALS':
if cnt+1 < len(cParms):
credentials = cParms[cnt+1]
tempid = eval(credentials,local_ns)
if (isinstance(tempid,dict) == False):
errormsg("The CREDENTIALS variable (" + credentials + ") does not contain a valid Python dictionary (JSON object)")
return
if (tempid == None):
fname = credentials + ".pickle"
try:
with open(fname,'rb') as f:
_id = pickle.load(f)
except:
errormsg("Unable to find credential variable or file.")
return
else:
_id = tempid
try:
_settings["database"] = _id["db"]
_settings["hostname"] = _id["hostname"]
_settings["port"] = _id["port"]
_settings["uid"] = _id["username"]
_settings["pwd"] = _id["password"]
try:
fname = credentials + ".pickle"
with open(fname,'wb') as f:
pickle.dump(_id,f)
except:
errormsg("Failed trying to write Db2 Credentials.")
return
except:
errormsg("Credentials file is missing information. db/hostname/port/username/password required.")
return
else:
errormsg("No Credentials name supplied")
return
cnt = cnt + 1
elif cParms[cnt].upper() == 'USER':
if cnt+1 < len(cParms):
_settings["uid"] = cParms[cnt+1].upper()
cnt = cnt + 1
else:
errormsg("No userid specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'USING':
if cnt+1 < len(cParms):
_settings["pwd"] = cParms[cnt+1]
if (_settings["pwd"] == '?'):
_settings["pwd"] = getpass.getpass("Password [password]: ") or "password"
cnt = cnt + 1
else:
errormsg("No password specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'HOST':
if cnt+1 < len(cParms):
hostport = cParms[cnt+1].upper()
ip, port = split_string(hostport)
if (port == None): port = "50000"
_settings["port"] = port
_settings["hostname"] = ip
cnt = cnt + 1
else:
errormsg("No hostname specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'PORT':
if cnt+1 < len(cParms):
_settings["port"] = cParms[cnt+1].upper()
cnt = cnt + 1
else:
errormsg("No port specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'PROMPT':
if (connected_prompt() == False):
print("Connection canceled.")
return
else:
cnt = cnt + 1
elif cParms[cnt].upper() in ('CLOSE','RESET') :
try:
result = ibm_db.close(_hdbc)
_hdbi.close()
except:
pass
success("Connection closed.")
if cParms[cnt].upper() == 'RESET':
_settings["database"] = ''
return
else:
cnt = cnt + 1
_ = db2_doConnect()
###Output
_____no_output_____
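###Markdown
A minimal sketch of driving the parser directly (the database values are hypothetical; the %sql magic normally supplies the user namespace as local_ns). The question mark makes the parser prompt for the password with getpass, and db2_doConnect is then called to attempt the connection:
```python
# Hypothetical connection values; prompts for the password, then tries to connect
parseConnect("CONNECT TO SAMPLE USER DB2INST1 USING ? HOST localhost PORT 50000", locals())
```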
###Markdown
Connect to Db2

The db2_doConnect routine is called when a connection needs to be established to a Db2 database. The command does not require any parameters since it relies on the settings variable which contains all of the information it needs to connect to a Db2 database.```db2_doConnect()```There are 4 additional variables that are used throughout the routines to stay connected with the Db2 database. These variables are:

- hdbc - The connection handle to the database
- hstmt - A statement handle used for executing SQL statements
- connected - A flag that tells the program whether or not we are currently connected to a database
- runtime - Used to tell %sql the length of time (default 1 second) to run a statement when timing it

The only database driver that is used in this program is the IBM DB2 ODBC DRIVER. This driver needs to be loaded on the system that is connecting to Db2. The Jupyter notebook that is built by this system installs the driver for you, so you shouldn't have to do anything other than build the container. If the connection is successful, the connected flag is set to True. Any subsequent %sql call will check to see if you are connected and initiate another prompted connection if you do not have a connection to a database.
###Code
def db2_doConnect():
global _hdbc, _hdbi, _connected, _runtime
global _settings
if _connected == False:
if len(_settings["database"]) == 0:
return False
dsn = (
"DRIVER={{IBM DB2 ODBC DRIVER}};"
"DATABASE={0};"
"HOSTNAME={1};"
"PORT={2};"
"PROTOCOL=TCPIP;"
"UID={3};"
"PWD={4};{5}").format(_settings["database"],
_settings["hostname"],
_settings["port"],
_settings["uid"],
_settings["pwd"],
_settings["ssl"])
# Get a database handle (hdbc) and a statement handle (hstmt) for subsequent access to DB2
try:
_hdbc = ibm_db.connect(dsn, "", "")
except Exception as err:
db2_error(False,True) # errormsg(str(err))
_connected = False
_settings["database"] = ''
return False
try:
_hdbi = ibm_db_dbi.Connection(_hdbc)
except Exception as err:
db2_error(False,True) # errormsg(str(err))
_connected = False
_settings["database"] = ''
return False
_connected = True
# Save the values for future use
save_settings()
success("Connection successful.")
return True
###Output
_____no_output_____
###Markdown
Load/Save SettingsThere are two routines that load and save settings between Jupyter notebooks. These routines are called without any parameters.```load_settings() save_settings()```There is a global structure called settings which contains the following fields:```_settings = { "maxrows" : 10, "maxgrid" : 5, "runtime" : 1, "display" : "TEXT", "database" : "", "hostname" : "localhost", "port" : "50000", "protocol" : "TCPIP", "uid" : "DB2INST1", "pwd" : "password"}```The information in the settings structure is used for re-connecting to a database when you start up a Jupyter notebook. When the session is established for the first time, the load_settings() function is called to get the contents of the pickle file (db2connect.pickle, a Jupyter session file) that will be used for the first connection to the database. Whenever a new connection is made, the file is updated with the save_settings() function.
###Code
def load_settings():
# This routine will load the settings from the previous session if they exist
global _settings
fname = "db2connect.pickle"
try:
with open(fname,'rb') as f:
_settings = pickle.load(f)
# Reset runtime to 1 since it would be unexpected to keep the same value between connections
_settings["runtime"] = 1
_settings["maxgrid"] = 5
except:
pass
return
def save_settings():
# This routine will save the current settings if they exist
global _settings
fname = "db2connect.pickle"
try:
with open(fname,'wb') as f:
pickle.dump(_settings,f)
except:
errormsg("Failed trying to write Db2 Configuration Information.")
return
###Output
_____no_output_____
###Markdown
Error and Message FunctionsThere are three types of messages that are thrown by the %db2 magic command. The first routine will print out a success message with no special formatting:```success(message)```The second message is used for displaying an error message that is not associated with a SQL error. This type of error message is surrounded with a red box to highlight the problem. Note that the success message has code that has been commented out that could also show a successful return code with a green box. ```errormsg(message)```The final error message is based on an error occurring in the SQL code that was executed. This code will take the message returned from the ibm_db interface and parse it to return only the error message portion (and not all of the wrapper code from the driver).```db2_error(quiet,connect=False)```The quiet flag is passed to the db2_error routine so that messages can be suppressed if the user wishes to ignore them with the -q flag. A good example of this is dropping a table that does not exist. We know that an error will be thrown so we can ignore it. The information that the db2_error routine gets is from the stmt_errormsg() function from within the ibm_db driver. The db2_error function should only be called after a SQL failure, otherwise there will be no diagnostic information returned from stmt_errormsg(). If the connect flag is True, the routine will get the SQLSTATE and SQLCODE from the connection error message rather than a statement error message.
###Code
def db2_error(quiet,connect=False):
global sqlerror, sqlcode, sqlstate, _environment
try:
if (connect == False):
errmsg = ibm_db.stmt_errormsg().replace('\r',' ')
errmsg = errmsg[errmsg.rfind("]")+1:].strip()
else:
errmsg = ibm_db.conn_errormsg().replace('\r',' ')
errmsg = errmsg[errmsg.rfind("]")+1:].strip()
sqlerror = errmsg
msg_start = errmsg.find("SQLSTATE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlstate = errmsg[msg_start+9:msg_end]
else:
sqlstate = "0"
msg_start = errmsg.find("SQLCODE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlcode = errmsg[msg_start+8:msg_end]
try:
sqlcode = int(sqlcode)
except:
pass
else:
sqlcode = 0
except:
errmsg = "Unknown error."
sqlcode = -99999
sqlstate = "-99999"
sqlerror = errmsg
return
msg_start = errmsg.find("SQLSTATE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlstate = errmsg[msg_start+9:msg_end]
else:
sqlstate = "0"
msg_start = errmsg.find("SQLCODE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlcode = errmsg[msg_start+8:msg_end]
try:
sqlcode = int(sqlcode)
except:
pass
else:
sqlcode = 0
if quiet == True: return
if (errmsg == ""): return
html = '<p><p style="border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html+errmsg+"</p>"))
else:
print(errmsg)
# Print out an error message
def errormsg(message):
global _environment
if (message != ""):
html = '<p><p style="border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html + message + "</p>"))
else:
print(message)
def success(message):
if (message != ""):
print(message)
return
def debug(message,error=False):
global _environment
if (_environment["jupyter"] == True):
spacer = "<br>" + " "
else:
spacer = "\n "
if (message != ""):
lines = message.split('\n')
msg = ""
indent = 0
for line in lines:
delta = line.count("(") - line.count(")")
if (msg == ""):
msg = line
indent = indent + delta
else:
if (delta < 0): indent = indent + delta
msg = msg + spacer * (indent*2) + line
if (delta > 0): indent = indent + delta
if (indent < 0): indent = 0
if (error == True):
html = '<p><pre style="font-family: monospace; border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
else:
html = '<p><pre style="font-family: monospace; border:2px; border-style:solid; border-color:#008000; background-color:#e6ffe6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html + msg + "</pre></p>"))
else:
print(msg)
return
###Output
_____no_output_____
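###Markdown
The SQLSTATE/SQLCODE extraction can be sanity-checked on its own with a hypothetical message laid out the way the driver formats it (the real text comes from ibm_db.stmt_errormsg or ibm_db.conn_errormsg):
```python
# Hypothetical driver message; mirrors the string slicing used in db2_error above
errmsg = 'Table "X" is an undefined name. SQLSTATE=42704 SQLCODE=-204'

start = errmsg.find("SQLSTATE=")
sqlstate = errmsg[start + 9:errmsg.find(" ", start)]                        # -> "42704"

start = errmsg.find("SQLCODE=")
end = errmsg.find(" ", start)
sqlcode = int(errmsg[start + 8:] if end == -1 else errmsg[start + 8:end])   # -> -204
```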
###Markdown
Macro Processor

A macro is used to generate SQL to be executed by overriding or creating a new keyword. For instance, the base `%sql` command does not understand the `LIST TABLES` command which is usually used in conjunction with the `CLP` processor. Rather than specifically code this in the base `db2.ipynb` file, we can create a macro that can execute this code for us. There are four routines that deal with macros:

- checkMacro is used to find the macro calls in a string. All macros are sent to parseMacro for checking.
- runMacro will evaluate the macro and return the string to the parser.
- subvars is used to track the variables used as part of a macro call.
- setMacro is used to catalog a macro.

Set Macro

This code will catalog a macro call.
###Code
def setMacro(inSQL,parms):
global _macros
names = parms.split()
if (len(names) < 2):
errormsg("No command name supplied.")
return None
macroName = names[1].upper()
_macros[macroName] = inSQL
return
###Output
_____no_output_____
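###Markdown
A minimal sketch of registering a macro (the LIST macro and its SQL body are illustrative, not shipped with the notebook; the second token of the option string becomes the macro name):
```python
setMacro("SELECT TABNAME FROM SYSCAT.TABLES WHERE TABSCHEMA = UCASE('{1}')", "macro LIST")
# _macros["LIST"] now holds the macro body; {1} will be replaced by the first argument
```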
###Markdown
Check MacroThis code will check to see if there is a macro command in the SQL. It will take the SQL that is supplied and strip out three values: the first and second keywords, and the remainder of the parameters.For instance, consider the following statement:```CREATE DATABASE GEORGE options....```The name of the macro that we want to run is called `CREATE`. We know that there is a SQL command called `CREATE` but this code will call the macro first to see if needs to run any special code. For instance, `CREATE DATABASE` is not part of the `db2.ipynb` syntax, but we can add it in by using a macro.The check macro logic will strip out the subcommand (`DATABASE`) and place the remainder of the string after `DATABASE` in options.
###Code
def checkMacro(in_sql):
global _macros
if (len(in_sql) == 0): return(in_sql) # Nothing to do
tokens = parseArgs(in_sql,None) # Take the string and reduce into tokens
macro_name = tokens[0].upper() # Uppercase the name of the token
if (macro_name not in _macros):
return(in_sql) # No macro by this name so just return the string
result = runMacro(_macros[macro_name],in_sql,tokens) # Execute the macro using the tokens we found
return(result) # Runmacro will either return the original SQL or the new one
###Output
_____no_output_____
###Markdown
Split AssignmentThis routine will return the name of a variable and its value when the format is x=y. If y is enclosed in quotes, the quotes are removed.
###Code
def splitassign(arg):
var_name = "null"
var_value = "null"
arg = arg.strip()
eq = arg.find("=")
if (eq != -1):
var_name = arg[:eq].strip()
temp_value = arg[eq+1:].strip()
if (temp_value != ""):
ch = temp_value[0]
if (ch in ["'",'"']):
if (temp_value[-1:] == ch):
var_value = temp_value[1:-1]
else:
var_value = temp_value
else:
var_value = temp_value
else:
var_value = arg
return var_name, var_value
###Output
_____no_output_____
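###Markdown
Illustrative calls (the option names are hypothetical):
```python
splitassign("maxrows='25'")   # -> ("maxrows", "25")    surrounding quotes are removed
splitassign("runtime=5")      # -> ("runtime", "5")
splitassign("PROMPT")         # -> ("null", "PROMPT")   no assignment present
```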
###Markdown
Parse Args The commands that are used in the macros need to be parsed into their separate tokens. The tokens are separated by blanks, and strings that are enclosed in quotes are kept together.
###Code
def parseArgs(argin,_vars):
quoteChar = ""
inQuote = False
inArg = True
args = []
arg = ''
for ch in argin.lstrip():
if (inQuote == True):
if (ch == quoteChar):
inQuote = False
arg = arg + ch #z
else:
arg = arg + ch
elif (ch == "\"" or ch == "\'"): # Do we have a quote
quoteChar = ch
arg = arg + ch #z
inQuote = True
elif (ch == " "):
if (arg != ""):
arg = subvars(arg,_vars)
args.append(arg)
else:
args.append("null")
arg = ""
else:
arg = arg + ch
if (arg != ""):
arg = subvars(arg,_vars)
args.append(arg)
return(args)
###Output
_____no_output_____
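###Markdown
A quick sketch of the tokenizer (quoted strings keep their quotes and are not split on blanks; passing a variable dictionary would also substitute {n} references):
```python
parseArgs('list tables for schema "MY SCHEMA"', None)
# -> ['list', 'tables', 'for', 'schema', '"MY SCHEMA"']
```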
###Markdown
Run MacroThis code will execute the body of the macro and return the results for that macro call.
###Code
def runMacro(script,in_sql,tokens):
result = ""
runIT = True
code = script.split("\n")
level = 0
runlevel = [True,False,False,False,False,False,False,False,False,False]
ifcount = 0
_vars = {}
for i in range(0,len(tokens)):
vstr = str(i)
_vars[vstr] = tokens[i]
if (len(tokens) == 0):
_vars["argc"] = "0"
else:
_vars["argc"] = str(len(tokens)-1)
for line in code:
line = line.strip()
if (line == "" or line == "\n"): continue
if (line[0] == "#"): continue # A comment line starts with a # in the first position of the line
args = parseArgs(line,_vars) # Get all of the arguments
if (args[0] == "if"):
ifcount = ifcount + 1
if (runlevel[level] == False): # You can't execute this statement
continue
level = level + 1
if (len(args) < 4):
print("Macro: Incorrect number of arguments for the if clause.")
return in_sql
arg1 = args[1]
arg2 = args[3]
if (len(arg2) > 2):
ch1 = arg2[0]
ch2 = arg2[-1:]
if (ch1 in ['"',"'"] and ch1 == ch2):
arg2 = arg2[1:-1].strip()
op = args[2]
if (op in ["=","=="]):
if (arg1 == arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<=","=<"]):
if (arg1 <= arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in [">=","=>"]):
if (arg1 >= arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<>","!="]):
if (arg1 != arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<"]):
if (arg1 < arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in [">"]):
if (arg1 > arg2):
runlevel[level] = True
else:
runlevel[level] = False
else:
print("Macro: Unknown comparison operator in the if statement:" + op)
continue
elif (args[0] in ["exit","echo"] and runlevel[level] == True):
msg = ""
for msgline in args[1:]:
if (msg == ""):
msg = subvars(msgline,_vars)
else:
msg = msg + " " + subvars(msgline,_vars)
if (msg != ""):
if (args[0] == "echo"):
debug(msg,error=False)
else:
debug(msg,error=True)
if (args[0] == "exit"): return ''
elif (args[0] == "pass" and runlevel[level] == True):
pass
elif (args[0] == "var" and runlevel[level] == True):
value = ""
for val in args[2:]:
if (value == ""):
value = subvars(val,_vars)
else:
value = value + " " + subvars(val,_vars)
value.strip()
_vars[args[1]] = value
elif (args[0] == 'else'):
if (ifcount == level):
runlevel[level] = not runlevel[level]
elif (args[0] == 'return' and runlevel[level] == True):
return(result)
elif (args[0] == "endif"):
ifcount = ifcount - 1
if (ifcount < level):
level = level - 1
if (level < 0):
print("Macro: Unmatched if/endif pairs.")
return ''
else:
if (runlevel[level] == True):
if (result == ""):
result = subvars(line,_vars)
else:
result = result + "\n" + subvars(line,_vars)
return(result)
###Output
_____no_output_____
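###Markdown
Putting the macro routines together, a sketch of defining and expanding a hypothetical LIST macro (neither the macro nor the SQL body ships with the notebook):
```python
setMacro("""
if {argc} <> 1
   exit The LIST macro requires exactly one schema name
endif
SELECT TABNAME FROM SYSCAT.TABLES WHERE TABSCHEMA = UCASE('{1}')
""", "macro LIST")

checkMacro("LIST DB2INST1")
# -> "SELECT TABNAME FROM SYSCAT.TABLES WHERE TABSCHEMA = UCASE('DB2INST1')"
```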
###Markdown
Substitute VarsThis routine is used by the runMacro program to track variables that are used within Macros. These are kept separate from the rest of the code.
###Code
def subvars(script,_vars):
if (_vars == None): return script
remainder = script
result = ""
done = False
while done == False:
bv = remainder.find("{")
if (bv == -1):
done = True
continue
ev = remainder.find("}")
if (ev == -1):
done = True
continue
result = result + remainder[:bv]
vvar = remainder[bv+1:ev]
remainder = remainder[ev+1:]
upper = False
allvars = False
if (vvar[0] == "^"):
upper = True
vvar = vvar[1:]
elif (vvar[0] == "*"):
vvar = vvar[1:]
allvars = True
else:
pass
if (vvar in _vars):
if (upper == True):
items = _vars[vvar].upper()
elif (allvars == True):
try:
iVar = int(vvar)
except:
return(script)
items = ""
sVar = str(iVar)
while sVar in _vars:
if (items == ""):
items = _vars[sVar]
else:
items = items + " " + _vars[sVar]
iVar = iVar + 1
sVar = str(iVar)
else:
items = _vars[vvar]
else:
if (allvars == True):
items = ""
else:
items = "null"
result = result + items
if (remainder != ""):
result = result + remainder
return(result)
###Output
_____no_output_____
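###Markdown
Illustrative substitutions (the variable dictionary is hypothetical):
```python
_vars = {"0": "list", "1": "employee", "2": "department"}
subvars("SELECT * FROM {^1}", _vars)   # -> "SELECT * FROM EMPLOYEE"      (^ upper-cases the value)
subvars("Args: {*1}", _vars)           # -> "Args: employee department"   (* expands this and the following arguments)
subvars("Owner: {schema}", _vars)      # -> "Owner: null"                 (unknown names become null)
```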
###Markdown
SQL TimerThe calling format of this routine is:```count = sqlTimer(hdbc, runtime, inSQL)```This code runs the SQL string multiple times for one second (by default). The accuracy of the clock is not that great when you are running just one statement, so instead this routine will run the code multiple times for a second to give you an execution count. If you need to run the code for more than one second, the runtime value needs to be set to the number of seconds you want the code to run. The return result is always the number of times that the code executed. Note that the program will skip reading the data if it is a SELECT statement so it doesn't include fetch time for the answer set.
###Code
def sqlTimer(hdbc, runtime, inSQL):
count = 0
t_end = time.time() + runtime
while time.time() < t_end:
try:
stmt = ibm_db.exec_immediate(hdbc,inSQL)
if (stmt == False):
db2_error(flag(["-q","-quiet"]))
return(-1)
ibm_db.free_result(stmt)
except Exception as err:
db2_error(False)
return(-1)
count = count + 1
return(count)
###Output
_____no_output_____
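###Markdown
A sketch only, since it needs a live connection handle established by db2_doConnect:
```python
# Requires an open connection (_hdbc); SYSIBM.SYSDUMMY1 is a standard one-row Db2 table
if _connected:
    count = sqlTimer(_hdbc, 1, "SELECT 1 FROM SYSIBM.SYSDUMMY1")
    print(f"Statement executed {count} times in roughly 1 second")
```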
###Markdown
Split Args

This routine takes as an argument a string and then splits the arguments according to the following logic:

* If the string starts with a `(` character, it will check the last character in the string and see if it is a `)` and then remove those characters
* Every parameter is separated by a comma `,` and commas within quotes are ignored
* Each parameter returned will have three values returned - one for the value itself, an indicator which will be either True if it was quoted, or False if not, and True or False if it is numeric.

Example:``` "abcdef",abcdef,456,"856"```The following values would be returned:```[abcdef,True,False],[abcdef,False,False],[456,False,True],[856,True,False]```Any quoted string will be False for numeric. The way that the parameters are handled is up to the calling program. However, in the case of Db2, the quoted strings must be in single quotes so any quoted parameter using the double quotes `"` must be wrapped with single quotes. There is always a possibility that a string contains single quotes (i.e. O'Connor) so any substituted text should use `''` so that Db2 can properly interpret the string. This routine does not adjust the strings with quotes, and depends on the variable substitution routine to do that.
###Code
def splitargs(arguments):
import types
# String the string and remove the ( and ) characters if they at the beginning and end of the string
results = []
step1 = arguments.strip()
if (len(step1) == 0): return(results) # Not much to do here - no args found
if (step1[0] == '('):
if (step1[-1:] == ')'):
step2 = step1[1:-1]
step2 = step2.strip()
else:
step2 = step1
else:
step2 = step1
# Now we have a string without brackets. Start scanning for commas
quoteCH = ""
pos = 0
arg = ""
args = []
while pos < len(step2):
ch = step2[pos]
if (quoteCH == ""): # Are we in a quote?
if (ch in ('"',"'")): # Check to see if we are starting a quote
quoteCH = ch
arg = arg + ch
pos += 1
elif (ch == ","): # Are we at the end of a parameter?
arg = arg.strip()
args.append(arg)
arg = ""
inarg = False
pos += 1
else: # Continue collecting the string
arg = arg + ch
pos += 1
else:
if (ch == quoteCH): # Are we at the end of a quote?
arg = arg + ch # Add the quote to the string
pos += 1 # Increment past the quote
quoteCH = "" # Stop quote checking (maybe!)
else:
pos += 1
arg = arg + ch
if (quoteCH != ""): # So we didn't end our string
arg = arg.strip()
args.append(arg)
elif (arg != ""): # Something left over as an argument
arg = arg.strip()
args.append(arg)
else:
pass
results = []
for arg in args:
result = []
if (len(arg) > 0):
if (arg[0] in ('"',"'")):
value = arg[1:-1]
isString = True
isNumber = False
else:
isString = False
isNumber = False
try:
value = eval(arg)
if (type(value) == int):
isNumber = True
elif (isinstance(value,float) == True):
isNumber = True
else:
value = arg
except:
value = arg
else:
value = ""
isString = False
isNumber = False
result = [value,isString,isNumber]
results.append(result)
return results
###Output
_____no_output_____
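###Markdown
The example from the description, run through the routine (note that a quoted number stays a string, and unquoted tokens that are not numbers fall back to the literal string):
```python
splitargs('("abcdef",abcdef,456,"856")')
# -> [['abcdef', True, False], ['abcdef', False, False], [456, False, True], ['856', True, False]]
```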
###Markdown
DataFrame Table CreationWhen using dataframes, it is sometimes useful to use the definition of the dataframe to create a Db2 table. The format of the command is:```%sql using <df> create table <name> [with data | columns asis]```The `<df>` value is the name of the dataframe, not the contents (`:df`). The definition of the data types in the dataframe will be used to create the Db2 table using typical Db2 data types rather than generic CLOBs and FLOAT for numeric objects. The two options are used to handle how the conversion is done. If you supply `with data`, the contents of the df will be inserted into the table, otherwise the table is defined only. The column names will be uppercased and special characters (like blanks) will be replaced with underscores. If `columns asis` is specified, the column names will remain the same as in the dataframe, with each name using quotes to guarantee the same spelling as in the DF. If the table already exists, the command will not run and an error message will be produced.
###Code
def createDF(hdbc,sqlin,local_ns):
import datetime
import ibm_db
global sqlcode
# Strip apart the command into tokens based on spaces
tokens = sqlin.split()
token_count = len(tokens)
if (token_count < 5): # Not enough parameters
errormsg("Insufficient arguments for USING command. %sql using df create table name [with data | columns asis]")
return
keyword_command = tokens[0].upper()
dfName = tokens[1]
keyword_create = tokens[2].upper()
keyword_table = tokens[3].upper()
table = tokens[4]
if (keyword_create not in ("CREATE","REPLACE") or keyword_table != "TABLE"):
errormsg("Incorrect syntax: %sql using <df> create table <name> [options]")
return
if (token_count % 2 != 1):
errormsg("Insufficient arguments for USING command. %sql using df create table name [with data | columns asis | keep float]")
return
flag_withdata = False
flag_asis = False
flag_float = False
flag_integer = False
limit = -1
if (keyword_create == "REPLACE"):
%sql -q DROP TABLE {table}
for token_idx in range(5,token_count,2):
option_key = tokens[token_idx].upper()
option_val = tokens[token_idx+1].upper()
if (option_key == "WITH" and option_val == "DATA"):
flag_withdata = True
elif (option_key == "COLUMNS" and option_val == "ASIS"):
flag_asis = True
elif (option_key == "KEEP" and option_val == "FLOAT64"):
flag_float = True
elif (option_key == "KEEP" and option_val == "INT64"):
flag_integer = True
elif (option_key == "LIMIT"):
if (option_val.isnumeric() == False):
errormsg("The LIMIT must be a valid number from -1 (unlimited) to the maximun number of rows to insert")
return
limit = int(option_val)
else:
errormsg("Invalid options. Must be either WITH DATA | COLUMNS ASIS | KEEP FLOAT64 | KEEP FLOAT INT64")
return
dfName = tokens[1]
if (dfName not in local_ns):
errormsg("The variable ({dfName}) does not exist in the local variable list.")
return
try:
df_value = eval(dfName,None,local_ns) # globals()[varName] # eval(varName)
except:
errormsg("The variable ({dfName}) does not contain a value.")
return
if (isinstance(df_value,pandas.DataFrame) == False): # Not a Pandas dataframe
errormsg("The variable ({dfName}) is not a Pandas dataframe.")
return
sql = []
columns = dict(df_value.dtypes)
sql.append(f'CREATE TABLE {table} (')
datatypes = []
comma = ""
for column in columns:
datatype = columns[column]
if (datatype == "object"):
datapoint = df_value[column][0]
if (isinstance(datapoint,datetime.datetime)):
type = "TIMESTAMP"
elif (isinstance(datapoint,datetime.time)):
type = "TIME"
elif (isinstance(datapoint,datetime.date)):
type = "DATE"
elif (isinstance(datapoint,float)):
if (flag_float == True):
type = "FLOAT"
else:
type = "DECFLOAT"
elif (isinstance(datapoint,int)):
if (flag_integer == True):
type = "BIGINT"
else:
type = "INTEGER"
elif (isinstance(datapoint,str)):
maxlength = df_value[column].apply(str).apply(len).max()
type = f"VARCHAR({maxlength})"
else:
type = "CLOB"
elif (datatype == "int64"):
if (flag_integer == True):
type = "BIGINT"
else:
type = "INTEGER"
elif (datatype == "float64"):
if (flag_float == True):
type = "FLOAT"
else:
type = "DECFLOAT"
elif (datatype == "datetime64"):
type = "TIMESTAMP"
elif (datatype == "bool"):
type = "BINARY"
else:
type = "CLOB"
datatypes.append(type)
if (flag_asis == False):
if (isinstance(column,str) == False):
column = str(column)
identifier = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_"
column_name = column.strip().upper()
new_name = ""
for ch in column_name:
if (ch not in identifier):
new_name = new_name + "_"
else:
new_name = new_name + ch
new_name = new_name.lstrip('_').rstrip('_')
if (new_name == "" or new_name[0] not in "ABCDEFGHIJKLMNOPQRSTUVWXYZ"):
new_name = f'"{column}"'
else:
new_name = f'"{column}"'
sql.append(f" {new_name} {type}")
sql.append(")")
sqlcmd = ""
for i in range(0,len(sql)):
if (i > 0 and i < len(sql)-2):
comma = ","
else:
comma = ""
sqlcmd = "{}\n{}{}".format(sqlcmd,sql[i],comma)
print(sqlcmd)
%sql {sqlcmd}
if (sqlcode != 0):
return
if (flag_withdata == True):
autocommit = ibm_db.autocommit(hdbc)
ibm_db.autocommit(hdbc,False)
row_count = 0
insert_sql = ""
rows, cols = df_value.shape
for row in range(0,rows):
insert_row = ""
for col in range(0, cols):
value = df_value.iloc[row][col]
if (datatypes[col] == "CLOB" or "VARCHAR" in datatypes[col]):
value = str(value)
value = addquotes(value,True)
elif (datatypes[col] in ("TIME","DATE","TIMESTAMP")):
value = str(value)
value = addquotes(value,True)
elif (datatypes[col] in ("INTEGER","DECFLOAT","FLOAT","BINARY")):
strvalue = str(value)
if ("NAN" in strvalue.upper()):
value = "NULL"
else:
value = str(value)
value = addquotes(value,True)
if (insert_row == ""):
insert_row = f"{value}"
else:
insert_row = f"{insert_row},{value}"
if (insert_sql == ""):
insert_sql = f"INSERT INTO {table} VALUES ({insert_row})"
else:
insert_sql = f"{insert_sql},({insert_row})"
row_count += 1
if (row_count % 1000 == 0 or row_count == limit):
result = ibm_db.exec_immediate(hdbc, insert_sql) # Run it
if (result == False): # Error executing the code
db2_error(False)
return
ibm_db.commit(hdbc)
print(f"\r{row_count} of {rows} rows inserted.",end="")
insert_sql = ""
if (row_count == limit):
break
if (insert_sql != ""):
result = ibm_db.exec_immediate(hdbc, insert_sql) # Run it
if (result == False): # Error executing the code
db2_error(False)
ibm_db.commit(hdbc)
ibm_db.autocommit(hdbc,autocommit)
print("\nInsert completed.")
return
###Output
_____no_output_____
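###Markdown
For example, a hypothetical dataframe named sales_df could be materialized as a Db2 table directly from a cell (all names are illustrative):```%sql using sales_df create table SALES_STAGING with data limit 5000```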
###Markdown
SQL ParserThe calling format of this routine is:```sql_cmd, encoded_sql = sqlParser(sql_input)```This code will look at the SQL string that has been passed to it and parse it into two values:- sql_cmd: First command in the list (so this may not be the actual SQL command)- encoded_sql: SQL with the parameters removed if there are any (replaced with ? markers)
###Code
def sqlParser(sqlin,local_ns):
sql_cmd = ""
encoded_sql = sqlin
firstCommand = "(?:^\s*)([a-zA-Z]+)(?:\s+.*|$)"
findFirst = re.match(firstCommand,sqlin)
if (findFirst == None): # We did not find a match so we just return the empty string
return sql_cmd, encoded_sql
cmd = findFirst.group(1)
sql_cmd = cmd.upper()
#
# Scan the input string looking for variables in the format :var. If no : is found just return.
# Var must be alpha+number+_ to be valid
#
if (':' not in sqlin): # A quick check to see if parameters are in here, but not fool-proof!
return sql_cmd, encoded_sql
inVar = False
inQuote = ""
varName = ""
encoded_sql = ""
STRING = 0
NUMBER = 1
LIST = 2
RAW = 3
PANDAS = 5
for ch in sqlin:
if (inVar == True): # We are collecting the name of a variable
if (ch.upper() in "@_ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789[]"):
varName = varName + ch
continue
else:
if (varName == ""):
encoded_sql = encoded_sql + ":"
elif (varName[0] in ('[',']')):
encoded_sql = encoded_sql + ":" + varName
else:
if (ch == '.'): # If the variable name is stopped by a period, assume no quotes are used
flag_quotes = False
else:
flag_quotes = True
varValue, varType = getContents(varName,flag_quotes,local_ns)
if (varType != PANDAS and varValue == None):
encoded_sql = encoded_sql + ":" + varName
else:
if (varType == STRING):
encoded_sql = encoded_sql + varValue
elif (varType == NUMBER):
encoded_sql = encoded_sql + str(varValue)
elif (varType == RAW):
encoded_sql = encoded_sql + varValue
elif (varType == PANDAS):
insertsql = ""
coltypes = varValue.dtypes
rows, cols = varValue.shape
for row in range(0,rows):
insertrow = ""
for col in range(0, cols):
value = varValue.iloc[row][col]
if (coltypes[col] == "object"):
value = str(value)
value = addquotes(value,True)
else:
strvalue = str(value)
if ("NAN" in strvalue.upper()):
value = "NULL"
if (insertrow == ""):
insertrow = f"{value}"
else:
insertrow = f"{insertrow},{value}"
if (insertsql == ""):
insertsql = f"({insertrow})"
else:
insertsql = f"{insertsql},({insertrow})"
encoded_sql = encoded_sql + insertsql
elif (varType == LIST):
start = True
for v in varValue:
if (start == False):
encoded_sql = encoded_sql + ","
if (isinstance(v,int) == True): # Integer value
encoded_sql = encoded_sql + str(v)
elif (isinstance(v,float) == True):
encoded_sql = encoded_sql + str(v)
else:
flag_quotes = True
try:
if (v.find('0x') == 0): # Just guessing this is a hex value at beginning
encoded_sql = encoded_sql + v
else:
encoded_sql = encoded_sql + addquotes(v,flag_quotes) # String
except:
encoded_sql = encoded_sql + addquotes(str(v),flag_quotes)
start = False
encoded_sql = encoded_sql + ch
varName = ""
inVar = False
elif (inQuote != ""):
encoded_sql = encoded_sql + ch
if (ch == inQuote): inQuote = ""
elif (ch in ("'",'"')):
encoded_sql = encoded_sql + ch
inQuote = ch
elif (ch == ":"): # This might be a variable
varName = ""
inVar = True
else:
encoded_sql = encoded_sql + ch
if (inVar == True):
varValue, varType = getContents(varName,True,local_ns) # We assume the end of a line is quoted
if (varType != PANDAS and varValue == None):
encoded_sql = encoded_sql + ":" + varName
else:
if (varType == STRING):
encoded_sql = encoded_sql + varValue
elif (varType == NUMBER):
encoded_sql = encoded_sql + str(varValue)
elif (varType == PANDAS):
insertsql = ""
coltypes = varValue.dtypes
rows, cols = varValue.shape
for row in range(0,rows):
insertrow = ""
for col in range(0, cols):
value = varValue.iloc[row][col]
if (coltypes[col] == "object"):
value = str(value)
value = addquotes(value,True)
else:
strvalue = str(value)
if ("NAN" in strvalue.upper()):
value = "NULL"
if (insertrow == ""):
insertrow = f"{value}"
else:
insertrow = f"{insertrow},{value}"
if (insertsql == ""):
insertsql = f"({insertrow})"
else:
insertsql = f"{insertsql},({insertrow})"
encoded_sql = encoded_sql + insertsql
elif (varType == LIST):
flag_quotes = True
start = True
for v in varValue:
if (start == False):
encoded_sql = encoded_sql + ","
if (isinstance(v,int) == True): # Integer value
encoded_sql = encoded_sql + str(v)
elif (isinstance(v,float) == True):
encoded_sql = encoded_sql + str(v)
else:
try:
if (v.find('0x') == 0): # Just guessing this is a hex value
encoded_sql = encoded_sql + v
else:
encoded_sql = encoded_sql + addquotes(v,flag_quotes) # String
except:
encoded_sql = encoded_sql + addquotes(str(v),flag_quotes)
start = False
return sql_cmd, encoded_sql
###Output
_____no_output_____
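###Markdown
A sketch of the substitution for a simple string variable (assumes the supporting routines above are defined; in normal use the %sql magic passes the user namespace):
```python
empno = '000010'
cmd, encoded = sqlParser("SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO = :empno", locals())
# cmd     -> "SELECT"
# encoded -> "SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO = '000010'"
```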
###Markdown
Variable Contents FunctionThe calling format of this routine is:```value = getContents(varName,quote,name_space)```This code will take the name of a variable as input and return the contents of that variable. If the variable is not found then the program will return None, which is the equivalent of empty or null. Note that this function looks at the global variable pool for Python, so it is possible that the wrong version of the variable is returned if it is used in different functions. For this reason, any variables used in SQL statements should use a unique naming convention if possible. The other thing that this function does is replace single quotes with two quotes. The reason for doing this is that Db2 will convert two single quotes into one quote when dealing with strings. This avoids problems when dealing with text that contains multiple quotes within the string. Note that this substitution is done only for single quote characters since the double quote character is used by Db2 for naming columns that are case sensitive or contain special characters. If the quote value is True, the field will have quotes around it. The name_space contains the variables that are currently registered in Python.
###Code
def getContents(varName,flag_quotes,local_ns):
#
# Get the contents of the variable name that is passed to the routine. Only simple
# variables are checked, i.e. arrays and lists are not parsed
#
STRING = 0
NUMBER = 1
LIST = 2
RAW = 3
DICT = 4
PANDAS = 5
try:
value = eval(varName,None,local_ns) # globals()[varName] # eval(varName)
except:
return(None,STRING)
if (isinstance(value,dict) == True): # Check to see if this is JSON dictionary
return(addquotes(value,flag_quotes),STRING)
elif(isinstance(value,list) == True): # List - tricky
return(value,LIST)
elif (isinstance(value,pandas.DataFrame) == True): # Pandas dataframe
return(value,PANDAS)
elif (isinstance(value,int) == True): # Integer value
return(value,NUMBER)
elif (isinstance(value,float) == True): # Float value
return(value,NUMBER)
else:
try:
# The pattern needs to be in the first position (0 in Python terms)
if (value.find('0x') == 0): # Just guessing this is a hex value
return(value,RAW)
else:
return(addquotes(value,flag_quotes),STRING) # String
except:
return(addquotes(str(value),flag_quotes),RAW)
###Output
_____no_output_____
###Markdown
Add QuotesQuotes are a challenge when dealing with dictionaries and Db2. Db2 wants strings delimited with single quotes, while dictionaries use double quotes. That wouldn't be a problem except that embedded single quotes within these dictionaries will cause things to fail. This routine attempts to double-quote the single quotes within the dictionary.
###Code
def addquotes(inString,flag_quotes):
if (isinstance(inString,dict) == True): # Check to see if this is JSON dictionary
serialized = json.dumps(inString)
else:
serialized = inString
# Replace single quotes with '' (two quotes) and wrap everything in single quotes
if (flag_quotes == False):
return(serialized)
else:
return("'"+serialized.replace("'","''")+"'") # Convert single quotes to two single quotes
###Output
_____no_output_____
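###Markdown
Illustrative calls:
```python
addquotes("O'Connor", True)                 # -> "'O''Connor'"   (quote doubled, wrapped in single quotes)
addquotes("O'Connor", False)                # -> "O'Connor"      (returned untouched)
addquotes({"lastname": "O'Connor"}, True)   # dictionary is JSON-serialized first, then quoted the same way
```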
###Markdown
Create the SAMPLE Database TablesThe calling format of this routine is:```db2_create_sample(quiet)```There are a lot of examples that depend on the data within the SAMPLE database. If you are running these examples and the connection is not to the SAMPLE database, then this code will create the two (EMPLOYEE, DEPARTMENT) tables that are used by most examples. If the function finds that these tables already exist, then nothing is done. If the tables are missing then they will be created with the same data as in the SAMPLE database.The quiet flag tells the program not to print any messages when the creation of the tables is complete.
###Code
def db2_create_sample(quiet):
create_department = """
BEGIN
DECLARE FOUND INTEGER;
SET FOUND = (SELECT COUNT(*) FROM SYSIBM.SYSTABLES WHERE NAME='DEPARTMENT' AND CREATOR=CURRENT USER);
IF FOUND = 0 THEN
EXECUTE IMMEDIATE('CREATE TABLE DEPARTMENT(DEPTNO CHAR(3) NOT NULL, DEPTNAME VARCHAR(36) NOT NULL,
MGRNO CHAR(6),ADMRDEPT CHAR(3) NOT NULL)');
EXECUTE IMMEDIATE('INSERT INTO DEPARTMENT VALUES
(''A00'',''SPIFFY COMPUTER SERVICE DIV.'',''000010'',''A00''),
(''B01'',''PLANNING'',''000020'',''A00''),
(''C01'',''INFORMATION CENTER'',''000030'',''A00''),
(''D01'',''DEVELOPMENT CENTER'',NULL,''A00''),
(''D11'',''MANUFACTURING SYSTEMS'',''000060'',''D01''),
(''D21'',''ADMINISTRATION SYSTEMS'',''000070'',''D01''),
(''E01'',''SUPPORT SERVICES'',''000050'',''A00''),
(''E11'',''OPERATIONS'',''000090'',''E01''),
(''E21'',''SOFTWARE SUPPORT'',''000100'',''E01''),
(''F22'',''BRANCH OFFICE F2'',NULL,''E01''),
(''G22'',''BRANCH OFFICE G2'',NULL,''E01''),
(''H22'',''BRANCH OFFICE H2'',NULL,''E01''),
(''I22'',''BRANCH OFFICE I2'',NULL,''E01''),
(''J22'',''BRANCH OFFICE J2'',NULL,''E01'')');
END IF;
END"""
%sql -d -q {create_department}
create_employee = """
BEGIN
DECLARE FOUND INTEGER;
SET FOUND = (SELECT COUNT(*) FROM SYSIBM.SYSTABLES WHERE NAME='EMPLOYEE' AND CREATOR=CURRENT USER);
IF FOUND = 0 THEN
EXECUTE IMMEDIATE('CREATE TABLE EMPLOYEE(
EMPNO CHAR(6) NOT NULL,
FIRSTNME VARCHAR(12) NOT NULL,
MIDINIT CHAR(1),
LASTNAME VARCHAR(15) NOT NULL,
WORKDEPT CHAR(3),
PHONENO CHAR(4),
HIREDATE DATE,
JOB CHAR(8),
EDLEVEL SMALLINT NOT NULL,
SEX CHAR(1),
BIRTHDATE DATE,
SALARY DECIMAL(9,2),
BONUS DECIMAL(9,2),
COMM DECIMAL(9,2)
)');
EXECUTE IMMEDIATE('INSERT INTO EMPLOYEE VALUES
(''000010'',''CHRISTINE'',''I'',''HAAS'' ,''A00'',''3978'',''1995-01-01'',''PRES '',18,''F'',''1963-08-24'',152750.00,1000.00,4220.00),
(''000020'',''MICHAEL'' ,''L'',''THOMPSON'' ,''B01'',''3476'',''2003-10-10'',''MANAGER '',18,''M'',''1978-02-02'',94250.00,800.00,3300.00),
(''000030'',''SALLY'' ,''A'',''KWAN'' ,''C01'',''4738'',''2005-04-05'',''MANAGER '',20,''F'',''1971-05-11'',98250.00,800.00,3060.00),
(''000050'',''JOHN'' ,''B'',''GEYER'' ,''E01'',''6789'',''1979-08-17'',''MANAGER '',16,''M'',''1955-09-15'',80175.00,800.00,3214.00),
(''000060'',''IRVING'' ,''F'',''STERN'' ,''D11'',''6423'',''2003-09-14'',''MANAGER '',16,''M'',''1975-07-07'',72250.00,500.00,2580.00),
(''000070'',''EVA'' ,''D'',''PULASKI'' ,''D21'',''7831'',''2005-09-30'',''MANAGER '',16,''F'',''2003-05-26'',96170.00,700.00,2893.00),
(''000090'',''EILEEN'' ,''W'',''HENDERSON'' ,''E11'',''5498'',''2000-08-15'',''MANAGER '',16,''F'',''1971-05-15'',89750.00,600.00,2380.00),
(''000100'',''THEODORE'' ,''Q'',''SPENSER'' ,''E21'',''0972'',''2000-06-19'',''MANAGER '',14,''M'',''1980-12-18'',86150.00,500.00,2092.00),
(''000110'',''VINCENZO'' ,''G'',''LUCCHESSI'' ,''A00'',''3490'',''1988-05-16'',''SALESREP'',19,''M'',''1959-11-05'',66500.00,900.00,3720.00),
(''000120'',''SEAN'' ,'' '',''O`CONNELL'' ,''A00'',''2167'',''1993-12-05'',''CLERK '',14,''M'',''1972-10-18'',49250.00,600.00,2340.00),
(''000130'',''DELORES'' ,''M'',''QUINTANA'' ,''C01'',''4578'',''2001-07-28'',''ANALYST '',16,''F'',''1955-09-15'',73800.00,500.00,1904.00),
(''000140'',''HEATHER'' ,''A'',''NICHOLLS'' ,''C01'',''1793'',''2006-12-15'',''ANALYST '',18,''F'',''1976-01-19'',68420.00,600.00,2274.00),
(''000150'',''BRUCE'' ,'' '',''ADAMSON'' ,''D11'',''4510'',''2002-02-12'',''DESIGNER'',16,''M'',''1977-05-17'',55280.00,500.00,2022.00),
(''000160'',''ELIZABETH'',''R'',''PIANKA'' ,''D11'',''3782'',''2006-10-11'',''DESIGNER'',17,''F'',''1980-04-12'',62250.00,400.00,1780.00),
(''000170'',''MASATOSHI'',''J'',''YOSHIMURA'' ,''D11'',''2890'',''1999-09-15'',''DESIGNER'',16,''M'',''1981-01-05'',44680.00,500.00,1974.00),
(''000180'',''MARILYN'' ,''S'',''SCOUTTEN'' ,''D11'',''1682'',''2003-07-07'',''DESIGNER'',17,''F'',''1979-02-21'',51340.00,500.00,1707.00),
(''000190'',''JAMES'' ,''H'',''WALKER'' ,''D11'',''2986'',''2004-07-26'',''DESIGNER'',16,''M'',''1982-06-25'',50450.00,400.00,1636.00),
(''000200'',''DAVID'' ,'' '',''BROWN'' ,''D11'',''4501'',''2002-03-03'',''DESIGNER'',16,''M'',''1971-05-29'',57740.00,600.00,2217.00),
(''000210'',''WILLIAM'' ,''T'',''JONES'' ,''D11'',''0942'',''1998-04-11'',''DESIGNER'',17,''M'',''2003-02-23'',68270.00,400.00,1462.00),
(''000220'',''JENNIFER'' ,''K'',''LUTZ'' ,''D11'',''0672'',''1998-08-29'',''DESIGNER'',18,''F'',''1978-03-19'',49840.00,600.00,2387.00),
(''000230'',''JAMES'' ,''J'',''JEFFERSON'' ,''D21'',''2094'',''1996-11-21'',''CLERK '',14,''M'',''1980-05-30'',42180.00,400.00,1774.00),
(''000240'',''SALVATORE'',''M'',''MARINO'' ,''D21'',''3780'',''2004-12-05'',''CLERK '',17,''M'',''2002-03-31'',48760.00,600.00,2301.00),
(''000250'',''DANIEL'' ,''S'',''SMITH'' ,''D21'',''0961'',''1999-10-30'',''CLERK '',15,''M'',''1969-11-12'',49180.00,400.00,1534.00),
(''000260'',''SYBIL'' ,''P'',''JOHNSON'' ,''D21'',''8953'',''2005-09-11'',''CLERK '',16,''F'',''1976-10-05'',47250.00,300.00,1380.00),
(''000270'',''MARIA'' ,''L'',''PEREZ'' ,''D21'',''9001'',''2006-09-30'',''CLERK '',15,''F'',''2003-05-26'',37380.00,500.00,2190.00),
(''000280'',''ETHEL'' ,''R'',''SCHNEIDER'' ,''E11'',''8997'',''1997-03-24'',''OPERATOR'',17,''F'',''1976-03-28'',36250.00,500.00,2100.00),
(''000290'',''JOHN'' ,''R'',''PARKER'' ,''E11'',''4502'',''2006-05-30'',''OPERATOR'',12,''M'',''1985-07-09'',35340.00,300.00,1227.00),
(''000300'',''PHILIP'' ,''X'',''SMITH'' ,''E11'',''2095'',''2002-06-19'',''OPERATOR'',14,''M'',''1976-10-27'',37750.00,400.00,1420.00),
(''000310'',''MAUDE'' ,''F'',''SETRIGHT'' ,''E11'',''3332'',''1994-09-12'',''OPERATOR'',12,''F'',''1961-04-21'',35900.00,300.00,1272.00),
(''000320'',''RAMLAL'' ,''V'',''MEHTA'' ,''E21'',''9990'',''1995-07-07'',''FIELDREP'',16,''M'',''1962-08-11'',39950.00,400.00,1596.00),
(''000330'',''WING'' ,'' '',''LEE'' ,''E21'',''2103'',''2006-02-23'',''FIELDREP'',14,''M'',''1971-07-18'',45370.00,500.00,2030.00),
(''000340'',''JASON'' ,''R'',''GOUNOT'' ,''E21'',''5698'',''1977-05-05'',''FIELDREP'',16,''M'',''1956-05-17'',43840.00,500.00,1907.00),
(''200010'',''DIAN'' ,''J'',''HEMMINGER'' ,''A00'',''3978'',''1995-01-01'',''SALESREP'',18,''F'',''1973-08-14'',46500.00,1000.00,4220.00),
(''200120'',''GREG'' ,'' '',''ORLANDO'' ,''A00'',''2167'',''2002-05-05'',''CLERK '',14,''M'',''1972-10-18'',39250.00,600.00,2340.00),
(''200140'',''KIM'' ,''N'',''NATZ'' ,''C01'',''1793'',''2006-12-15'',''ANALYST '',18,''F'',''1976-01-19'',68420.00,600.00,2274.00),
(''200170'',''KIYOSHI'' ,'' '',''YAMAMOTO'' ,''D11'',''2890'',''2005-09-15'',''DESIGNER'',16,''M'',''1981-01-05'',64680.00,500.00,1974.00),
(''200220'',''REBA'' ,''K'',''JOHN'' ,''D11'',''0672'',''2005-08-29'',''DESIGNER'',18,''F'',''1978-03-19'',69840.00,600.00,2387.00),
(''200240'',''ROBERT'' ,''M'',''MONTEVERDE'',''D21'',''3780'',''2004-12-05'',''CLERK '',17,''M'',''1984-03-31'',37760.00,600.00,2301.00),
(''200280'',''EILEEN'' ,''R'',''SCHWARTZ'' ,''E11'',''8997'',''1997-03-24'',''OPERATOR'',17,''F'',''1966-03-28'',46250.00,500.00,2100.00),
(''200310'',''MICHELLE'' ,''F'',''SPRINGER'' ,''E11'',''3332'',''1994-09-12'',''OPERATOR'',12,''F'',''1961-04-21'',35900.00,300.00,1272.00),
(''200330'',''HELENA'' ,'' '',''WONG'' ,''E21'',''2103'',''2006-02-23'',''FIELDREP'',14,''F'',''1971-07-18'',35370.00,500.00,2030.00),
(''200340'',''ROY'' ,''R'',''ALONZO'' ,''E21'',''5698'',''1997-07-05'',''FIELDREP'',16,''M'',''1956-05-17'',31840.00,500.00,1907.00)');
END IF;
END"""
%sql -d -q {create_employee}
if (quiet == False): success("Sample tables [EMPLOYEE, DEPARTMENT] created.")
###Output
_____no_output_____
###Markdown
Check OptionThis function will return the original string with the option removed, and a flag of true or false indicating whether the option was found.```args, flag = checkOption(option_string, option, false_value, true_value)```Options are specified with a -x where x is the character that we are searching for. It may actually be more than one character long like -pb/-pi/etc... The false and true values are optional. By default these are the boolean values of T/F, but for some options it could be a character string like ';' versus '@' for delimiters.
###Code
def checkOption(args_in, option, vFalse=False, vTrue=True):
args_out = args_in.strip()
found = vFalse
if (args_out != ""):
if (args_out.find(option) >= 0):
args_out = args_out.replace(option," ")
args_out = args_out.strip()
found = vTrue
return args_out, found
###Output
_____no_output_____
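###Markdown
Illustrative calls (the second one shows the optional false/true values being used for a delimiter):
```python
args, quiet = checkOption("-q DROP TABLE TEST", "-q")
# args -> "DROP TABLE TEST", quiet -> True

args, delim = checkOption("SELECT 1 FROM SYSIBM.SYSDUMMY1", "-d", ";", "@")
# args unchanged, delim -> ";" because -d was not present
```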
###Markdown
Plot DataThis function will plot the data that is returned from the answer set. The type of plot is determined by the flag supplied on the %sql line: -pb/-bar for a bar chart, -pp/-pie for a pie chart, and -pl/-line for a line chart.```plotData(hdbi, sql)```The hdbi is the ibm_db_sa handle that is used by pandas dataframes to run the sql, and sql is the statement whose answer set will be plotted.
###Code
def plotData(hdbi, sql):
try:
df = pandas.read_sql(sql,hdbi)
except Exception as err:
db2_error(False)
return
if df.empty:
errormsg("No results returned")
return
col_count = len(df.columns)
if flag(["-pb","-bar"]): # Plot 1 = bar chart
if (col_count in (1,2,3)):
if (col_count == 1):
df.index = df.index + 1
_ = df.plot(kind='bar');
_ = plt.plot();
elif (col_count == 2):
xlabel = df.columns.values[0]
ylabel = df.columns.values[1]
df.plot(kind='bar',x=xlabel,y=ylabel);
_ = plt.plot();
else:
values = df.columns.values[2]
columns = df.columns.values[0]
index = df.columns.values[1]
pivoted = pandas.pivot_table(df, values=values, columns=columns, index=index)
_ = pivoted.plot.bar();
else:
errormsg("Can't determine what columns to plot")
return
elif flag(["-pp","-pie"]): # Plot 2 = pie chart
if (col_count in (1,2)):
if (col_count == 1):
df.index = df.index + 1
yname = df.columns.values[0]
_ = df.plot(kind='pie',y=yname);
else:
xlabel = df.columns.values[0]
xname = df[xlabel].tolist()
yname = df.columns.values[1]
_ = df.plot(kind='pie',y=yname,labels=xname);
plt.show();
else:
errormsg("Can't determine what columns to plot")
return
elif flag(["-pl","-line"]): # Plot 3 = line chart
if (col_count in (1,2,3)):
if (col_count == 1):
df.index = df.index + 1
_ = df.plot(kind='line');
elif (col_count == 2):
xlabel = df.columns.values[0]
ylabel = df.columns.values[1]
_ = df.plot(kind='line',x=xlabel,y=ylabel) ;
else:
values = df.columns.values[2]
columns = df.columns.values[0]
index = df.columns.values[1]
pivoted = pandas.pivot_table(df, values=values, columns=columns, index=index)
_ = pivoted.plot();
plt.show();
else:
errormsg("Can't determine what columns to plot")
return
else:
return
###Output
_____no_output_____
###Markdown
Find a ProcedureThis routine will check to see if a procedure exists with the SCHEMA/NAME (or just NAME if no schema is supplied) and returns the number of answer sets returned. Possible values are 0, 1 (or greater) or None. If None is returned then we can't find the procedure anywhere.
###Code
def findProc(procname):
global _hdbc, _hdbi, _connected, _runtime
# Split the procedure name into schema.procname if appropriate
upper_procname = procname.upper()
schema, proc = split_string(upper_procname,".") # Expect schema.procname
if (proc == None):
proc = schema
# Call ibm_db.procedures to see if the procedure does exist
schema = "%"
try:
stmt = ibm_db.procedures(_hdbc, None, schema, proc)
if (stmt == False): # Error executing the code
errormsg("Procedure " + procname + " not found in the system catalog.")
return None
result = ibm_db.fetch_tuple(stmt)
resultsets = result[5]
if (resultsets >= 1): resultsets = 1
return resultsets
except Exception as err:
errormsg("Procedure " + procname + " not found in the system catalog.")
return None
###Output
_____no_output_____
###Markdown
Parse Call ArgumentsThis code will parse a SQL call name(parm1,...) and return the name and the parameters in the call.
###Code
def parseCallArgs(macro):
quoteChar = ""
inQuote = False
inParm = False
ignore = False
name = ""
parms = []
parm = ''
sqlin = macro.replace("\n","")
sqlin.lstrip()
for ch in sqlin:
if (inParm == False):
# We hit a blank in the name, so ignore everything after the procedure name until a ( is found
if (ch == " "):
ignore = True
elif (ch == "("): # Now we have parameters to send to the stored procedure
inParm = True
else:
if (ignore == False): name = name + ch # The name of the procedure (and no blanks)
else:
if (inQuote == True):
if (ch == quoteChar):
inQuote = False
else:
parm = parm + ch
elif (ch in ("\"","\'","[")): # Do we have a quote
if (ch == "["):
quoteChar = "]"
else:
quoteChar = ch
inQuote = True
elif (ch == ")"):
if (parm != ""):
parms.append(parm)
parm = ""
break
elif (ch == ","):
if (parm != ""):
parms.append(parm)
else:
parms.append("null")
parm = ""
else:
parm = parm + ch
if (inParm == True):
if (parm != ""):
parms.append(parm)
return(name,parms)
###Output
_____no_output_____
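###Markdown
A quick sketch with a hypothetical procedure call (quotes are stripped from quoted parameters, and variable markers are passed through untouched):
```python
parseCallArgs("EMPLOYEE_DETAILS('000010',:deptno,null)")
# -> ("EMPLOYEE_DETAILS", ["000010", ":deptno", "null"])
```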
###Markdown
Get ColumnsGiven a statement handle, determine what the column names are or the data types.
###Code
def getColumns(stmt):
columns = []
types = []
colcount = 0
try:
colname = ibm_db.field_name(stmt,colcount)
coltype = ibm_db.field_type(stmt,colcount)
while (colname != False):
columns.append(colname)
types.append(coltype)
colcount += 1
colname = ibm_db.field_name(stmt,colcount)
coltype = ibm_db.field_type(stmt,colcount)
return columns,types
except Exception as err:
db2_error(False)
return None
###Output
_____no_output_____
###Markdown
Call a ProcedureThe CALL statement is used for execution of a stored procedure. The format of the CALL statement is:```CALL PROC_NAME(x,y,z,...)```Procedures allow for the return of answer sets (cursors) as well as changing the contents of the parameters being passed to the procedure. In this implementation, the CALL function is limited to returning one answer set (or nothing). If you want to use more complex stored procedures then you will have to use the native python libraries.
###Code
def parseCall(hdbc, inSQL, local_ns):
global _hdbc, _hdbi, _connected, _runtime, _environment
# Check to see if we are connected first
if (_connected == False): # Check if you are connected
db2_doConnect()
if _connected == False: return None
remainder = inSQL.strip()
procName, procArgs = parseCallArgs(remainder[5:]) # Assume that CALL ... is the format
resultsets = findProc(procName)
if (resultsets == None): return None
argvalues = []
if (len(procArgs) > 0): # We have arguments to consider
for arg in procArgs:
varname = arg
if (len(varname) > 0):
if (varname[0] == ":"):
checkvar = varname[1:]
varvalue, vartype = getContents(checkvar,True,local_ns)
if (varvalue == None):
errormsg("Variable " + checkvar + " is not defined.")
return None
argvalues.append(varvalue)
else:
if (varname.upper() == "NULL"):
argvalues.append(None)
else:
argvalues.append(varname)
else:
argvalues.append(None)
try:
if (len(procArgs) > 0):
argtuple = tuple(argvalues)
result = ibm_db.callproc(_hdbc,procName,argtuple)
stmt = result[0]
else:
result = ibm_db.callproc(_hdbc,procName)
stmt = result
if (resultsets != 0 and stmt != None):
columns, types = getColumns(stmt)
if (columns == None): return None
rows = []
rowlist = ibm_db.fetch_tuple(stmt)
while ( rowlist ) :
row = []
colcount = 0
for col in rowlist:
try:
if (types[colcount] in ["int","bigint"]):
row.append(int(col))
elif (types[colcount] in ["decimal","real"]):
row.append(float(col))
elif (types[colcount] in ["date","time","timestamp"]):
row.append(str(col))
else:
row.append(col)
except:
row.append(col)
colcount += 1
rows.append(row)
rowlist = ibm_db.fetch_tuple(stmt)
if flag(["-r","-array"]):
rows.insert(0,columns)
if len(procArgs) > 0:
allresults = []
allresults.append(rows)
for x in result[1:]:
allresults.append(x)
return allresults # rows,returned_results
else:
return rows
else:
df = pandas.DataFrame.from_records(rows,columns=columns)
if flag("-grid") or _settings['display'] == 'GRID':
if (_environment['qgrid'] == False):
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
pdisplay(df)
else:
try:
pdisplay(qgrid.show_grid(df))
except:
errormsg("Grid cannot be used to display data with duplicate column names. Use option -a or %sql OPTION DISPLAY PANDAS instead.")
return
else:
if flag(["-a","-all"]) or _settings["maxrows"] == -1 : # All of the rows
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
pdisplay(df)
else:
return df
else:
if len(procArgs) > 0:
allresults = []
for x in result[1:]:
allresults.append(x)
return allresults # rows,returned_results
else:
return None
except Exception as err:
db2_error(False)
return None
###Output
_____no_output_____
###Markdown
Parse Prepare/ExecuteThe PREPARE statement is used for repeated execution of a SQL statement. The PREPARE statement has the format:```stmt = PREPARE SELECT EMPNO FROM EMPLOYEE WHERE WORKDEPT=? AND SALARY<?```The SQL statement that you want executed is placed after the PREPARE statement with the location of variables marked with ? (parameter) markers. The variable stmt contains the prepared statement that needs to be passed to the EXECUTE statement. The EXECUTE statement has the format:```EXECUTE :x USING z, y, s ```The first variable (:x) is the name of the variable to which you assigned the results of the prepare statement. The values after the USING clause are substituted into the prepare statement where the ? markers are found. If the values in the USING clause are variable names (z, y, s), a **link** is created to these variables as part of the execute statement. If you use the variable substitution form of variable name (:z, :y, :s), the **contents** of the variable are placed into the USING clause. Normally this would not make much of a difference except when you are dealing with binary strings or JSON strings where the quote characters may cause some problems when substituted into the statement.
###Code
def parsePExec(hdbc, inSQL):
import ibm_db
global _stmt, _stmtID, _stmtSQL, sqlcode
cParms = inSQL.split()
parmCount = len(cParms)
if (parmCount == 0): return(None) # Nothing to do but this shouldn't happen
keyword = cParms[0].upper() # Upper case the keyword
if (keyword == "PREPARE"): # Prepare the following SQL
uSQL = inSQL.upper()
found = uSQL.find("PREPARE")
sql = inSQL[found+7:].strip()
try:
            pattern = r"\?\*[0-9]+"
findparm = re.search(pattern,sql)
while findparm != None:
found = findparm.group(0)
count = int(found[2:])
markers = ('?,' * count)[:-1]
sql = sql.replace(found,markers)
findparm = re.search(pattern,sql)
stmt = ibm_db.prepare(hdbc,sql) # Check error code here
if (stmt == False):
db2_error(False)
return(False)
stmttext = str(stmt).strip()
stmtID = stmttext[33:48].strip()
if (stmtID in _stmtID) == False:
_stmt.append(stmt) # Prepare and return STMT to caller
_stmtID.append(stmtID)
else:
stmtIX = _stmtID.index(stmtID)
                _stmt[stmtIX] = stmt
return(stmtID)
except Exception as err:
print(err)
db2_error(False)
return(False)
if (keyword == "EXECUTE"): # Execute the prepare statement
if (parmCount < 2): return(False) # No stmtID available
stmtID = cParms[1].strip()
if (stmtID in _stmtID) == False:
errormsg("Prepared statement not found or invalid.")
return(False)
stmtIX = _stmtID.index(stmtID)
stmt = _stmt[stmtIX]
try:
if (parmCount == 2): # Only the statement handle available
result = ibm_db.execute(stmt) # Run it
elif (parmCount == 3): # Not quite enough arguments
errormsg("Missing or invalid USING clause on EXECUTE statement.")
sqlcode = -99999
return(False)
else:
using = cParms[2].upper()
if (using != "USING"): # Bad syntax again
errormsg("Missing USING clause on EXECUTE statement.")
sqlcode = -99999
return(False)
uSQL = inSQL.upper()
found = uSQL.find("USING")
parmString = inSQL[found+5:].strip()
parmset = splitargs(parmString)
if (len(parmset) == 0):
errormsg("Missing parameters after the USING clause.")
sqlcode = -99999
return(False)
parms = []
parm_count = 0
CONSTANT = 0
VARIABLE = 1
const = [0]
const_cnt = 0
for v in parmset:
parm_count = parm_count + 1
if (v[1] == True or v[2] == True): # v[1] true if string, v[2] true if num
parm_type = CONSTANT
const_cnt = const_cnt + 1
if (v[2] == True):
if (isinstance(v[0],int) == True): # Integer value
sql_type = ibm_db.SQL_INTEGER
elif (isinstance(v[0],float) == True): # Float value
sql_type = ibm_db.SQL_DOUBLE
else:
sql_type = ibm_db.SQL_INTEGER
else:
sql_type = ibm_db.SQL_CHAR
const.append(v[0])
else:
parm_type = VARIABLE
# See if the variable has a type associated with it varname@type
varset = v[0].split("@")
parm_name = varset[0]
parm_datatype = "char"
# Does the variable exist?
if (parm_name not in globals()):
errormsg("SQL Execute parameter " + parm_name + " not found")
sqlcode = -99999
                            return(False)
if (len(varset) > 1): # Type provided
parm_datatype = varset[1]
if (parm_datatype == "dec" or parm_datatype == "decimal"):
sql_type = ibm_db.SQL_DOUBLE
elif (parm_datatype == "bin" or parm_datatype == "binary"):
sql_type = ibm_db.SQL_BINARY
elif (parm_datatype == "int" or parm_datatype == "integer"):
sql_type = ibm_db.SQL_INTEGER
else:
sql_type = ibm_db.SQL_CHAR
try:
if (parm_type == VARIABLE):
result = ibm_db.bind_param(stmt, parm_count, globals()[parm_name], ibm_db.SQL_PARAM_INPUT, sql_type)
else:
result = ibm_db.bind_param(stmt, parm_count, const[const_cnt], ibm_db.SQL_PARAM_INPUT, sql_type)
except:
result = False
if (result == False):
errormsg("SQL Bind on variable " + parm_name + " failed.")
sqlcode = -99999
                        return(False)
result = ibm_db.execute(stmt) # ,tuple(parms))
if (result == False):
errormsg("SQL Execute failed.")
return(False)
if (ibm_db.num_fields(stmt) == 0): return(True) # Command successfully completed
return(fetchResults(stmt))
except Exception as err:
db2_error(False)
return(False)
return(False)
return(False)
###Output
_____no_output_____
###Markdown
Fetch Result SetThis code will take the stmt handle and then produce a result set of rows as either an array (`-r`,`-array`) or as an array of json records (`-json`).
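As a rough illustration of the two shapes this routine produces (the rows shown are hypothetical, not real output):

```python
# -r / -array : a list of rows, with the column names added as the first entry
array_style = [
    ["EMPNO", "LASTNAME"],
    ["000010", "HAAS"],
    ["000020", "THOMPSON"],
]

# -json : a list of dictionaries, one per row, keyed by the lowercased column names
json_style = [
    {"empno": "000010", "lastname": "HAAS"},
    {"empno": "000020", "lastname": "THOMPSON"},
]
```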
###Code
def fetchResults(stmt):
global sqlcode
rows = []
columns, types = getColumns(stmt)
# By default we assume that the data will be an array
is_array = True
# Check what type of data we want returned - array or json
if (flag(["-r","-array"]) == False):
# See if we want it in JSON format, if not it remains as an array
if (flag("-json") == True):
is_array = False
# Set column names to lowercase for JSON records
if (is_array == False):
columns = [col.lower() for col in columns] # Convert to lowercase for each of access
# First row of an array has the column names in it
if (is_array == True):
rows.append(columns)
result = ibm_db.fetch_tuple(stmt)
rowcount = 0
while (result):
rowcount += 1
if (is_array == True):
row = []
else:
row = {}
colcount = 0
for col in result:
try:
if (types[colcount] in ["int","bigint"]):
if (is_array == True):
row.append(int(col))
else:
row[columns[colcount]] = int(col)
elif (types[colcount] in ["decimal","real"]):
if (is_array == True):
row.append(float(col))
else:
row[columns[colcount]] = float(col)
elif (types[colcount] in ["date","time","timestamp"]):
if (is_array == True):
row.append(str(col))
else:
row[columns[colcount]] = str(col)
else:
if (is_array == True):
row.append(col)
else:
row[columns[colcount]] = col
except:
if (is_array == True):
row.append(col)
else:
row[columns[colcount]] = col
colcount += 1
rows.append(row)
result = ibm_db.fetch_tuple(stmt)
if (rowcount == 0):
sqlcode = 100
else:
sqlcode = 0
return rows
###Output
_____no_output_____
###Markdown
Parse CommitThere are three possible COMMIT verbs that can be used:- COMMIT [WORK] - Commit the work in progress - The WORK keyword is not checked for- ROLLBACK - Roll back the unit of work- AUTOCOMMIT ON/OFF - Are statements committed on or off?The statement is passed to this routine and then checked.
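A minimal sketch of how these verbs combine into a unit of work, assuming a connection exists; the table name and values are hypothetical:

```
%sql CREATE TABLE TEST_TXN (N INT)
%sql AUTOCOMMIT OFF
%sql INSERT INTO TEST_TXN VALUES (1)
%sql ROLLBACK
%sql AUTOCOMMIT ON
%sql DROP TABLE TEST_TXN
```

With autocommit turned off, the INSERT is not made permanent until a COMMIT is issued, so the ROLLBACK discards it; turning autocommit back on restores the default statement-by-statement behaviour.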
###Code
def parseCommit(sql):
global _hdbc, _hdbi, _connected, _runtime, _stmt, _stmtID, _stmtSQL
if (_connected == False): return # Nothing to do if we are not connected
cParms = sql.split()
if (len(cParms) == 0): return # Nothing to do but this shouldn't happen
keyword = cParms[0].upper() # Upper case the keyword
if (keyword == "COMMIT"): # Commit the work that was done
try:
result = ibm_db.commit (_hdbc) # Commit the connection
if (len(cParms) > 1):
keyword = cParms[1].upper()
if (keyword == "HOLD"):
return
del _stmt[:]
del _stmtID[:]
except Exception as err:
db2_error(False)
return
if (keyword == "ROLLBACK"): # Rollback the work that was done
try:
result = ibm_db.rollback(_hdbc) # Rollback the connection
del _stmt[:]
del _stmtID[:]
except Exception as err:
db2_error(False)
return
if (keyword == "AUTOCOMMIT"): # Is autocommit on or off
if (len(cParms) > 1):
op = cParms[1].upper() # Need ON or OFF value
else:
return
try:
if (op == "OFF"):
ibm_db.autocommit(_hdbc, False)
elif (op == "ON"):
ibm_db.autocommit (_hdbc, True)
return
except Exception as err:
db2_error(False)
return
return
###Output
_____no_output_____
###Markdown
Set FlagsThis code will take the input SQL block and update the global flag list. The global flag list is just a list of options that are set at the beginning of a code block. The absence of a flag means it is false. If it exists it is true.
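For example, once the cell below has been run, a call with a hypothetical input line would behave roughly like this:

```python
remaining_sql = setFlags("-q -grid SELECT * FROM EMPLOYEE")
# _flags        -> ['-q', '-grid']
# remaining_sql -> 'SELECT * FROM EMPLOYEE'
```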
###Code
def setFlags(inSQL):
global _flags
_flags = [] # Delete all of the current flag settings
pos = 0
end = len(inSQL)-1
inFlag = False
ignore = False
outSQL = ""
flag = ""
while (pos <= end):
ch = inSQL[pos]
if (ignore == True):
outSQL = outSQL + ch
else:
if (inFlag == True):
if (ch != " "):
flag = flag + ch
else:
_flags.append(flag)
inFlag = False
else:
if (ch == "-"):
flag = "-"
inFlag = True
elif (ch == ' '):
outSQL = outSQL + ch
else:
outSQL = outSQL + ch
ignore = True
pos += 1
if (inFlag == True):
_flags.append(flag)
return outSQL
###Output
_____no_output_____
###Markdown
Check to see if flag ExistsThis function determines whether or not a flag exists in the global flag array. Absence of a value means it is false. The parameter can be a single value, or an array of values.
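Continuing the hypothetical flag list from the setFlags example above, the checks would resolve as follows:

```python
# With _flags set to ['-q', '-grid'] by setFlags():
flag("-q")            # True  - the flag was supplied
flag("-grid")         # True
flag(["-a", "-all"])  # False - neither alternative was supplied
```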
###Code
def flag(inflag):
global _flags
if isinstance(inflag,list):
for x in inflag:
if (x in _flags):
return True
return False
else:
if (inflag in _flags):
return True
else:
return False
###Output
_____no_output_____
###Markdown
Generate a list of SQL lines based on a delimiterNote that this function will make sure that quotes are properly maintained so that delimiters inside of quoted strings do not cause errors.
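An illustrative call with a hypothetical pair of statements; the semicolon inside the quoted string is not treated as a delimiter:

```python
statements = splitSQL("SELECT ';' FROM SYSIBM.SYSDUMMY1; VALUES 1", ";")
# statements -> ["SELECT ';' FROM SYSIBM.SYSDUMMY1", " VALUES 1"]
```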
###Code
def splitSQL(inputString, delimiter):
pos = 0
arg = ""
results = []
quoteCH = ""
inSQL = inputString.strip()
if (len(inSQL) == 0): return(results) # Not much to do here - no args found
while pos < len(inSQL):
ch = inSQL[pos]
pos += 1
if (ch in ('"',"'")): # Is this a quote characters?
arg = arg + ch # Keep appending the characters to the current arg
if (ch == quoteCH): # Is this quote character we are in
quoteCH = ""
elif (quoteCH == ""): # Create the quote
quoteCH = ch
else:
None
elif (quoteCH != ""): # Still in a quote
arg = arg + ch
elif (ch == delimiter): # Is there a delimiter?
results.append(arg)
arg = ""
else:
arg = arg + ch
if (arg != ""):
results.append(arg)
return(results)
###Output
_____no_output_____
###Markdown
Main %sql Magic DefinitionThe main %sql Magic logic is found in this section of code. This code will register the Magic command and allow Jupyter notebooks to interact with Db2 by using this extension.
###Code
@magics_class
class DB2(Magics):
@needs_local_scope
@line_cell_magic
def sql(self, line, cell=None, local_ns=None):
# Before we event get started, check to see if you have connected yet. Without a connection we
# can't do anything. You may have a connection request in the code, so if that is true, we run those,
# otherwise we connect immediately
# If your statement is not a connect, and you haven't connected, we need to do it for you
global _settings, _environment
global _hdbc, _hdbi, _connected, _runtime, sqlstate, sqlerror, sqlcode, sqlelapsed
# If you use %sql (line) we just run the SQL. If you use %%SQL the entire cell is run.
flag_cell = False
flag_output = False
sqlstate = "0"
sqlerror = ""
sqlcode = 0
sqlelapsed = 0
start_time = time.time()
end_time = time.time()
# Macros gets expanded before anything is done
SQL1 = setFlags(line.strip())
SQL1 = checkMacro(SQL1) # Update the SQL if any macros are in there
SQL2 = cell
if flag("-sampledata"): # Check if you only want sample data loaded
if (_connected == False):
if (db2_doConnect() == False):
errormsg('A CONNECT statement must be issued before issuing SQL statements.')
return
db2_create_sample(flag(["-q","-quiet"]))
return
if SQL1 == "?" or flag(["-h","-help"]): # Are you asking for help
sqlhelp()
return
if len(SQL1) == 0 and SQL2 == None: return # Nothing to do here
# Check for help
if SQL1.upper() == "? CONNECT": # Are you asking for help on CONNECT
connected_help()
return
sqlType,remainder = sqlParser(SQL1,local_ns) # What type of command do you have?
if (sqlType == "CONNECT"): # A connect request
parseConnect(SQL1,local_ns)
return
elif (sqlType == "USING"): # You want to use a dataframe to create a table?
createDF(_hdbc,SQL1,local_ns)
return
elif (sqlType == "DEFINE"): # Create a macro from the body
result = setMacro(SQL2,remainder)
return
elif (sqlType == "OPTION"):
setOptions(SQL1)
return
elif (sqlType == 'COMMIT' or sqlType == 'ROLLBACK' or sqlType == 'AUTOCOMMIT'):
parseCommit(remainder)
return
elif (sqlType == "PREPARE"):
pstmt = parsePExec(_hdbc, remainder)
return(pstmt)
elif (sqlType == "EXECUTE"):
result = parsePExec(_hdbc, remainder)
return(result)
elif (sqlType == "CALL"):
result = parseCall(_hdbc, remainder, local_ns)
return(result)
else:
pass
sql = SQL1
if (sql == ""): sql = SQL2
if (sql == ""): return # Nothing to do here
if (_connected == False):
if (db2_doConnect() == False):
errormsg('A CONNECT statement must be issued before issuing SQL statements.')
return
if _settings["maxrows"] == -1: # Set the return result size
pandas.reset_option('display.max_rows')
else:
pandas.options.display.max_rows = _settings["maxrows"]
runSQL = re.sub('.*?--.*$',"",sql,flags=re.M)
remainder = runSQL.replace("\n"," ")
if flag(["-d","-delim"]):
sqlLines = splitSQL(remainder,"@")
else:
sqlLines = splitSQL(remainder,";")
flag_cell = True
# For each line figure out if you run it as a command (db2) or select (sql)
for sqlin in sqlLines: # Run each command
sqlin = checkMacro(sqlin) # Update based on any macros
sqlType, sql = sqlParser(sqlin,local_ns) # Parse the SQL
if (sql.strip() == ""): continue
if flag(["-e","-echo"]): debug(sql,False)
if flag("-t"):
cnt = sqlTimer(_hdbc, _settings["runtime"], sql) # Given the sql and parameters, clock the time
if (cnt >= 0): print("Total iterations in %s second(s): %s" % (_settings["runtime"],cnt))
return(cnt)
elif flag(["-pb","-bar","-pp","-pie","-pl","-line"]): # We are plotting some results
plotData(_hdbi, sql) # Plot the data and return
return
else:
try: # See if we have an answer set
stmt = ibm_db.prepare(_hdbc,sql)
if (ibm_db.num_fields(stmt) == 0): # No, so we just execute the code
result = ibm_db.execute(stmt) # Run it
if (result == False): # Error executing the code
db2_error(flag(["-q","-quiet"]))
continue
rowcount = ibm_db.num_rows(stmt)
if (rowcount == 0 and flag(["-q","-quiet"]) == False):
errormsg("No rows found.")
continue # Continue running
elif flag(["-r","-array","-j","-json"]): # raw, json, format json
row_count = 0
resultSet = []
try:
result = ibm_db.execute(stmt) # Run it
if (result == False): # Error executing the code
db2_error(flag(["-q","-quiet"]))
return
if flag("-j"): # JSON single output
row_count = 0
json_results = []
while( ibm_db.fetch_row(stmt) ):
row_count = row_count + 1
jsonVal = ibm_db.result(stmt,0)
jsonDict = json.loads(jsonVal)
json_results.append(jsonDict)
flag_output = True
if (row_count == 0): sqlcode = 100
return(json_results)
else:
return(fetchResults(stmt))
except Exception as err:
db2_error(flag(["-q","-quiet"]))
return
else:
try:
df = pandas.read_sql(sql,_hdbi)
except Exception as err:
db2_error(False)
return
if (len(df) == 0):
sqlcode = 100
if (flag(["-q","-quiet"]) == False):
errormsg("No rows found")
continue
flag_output = True
if flag("-grid") or _settings['display'] == 'GRID': # Check to see if we can display the results
if (_environment['qgrid'] == False):
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
print(df.to_string())
else:
try:
pdisplay(qgrid.show_grid(df))
except:
errormsg("Grid cannot be used to display data with duplicate column names. Use option -a or %sql OPTION DISPLAY PANDAS instead.")
return
else:
if flag(["-a","-all"]) or _settings["maxrows"] == -1 : # All of the rows
pandas.options.display.max_rows = None
pandas.options.display.max_columns = None
return df # print(df.to_string())
else:
pandas.options.display.max_rows = _settings["maxrows"]
pandas.options.display.max_columns = None
return df # pdisplay(df) # print(df.to_string())
except:
db2_error(flag(["-q","-quiet"]))
continue # return
end_time = time.time()
sqlelapsed = end_time - start_time
if (flag_output == False and flag(["-q","-quiet"]) == False): print("Command completed.")
# Register the Magic extension in Jupyter
ip = get_ipython()
ip.register_magics(DB2)
load_settings()
success("Db2 Extensions Loaded.")
###Output
_____no_output_____
###Markdown
Pre-defined MacrosThese macros are used to simulate the LIST TABLES and DESCRIBE commands that are available from within the Db2 command line.
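Once the two macros below have been defined, they can be used much like the corresponding CLP commands (this assumes a connection to a database that contains the EMPLOYEE sample table; the schema name is illustrative):

```
%sql LIST TABLES
%sql LIST TABLES FOR SCHEMA DB2INST1
%sql DESCRIBE TABLE EMPLOYEE
```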
###Code
%%sql define LIST
#
# The LIST macro is used to list all of the tables in the current schema or for all schemas
#
var syntax Syntax: LIST TABLES [FOR ALL | FOR SCHEMA name]
#
# Only LIST TABLES is supported by this macro
#
if {^1} <> 'TABLES'
exit {syntax}
endif
#
# This SQL is a temporary table that contains the description of the different table types
#
WITH TYPES(TYPE,DESCRIPTION) AS (
VALUES
('A','Alias'),
('G','Created temporary table'),
('H','Hierarchy table'),
('L','Detached table'),
('N','Nickname'),
('S','Materialized query table'),
('T','Table'),
('U','Typed table'),
('V','View'),
('W','Typed view')
)
SELECT TABNAME, TABSCHEMA, T.DESCRIPTION FROM SYSCAT.TABLES S, TYPES T
WHERE T.TYPE = S.TYPE
#
# Case 1: No arguments - LIST TABLES
#
if {argc} == 1
AND OWNER = CURRENT USER
ORDER BY TABNAME, TABSCHEMA
return
endif
#
# Case 2: Need 3 arguments - LIST TABLES FOR ALL
#
if {argc} == 3
if {^2}&{^3} == 'FOR&ALL'
ORDER BY TABNAME, TABSCHEMA
return
endif
exit {syntax}
endif
#
# Case 3: Need FOR SCHEMA something here
#
if {argc} == 4
if {^2}&{^3} == 'FOR&SCHEMA'
AND TABSCHEMA = '{^4}'
ORDER BY TABNAME, TABSCHEMA
return
else
exit {syntax}
endif
endif
#
# Nothing matched - Error
#
exit {syntax}
%%sql define describe
#
# The DESCRIBE command can either use the syntax DESCRIBE TABLE <name> or DESCRIBE TABLE SELECT ...
#
var syntax Syntax: DESCRIBE [TABLE name | SELECT statement]
#
# Check to see what count of variables is... Must be at least 2 items DESCRIBE TABLE x or SELECT x
#
if {argc} < 2
exit {syntax}
endif
CALL ADMIN_CMD('{*0}');
###Output
_____no_output_____
###Markdown
Set the table formatting to left align a table in a cell. By default, tables are centered in a cell. Remove this cell if you don't want to change Jupyter notebook formatting for tables. In addition, we skip this code if you are running in a shell environment rather than a Jupyter notebook
###Code
#%%html
#<style>
# table {margin-left: 0 !important; text-align: left;}
#</style>
###Output
_____no_output_____
###Markdown
DB2 Jupyter Notebook ExtensionsVersion: 2021-07-13 This code is imported as a Jupyter notebook extension in any notebooks you create with DB2 code in it. Place the following line of code in any notebook that you want to use these commands with:&37;run db2.ipynbThis code defines a Jupyter/Python magic command called `%sql` which allows you to execute DB2 specific calls to the database. There are other packages available for manipulating databases, but this one has been specificallydesigned for demonstrating a number of the SQL features available in DB2.There are two ways of executing the `%sql` command. A single line SQL statement would use theline format of the magic command:%sql SELECT * FROM EMPLOYEEIf you have a large block of sql then you would place the %%sql command at the beginning of the block and thenplace the SQL statements into the remainder of the block. Using this form of the `%%sql` statement means that thenotebook cell can only contain SQL and no other statements.%%sqlSELECT * FROM EMPLOYEEORDER BY LASTNAMEYou can have multiple lines in the SQL block (`%%sql`). The default SQL delimiter is the semi-column (`;`).If you have scripts (triggers, procedures, functions) that use the semi-colon as part of the script, you will need to use the `-d` option to change the delimiter to an at "`@`" sign. %%sql -dSELECT * FROM EMPLOYEE@CREATE PROCEDURE ...@The `%sql` command allows most DB2 commands to execute and has a special version of the CONNECT statement. A CONNECT by itself will attempt to reconnect to the database using previously used settings. If it cannot connect, it will prompt the user for additional information. The CONNECT command has the following format:%sql CONNECT TO <database> USER <userid> USING <password | ?> HOST <ip address> PORT <port number>If you use a "`?`" for the password field, the system will prompt you for a password. This avoids typing the password as clear text on the screen. If a connection is not successful, the system will print the errormessage associated with the connect request.If the connection is successful, the parameters are saved on your system and will be used the next time yourun a SQL statement, or when you issue the %sql CONNECT command with no parameters. In addition to the -d option, there are a number different options that you can specify at the beginning of the SQL: - `-d, -delim` - Change SQL delimiter to "`@`" from "`;`" - `-q, -quiet` - Quiet results - no messages returned from the function - `-r, -array` - Return the result set as an array of values instead of a dataframe - `-t, -time` - Time the following SQL statement and return the number of times it executes in 1 second - `-j` - Format the first character column of the result set as a JSON record - `-json` - Return result set as an array of json records - `-a, -all` - Return all rows in answer set and do not limit display - `-grid` - Display the results in a scrollable grid - `-pb, -bar` - Plot the results as a bar chart - `-pl, -line` - Plot the results as a line chart - `-pp, -pie` - Plot the results as a pie chart - `-e, -echo` - Any macro expansions are displayed in an output box - `-sampledata` - Create and load the EMPLOYEE and DEPARTMENT tablesYou can pass python variables to the `%sql` command by using the `{}` braces with the name of thevariable inbetween. Note that you will need to place proper punctuation around the variable in the event theSQL command requires it. 
For instance, the following example will find employee '000010' in the EMPLOYEE table.empno = '000010'%sql SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO='{empno}'The other option is to use parameter markers. What you would need to do is use the name of the variable with a colon in front of it and the program will prepare the statement and then pass the variable to Db2 when the statement is executed. This allows you to create complex strings that might contain quote characters and other special characters and not have to worry about enclosing the string with the correct quotes. Note that you do not place the quotes around the variable even though it is a string.empno = '000020'%sql SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO=:empno Development SQLThe previous set of `%sql` and `%%sql` commands deals with SQL statements and commands that are run in an interactive manner. There is a class of SQL commands that are more suited to a development environment where code is iterated or requires changing input. The commands that are associated with this form of SQL are:- AUTOCOMMIT- COMMIT/ROLLBACK- PREPARE - EXECUTEAutocommit is the default manner in which SQL statements are executed. At the end of the successful completion of a statement, the results are commited to the database. There is no concept of a transaction where multiple DML/DDL statements are considered one transaction. The `AUTOCOMMIT` command allows you to turn autocommit `OFF` or `ON`. This means that the set of SQL commands run after the `AUTOCOMMIT OFF` command are executed are not commited to the database until a `COMMIT` or `ROLLBACK` command is issued.`COMMIT` (`WORK`) will finalize all of the transactions (`COMMIT`) to the database and `ROLLBACK` will undo all of the changes. If you issue a `SELECT` statement during the execution of your block, the results will reflect all of your changes. If you `ROLLBACK` the transaction, the changes will be lost.`PREPARE` is typically used in a situation where you want to repeatidly execute a SQL statement with different variables without incurring the SQL compilation overhead. For instance:```x = %sql PREPARE SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO=?for y in ['000010','000020','000030']: %sql execute :x using :y````EXECUTE` is used to execute a previously compiled statement. To retrieve the error codes that might be associated with any SQL call, the following variables are updated after every call:* SQLCODE* SQLSTATE* SQLERROR - Full error message retrieved from Db2 Install Db2 Python DriverIf the ibm_db driver is not installed on your system, the subsequent Db2 commands will fail. In order to install the Db2 driver, issue the following command from a Jupyter notebook cell:```!pip install --user ibm_db``` Db2 Jupyter ExtensionsThis section of code has the import statements and global variables defined for the remainder of the functions.
###Code
#
# Set up Jupyter MAGIC commands "sql".
# %sql will return results from a DB2 select statement or execute a DB2 command
#
# IBM 2021: George Baklarz
# Version 2021-07-13
#
from __future__ import print_function
from IPython.display import HTML as pHTML, Image as pImage, display as pdisplay, Javascript as Javascript
from IPython.core.magic import (Magics, magics_class, line_magic,
cell_magic, line_cell_magic, needs_local_scope)
import ibm_db
import pandas
import ibm_db_dbi
import json
import matplotlib
import matplotlib.pyplot as plt
import getpass
import os
import pickle
import time
import sys
import re
import warnings
warnings.filterwarnings("ignore")
# Python Hack for Input between 2 and 3
try:
input = raw_input
except NameError:
pass
_settings = {
"maxrows" : 10,
"maxgrid" : 5,
"runtime" : 1,
"display" : "PANDAS",
"database" : "",
"hostname" : "localhost",
"port" : "50000",
"protocol" : "TCPIP",
"uid" : "DB2INST1",
"pwd" : "password",
"ssl" : ""
}
_environment = {
"jupyter" : True,
"qgrid" : True
}
_display = {
'fullWidthRows': True,
'syncColumnCellResize': True,
'forceFitColumns': False,
'defaultColumnWidth': 150,
'rowHeight': 28,
'enableColumnReorder': False,
'enableTextSelectionOnCells': True,
'editable': False,
'autoEdit': False,
'explicitInitialization': True,
'maxVisibleRows': 5,
'minVisibleRows': 5,
'sortable': True,
'filterable': False,
'highlightSelectedCell': False,
'highlightSelectedRow': True
}
# Connection settings for statements
_connected = False
_hdbc = None
_hdbi = None
_stmt = []
_stmtID = []
_stmtSQL = []
_vars = {}
_macros = {}
_flags = []
_debug = False
# Db2 Error Messages and Codes
sqlcode = 0
sqlstate = "0"
sqlerror = ""
sqlelapsed = 0
# Check to see if QGrid is installed
try:
import qgrid
qgrid.set_defaults(grid_options=_display)
except:
_environment['qgrid'] = False
# Check if we are running in iPython or Jupyter
try:
if (get_ipython().config == {}):
_environment['jupyter'] = False
_environment['qgrid'] = False
else:
_environment['jupyter'] = True
except:
_environment['jupyter'] = False
_environment['qgrid'] = False
###Output
_____no_output_____
###Markdown
OptionsThere are four options that can be set with the **`%sql`** command. These options are shown below with the default value shown in parenthesis.- **`MAXROWS n (10)`** - The maximum number of rows that will be displayed before summary information is shown. If the answer set is less than this number of rows, it will be completely shown on the screen. If the answer set is larger than this amount, only the first 5 rows and last 5 rows of the answer set will be displayed. If you want to display a very large answer set, you may want to consider using the grid option `-g` to display the results in a scrollable table. If you really want to show all results then setting MAXROWS to -1 will return all output.- **`MAXGRID n (5)`** - The maximum size of a grid display. When displaying a result set in a grid `-g`, the default size of the display window is 5 rows. You can set this to a larger size so that more rows are shown on the screen. Note that the minimum size always remains at 5 which means that if the system is unable to display your maximum row size it will reduce the table display until it fits.- **`DISPLAY PANDAS | GRID (PANDAS)`** - Display the results as a PANDAS dataframe (default) or as a scrollable GRID- **`RUNTIME n (1)`** - When using the timer option on a SQL statement, the statement will execute for **`n`** number of seconds. The result that is returned is the number of times the SQL statement executed rather than the execution time of the statement. The default value for runtime is one second, so if the SQL is very complex you will need to increase the run time.- **`LIST`** - Display the current settingsTo set an option use the following syntax:```%sql option option_name value option_name value ....```The following example sets all options:```%sql option maxrows 100 runtime 2 display grid maxgrid 10```The values will **not** be saved between Jupyter notebooks sessions. If you need to retrieve the current options values, use the LIST command as the only argument:```%sql option list```
###Code
def setOptions(inSQL):
global _settings, _display
cParms = inSQL.split()
cnt = 0
while cnt < len(cParms):
if cParms[cnt].upper() == 'MAXROWS':
if cnt+1 < len(cParms):
try:
_settings["maxrows"] = int(cParms[cnt+1])
except Exception as err:
errormsg("Invalid MAXROWS value provided.")
pass
cnt = cnt + 1
else:
errormsg("No maximum rows specified for the MAXROWS option.")
return
elif cParms[cnt].upper() == 'MAXGRID':
if cnt+1 < len(cParms):
try:
maxgrid = int(cParms[cnt+1])
if (maxgrid <= 5): # Minimum window size is 5
maxgrid = 5
                    _settings["maxgrid"] = maxgrid                 # Remember the clamped value so OPTION LIST reports it
                    _display["maxVisibleRows"] = maxgrid
try:
import qgrid
qgrid.set_defaults(grid_options=_display)
except:
_environment['qgrid'] = False
except Exception as err:
errormsg("Invalid MAXGRID value provided.")
pass
cnt = cnt + 1
else:
errormsg("No maximum rows specified for the MAXROWS option.")
return
elif cParms[cnt].upper() == 'RUNTIME':
if cnt+1 < len(cParms):
try:
_settings["runtime"] = int(cParms[cnt+1])
except Exception as err:
errormsg("Invalid RUNTIME value provided.")
pass
cnt = cnt + 1
else:
errormsg("No value provided for the RUNTIME option.")
return
elif cParms[cnt].upper() == 'DISPLAY':
if cnt+1 < len(cParms):
if (cParms[cnt+1].upper() == 'GRID'):
_settings["display"] = 'GRID'
elif (cParms[cnt+1].upper() == 'PANDAS'):
_settings["display"] = 'PANDAS'
else:
errormsg("Invalid DISPLAY value provided.")
cnt = cnt + 1
else:
errormsg("No value provided for the DISPLAY option.")
return
elif (cParms[cnt].upper() == 'LIST'):
print("(MAXROWS) Maximum number of rows displayed: " + str(_settings["maxrows"]))
print("(MAXGRID) Maximum grid display size: " + str(_settings["maxgrid"]))
print("(RUNTIME) How many seconds to a run a statement for performance testing: " + str(_settings["runtime"]))
print("(DISPLAY) Use PANDAS or GRID display format for output: " + _settings["display"])
return
else:
cnt = cnt + 1
save_settings()
###Output
_____no_output_____
###Markdown
SQL HelpThe calling format of this routine is:```sqlhelp()```This code displays help related to the %sql magic command. This help is displayed when you issue a %sql or %%sql command by itself, or use the %sql -h flag.
###Code
def sqlhelp():
global _environment
if (_environment["jupyter"] == True):
sd = '<td style="text-align:left;">'
ed1 = '</td>'
ed2 = '</td>'
sh = '<th style="text-align:left;">'
eh1 = '</th>'
eh2 = '</th>'
sr = '<tr>'
er = '</tr>'
helpSQL = """
<h3>SQL Options</h3>
<p>The following options are available as part of a SQL statement. The options are always preceded with a
minus sign (i.e. -q).
<table>
{sr}
{sh}Option{eh1}{sh}Description{eh2}
{er}
{sr}
{sd}a, all{ed1}{sd}Return all rows in answer set and do not limit display{ed2}
{er}
{sr}
{sd}d{ed1}{sd}Change SQL delimiter to "@" from ";"{ed2}
{er}
{sr}
        {sd}e, echo{ed1}{sd}Echo the SQL command that was generated after macro and variable substitution.{ed2}
{er}
{sr}
{sd}h, help{ed1}{sd}Display %sql help information.{ed2}
{er}
{sr}
{sd}j{ed1}{sd}Create a pretty JSON representation. Only the first column is formatted{ed2}
{er}
{sr}
{sd}json{ed1}{sd}Retrieve the result set as a JSON record{ed2}
{er}
{sr}
{sd}pb, bar{ed1}{sd}Plot the results as a bar chart{ed2}
{er}
{sr}
{sd}pl, line{ed1}{sd}Plot the results as a line chart{ed2}
{er}
{sr}
{sd}pp, pie{ed1}{sd}Plot Pie: Plot the results as a pie chart{ed2}
{er}
{sr}
{sd}q, quiet{ed1}{sd}Quiet results - no answer set or messages returned from the function{ed2}
{er}
{sr}
{sd}r, array{ed1}{sd}Return the result set as an array of values{ed2}
{er}
{sr}
{sd}sampledata{ed1}{sd}Create and load the EMPLOYEE and DEPARTMENT tables{ed2}
{er}
{sr}
{sd}t,time{ed1}{sd}Time the following SQL statement and return the number of times it executes in 1 second{ed2}
{er}
{sr}
{sd}grid{ed1}{sd}Display the results in a scrollable grid{ed2}
{er}
</table>
"""
else:
helpSQL = """
SQL Options
The following options are available as part of a SQL statement. Options are always
preceded with a minus sign (i.e. -q).
Option Description
a, all Return all rows in answer set and do not limit display
d Change SQL delimiter to "@" from ";"
e, echo Echo the SQL command that was generated after substitution
h, help Display %sql help information
j Create a pretty JSON representation. Only the first column is formatted
json Retrieve the result set as a JSON record
pb, bar Plot the results as a bar chart
pl, line Plot the results as a line chart
pp, pie Plot Pie: Plot the results as a pie chart
q, quiet Quiet results - no answer set or messages returned from the function
r, array Return the result set as an array of values
sampledata Create and load the EMPLOYEE and DEPARTMENT tables
t,time Time the SQL statement and return the execution count per second
grid Display the results in a scrollable grid
"""
helpSQL = helpSQL.format(**locals())
if (_environment["jupyter"] == True):
pdisplay(pHTML(helpSQL))
else:
print(helpSQL)
###Output
_____no_output_____
###Markdown
Connection HelpThe calling format of this routine is:```connected_help()```This code displays help related to the CONNECT command. This code is displayed when you issue a %sql CONNECT command with no arguments or you are running a SQL statement and there isn't any connection to a database yet.
###Code
def connected_help():
sd = '<td style="text-align:left;">'
ed = '</td>'
sh = '<th style="text-align:left;">'
eh = '</th>'
sr = '<tr>'
er = '</tr>'
if (_environment['jupyter'] == True):
helpConnect = """
<h3>Connecting to Db2</h3>
<p>The CONNECT command has the following format:
<p>
<pre>
%sql CONNECT TO <database> USER <userid> USING <password|?> HOST <ip address> PORT <port number> <SSL>
%sql CONNECT CREDENTIALS <varname>
%sql CONNECT CLOSE
%sql CONNECT RESET
%sql CONNECT PROMPT - use this to be prompted for values
</pre>
<p>
If you use a "?" for the password field, the system will prompt you for a password. This avoids typing the
password as clear text on the screen. If a connection is not successful, the system will print the error
message associated with the connect request.
<p>
The <b>CREDENTIALS</b> option allows you to use credentials that are supplied by Db2 on Cloud instances.
The credentials can be supplied as a variable and if successful, the variable will be saved to disk
for future use. If you create another notebook and use the identical syntax, if the variable
is not defined, the contents on disk will be used as the credentials. You should assign the
credentials to a variable that represents the database (or schema) that you are communicating with.
Using familiar names makes it easier to remember the credentials when connecting.
<p>
<b>CONNECT CLOSE</b> will close the current connection, but will not reset the database parameters. This means that
if you issue the CONNECT command again, the system should be able to reconnect you to the database.
<p>
<b>CONNECT RESET</b> will close the current connection and remove any information on the connection. You will need
to issue a new CONNECT statement with all of the connection information.
<p>
If the connection is successful, the parameters are saved on your system and will be used the next time you
run an SQL statement, or when you issue the %sql CONNECT command with no parameters.
<p>If you issue CONNECT RESET, all of the current values will be deleted and you will need to
issue a new CONNECT statement.
<p>A CONNECT command without any parameters will attempt to re-connect to the previous database you
        were using. If the connection could not be established, the program will prompt you for
the values. To cancel the connection attempt, enter a blank value for any of the values. The connection
panel will request the following values in order to connect to Db2:
<table>
{sr}
{sh}Setting{eh}
{sh}Description{eh}
{er}
{sr}
{sd}Database{ed}{sd}Database name you want to connect to.{ed}
{er}
{sr}
{sd}Hostname{ed}
        {sd}Use localhost if Db2 is running on your own machine, but this can be an IP address or host name.{ed}
{er}
{sr}
{sd}PORT{ed}
{sd}The port to use for connecting to Db2. This is usually 50000.{ed}
{er}
{sr}
{sd}SSL{ed}
{sd}If you are connecting to a secure port (50001) with SSL then you must include this keyword in the connect string.{ed}
        {er}
        {sr}
{sd}Userid{ed}
{sd}The userid to use when connecting (usually DB2INST1){ed}
{er}
{sr}
{sd}Password{ed}
{sd}No password is provided so you have to enter a value{ed}
{er}
</table>
"""
else:
helpConnect = """\
Connecting to Db2
The CONNECT command has the following format:
%sql CONNECT TO database USER userid USING password | ?
HOST ip address PORT port number SSL
%sql CONNECT CREDENTIALS varname
%sql CONNECT CLOSE
%sql CONNECT RESET
If you use a "?" for the password field, the system will prompt you for a password.
This avoids typing the password as clear text on the screen. If a connection is
not successful, the system will print the error message associated with the connect
request.
The CREDENTIALS option allows you to use credentials that are supplied by Db2 on
Cloud instances. The credentials can be supplied as a variable and if successful,
the variable will be saved to disk for future use. If you create another notebook
and use the identical syntax, if the variable is not defined, the contents on disk
will be used as the credentials. You should assign the credentials to a variable
that represents the database (or schema) that you are communicating with. Using
familiar names makes it easier to remember the credentials when connecting.
CONNECT CLOSE will close the current connection, but will not reset the database
parameters. This means that if you issue the CONNECT command again, the system
should be able to reconnect you to the database.
CONNECT RESET will close the current connection and remove any information on the
connection. You will need to issue a new CONNECT statement with all of the connection
information.
If the connection is successful, the parameters are saved on your system and will be
used the next time you run an SQL statement, or when you issue the %sql CONNECT
command with no parameters. If you issue CONNECT RESET, all of the current values
will be deleted and you will need to issue a new CONNECT statement.
A CONNECT command without any parameters will attempt to re-connect to the previous
    database you were using. If the connection could not be established, the program will
prompt you for the values. To cancel the connection attempt, enter a blank value for
any of the values. The connection panel will request the following values in order
to connect to Db2:
Setting Description
Database Database name you want to connect to
Hostname Use localhost if Db2 is running on your own machine, but this can
be an IP address or host name.
PORT The port to use for connecting to Db2. This is usually 50000.
Userid The userid to use when connecting (usually DB2INST1)
Password No password is provided so you have to enter a value
SSL Include this keyword to indicate you are connecting via SSL (usually port 50001)
"""
helpConnect = helpConnect.format(**locals())
if (_environment['jupyter'] == True):
pdisplay(pHTML(helpConnect))
else:
print(helpConnect)
###Output
_____no_output_____
###Markdown
Prompt for Connection InformationIf you are running an SQL statement and have not yet connected to a database, the %sql command will prompt you for connection information. In order to connect to a database, you must supply:- Database name - Host name (IP address or name)- Port number- Userid- Password- Secure socketThe routine is called without any parameters:```connected_prompt()```
###Code
# Prompt for Connection information
def connected_prompt():
global _settings
_database = ''
_hostname = ''
_port = ''
_uid = ''
_pwd = ''
_ssl = ''
print("Enter the database connection details (Any empty value will cancel the connection)")
_database = input("Enter the database name: ");
if (_database.strip() == ""): return False
_hostname = input("Enter the HOST IP address or symbolic name: ");
if (_hostname.strip() == ""): return False
_port = input("Enter the PORT number: ");
if (_port.strip() == ""): return False
_ssl = input("Is this a secure (SSL) port (y or n)");
if (_ssl.strip() == ""): return False
if (_ssl == "n"):
_ssl = ""
else:
_ssl = "Security=SSL;"
_uid = input("Enter Userid on the DB2 system: ").upper();
if (_uid.strip() == ""): return False
_pwd = getpass.getpass("Password [password]: ");
if (_pwd.strip() == ""): return False
_settings["database"] = _database.strip()
_settings["hostname"] = _hostname.strip()
_settings["port"] = _port.strip()
_settings["uid"] = _uid.strip()
_settings["pwd"] = _pwd.strip()
_settings["ssl"] = _ssl.strip()
_settings["maxrows"] = 10
_settings["maxgrid"] = 5
_settings["runtime"] = 1
return True
# Split port and IP addresses
def split_string(in_port,splitter=":"):
# Split input into an IP address and Port number
global _settings
checkports = in_port.split(splitter)
ip = checkports[0]
if (len(checkports) > 1):
port = checkports[1]
else:
port = None
return ip, port
###Output
_____no_output_____
###Markdown
Connect Syntax ParserThe parseConnect routine is used to parse the CONNECT command that the user issued within the %sql command. The format of the command is:```parseConnect(inSQL,local_ns)```The inSQL string contains the CONNECT keyword with some additional parameters. The format of the CONNECT command is one of:```CONNECT RESETCONNECT CLOSECONNECT CREDENTIALS varname CONNECT TO database USER userid USING password HOST hostname PORT portnumber ```If you have credentials available from Db2 on Cloud, place the contents of the credentials into a variable and then use the `CONNECT CREDENTIALS varname` syntax to connect to the database.In addition, supplying a question mark (?) for password will result in the program prompting you for the password rather than having it as clear text in your scripts.When all of the information is checked in the command, the db2_doConnect function is called to actually do the connection to the database.
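For example, a typical prompted-password connection to a local database might look like the following (the database name, userid, and port are illustrative, and mydb2credentials is a hypothetical variable holding a Db2 on Cloud credentials dictionary):

```
%sql CONNECT TO SAMPLE USER DB2INST1 USING ? HOST localhost PORT 50000
%sql CONNECT CREDENTIALS mydb2credentials
```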
###Code
# Parse the CONNECT statement and execute if possible
def parseConnect(inSQL,local_ns):
global _settings, _connected
_connected = False
cParms = inSQL.split()
cnt = 0
_settings["ssl"] = ""
while cnt < len(cParms):
if cParms[cnt].upper() == 'TO':
if cnt+1 < len(cParms):
_settings["database"] = cParms[cnt+1].upper()
cnt = cnt + 1
else:
errormsg("No database specified in the CONNECT statement")
return
elif cParms[cnt].upper() == "SSL":
_settings["ssl"] = "Security=SSL;"
cnt = cnt + 1
elif cParms[cnt].upper() == 'CREDENTIALS':
if cnt+1 < len(cParms):
credentials = cParms[cnt+1]
tempid = eval(credentials,local_ns)
if (isinstance(tempid,dict) == False):
errormsg("The CREDENTIALS variable (" + credentials + ") does not contain a valid Python dictionary (JSON object)")
return
if (tempid == None):
fname = credentials + ".pickle"
try:
with open(fname,'rb') as f:
_id = pickle.load(f)
except:
errormsg("Unable to find credential variable or file.")
return
else:
_id = tempid
try:
_settings["database"] = _id["db"]
_settings["hostname"] = _id["hostname"]
_settings["port"] = _id["port"]
_settings["uid"] = _id["username"]
_settings["pwd"] = _id["password"]
try:
fname = credentials + ".pickle"
with open(fname,'wb') as f:
pickle.dump(_id,f)
except:
errormsg("Failed trying to write Db2 Credentials.")
return
except:
errormsg("Credentials file is missing information. db/hostname/port/username/password required.")
return
else:
errormsg("No Credentials name supplied")
return
cnt = cnt + 1
elif cParms[cnt].upper() == 'USER':
if cnt+1 < len(cParms):
_settings["uid"] = cParms[cnt+1].upper()
cnt = cnt + 1
else:
errormsg("No userid specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'USING':
if cnt+1 < len(cParms):
_settings["pwd"] = cParms[cnt+1]
if (_settings["pwd"] == '?'):
_settings["pwd"] = getpass.getpass("Password [password]: ") or "password"
cnt = cnt + 1
else:
errormsg("No password specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'HOST':
if cnt+1 < len(cParms):
hostport = cParms[cnt+1].upper()
ip, port = split_string(hostport)
if (port == None): _settings["port"] = "50000"
_settings["hostname"] = ip
cnt = cnt + 1
else:
errormsg("No hostname specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'PORT':
if cnt+1 < len(cParms):
_settings["port"] = cParms[cnt+1].upper()
cnt = cnt + 1
else:
errormsg("No port specified in the CONNECT statement")
return
elif cParms[cnt].upper() == 'PROMPT':
if (connected_prompt() == False):
print("Connection canceled.")
return
else:
cnt = cnt + 1
elif cParms[cnt].upper() in ('CLOSE','RESET') :
try:
result = ibm_db.close(_hdbc)
_hdbi.close()
except:
pass
success("Connection closed.")
if cParms[cnt].upper() == 'RESET':
_settings["database"] = ''
return
else:
cnt = cnt + 1
_ = db2_doConnect()
###Output
_____no_output_____
###Markdown
Connect to Db2The db2_doConnect routine is called when a connection needs to be established to a Db2 database. The command does not require any parameters since it relies on the settings variable which contains all of the information it needs to connect to a Db2 database.```db2_doConnect()```There are 4 additional variables that are used throughout the routines to stay connected with the Db2 database. These variables are:- hdbc - The connection handle to the database- hstmt - A statement handle used for executing SQL statements- connected - A flag that tells the program whether or not we are currently connected to a database- runtime - Used to tell %sql the length of time (default 1 second) to run a statement when timing itThe only database driver that is used in this program is the IBM DB2 ODBC DRIVER. This driver needs to be loaded on the system that is connecting to Db2. The Jupyter notebook that is built by this system installs the driver for you so you shouldn't have to do anything other than build the container.If the connection is successful, the connected flag is set to True. Any subsequent %sql call will check to see if you are connected and initiate another prompted connection if you do not have a connection to a database.
###Code
def db2_doConnect():
global _hdbc, _hdbi, _connected, _runtime
global _settings
if _connected == False:
if len(_settings["database"]) == 0:
return False
dsn = (
"DRIVER={{IBM DB2 ODBC DRIVER}};"
"DATABASE={0};"
"HOSTNAME={1};"
"PORT={2};"
"PROTOCOL=TCPIP;"
"UID={3};"
"PWD={4};{5}").format(_settings["database"],
_settings["hostname"],
_settings["port"],
_settings["uid"],
_settings["pwd"],
_settings["ssl"])
# Get a database handle (hdbc) and a statement handle (hstmt) for subsequent access to DB2
try:
_hdbc = ibm_db.connect(dsn, "", "")
except Exception as err:
db2_error(False,True) # errormsg(str(err))
_connected = False
_settings["database"] = ''
return False
try:
_hdbi = ibm_db_dbi.Connection(_hdbc)
except Exception as err:
db2_error(False,True) # errormsg(str(err))
_connected = False
_settings["database"] = ''
return False
_connected = True
# Save the values for future use
save_settings()
success("Connection successful.")
return True
###Output
_____no_output_____
###Markdown
Load/Save SettingsThere are two routines that load and save settings between Jupyter notebooks. These routines are called without any parameters.```load_settings() save_settings()```There is a global structure called settings which contains the following fields:```_settings = { "maxrows" : 10, "maxgrid" : 5, "runtime" : 1, "display" : "TEXT", "database" : "", "hostname" : "localhost", "port" : "50000", "protocol" : "TCPIP", "uid" : "DB2INST1", "pwd" : "password"}```The information in the settings structure is used for re-connecting to a database when you start up a Jupyter notebook. When the session is established for the first time, the load_settings() function is called to get the contents of the pickle file (db2connect.pickle, a Jupyter session file) that will be used for the first connection to the database. Whenever a new connection is made, the file is updated with the save_settings() function.
###Code
def load_settings():
# This routine will load the settings from the previous session if they exist
global _settings
fname = "db2connect.pickle"
try:
with open(fname,'rb') as f:
_settings = pickle.load(f)
# Reset runtime to 1 since it would be unexpected to keep the same value between connections
_settings["runtime"] = 1
_settings["maxgrid"] = 5
except:
pass
return
def save_settings():
# This routine will save the current settings if they exist
global _settings
fname = "db2connect.pickle"
try:
with open(fname,'wb') as f:
pickle.dump(_settings,f)
except:
errormsg("Failed trying to write Db2 Configuration Information.")
return
###Output
_____no_output_____
###Markdown
Error and Message FunctionsThere are three types of messages that are produced by the %sql magic command. The first routine will print out a success message with no special formatting:```success(message)```The second routine is used for displaying an error message that is not associated with a SQL error. This type of error message is surrounded with a red box to highlight the problem. Note that the success message has code that has been commented out that could also show a successful return code with a green box. ```errormsg(message)```The final error message is based on an error occurring in the SQL code that was executed. This code will parse the message returned from the ibm_db interface and return only the error message portion (and not all of the wrapper code from the driver).```db2_error(quiet,connect=False)```The quiet flag is passed to the db2_error routine so that messages can be suppressed if the user wishes to ignore them with the -q flag. A good example of this is dropping a table that does not exist. We know that an error will be thrown so we can ignore it. The information that the db2_error routine gets is from the stmt_errormsg() function from within the ibm_db driver. The db2_error function should only be called after a SQL failure, otherwise there will be no diagnostic information returned from stmt_errormsg().If the connect flag is True, the routine will get the SQLSTATE and SQLCODE from the connection error message rather than a statement error message.
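Illustrative calls (the message text is hypothetical); in a notebook the error text appears in a red box, while in a plain shell it is simply printed:

```python
success("Connection successful.")
errormsg("The table you specified could not be found.")

# db2_error(False) is only meaningful immediately after a failed ibm_db call,
# since it reads its diagnostics from stmt_errormsg() or conn_errormsg()
```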
###Code
def db2_error(quiet,connect=False):
global sqlerror, sqlcode, sqlstate, _environment
try:
if (connect == False):
errmsg = ibm_db.stmt_errormsg().replace('\r',' ')
errmsg = errmsg[errmsg.rfind("]")+1:].strip()
else:
errmsg = ibm_db.conn_errormsg().replace('\r',' ')
errmsg = errmsg[errmsg.rfind("]")+1:].strip()
sqlerror = errmsg
msg_start = errmsg.find("SQLSTATE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlstate = errmsg[msg_start+9:msg_end]
else:
sqlstate = "0"
msg_start = errmsg.find("SQLCODE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlcode = errmsg[msg_start+8:msg_end]
try:
sqlcode = int(sqlcode)
except:
pass
else:
sqlcode = 0
except:
errmsg = "Unknown error."
sqlcode = -99999
sqlstate = "-99999"
sqlerror = errmsg
return
msg_start = errmsg.find("SQLSTATE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlstate = errmsg[msg_start+9:msg_end]
else:
sqlstate = "0"
msg_start = errmsg.find("SQLCODE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlcode = errmsg[msg_start+8:msg_end]
try:
sqlcode = int(sqlcode)
except:
pass
else:
sqlcode = 0
if quiet == True: return
if (errmsg == ""): return
html = '<p><p style="border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html+errmsg+"</p>"))
else:
print(errmsg)
# Print out an error message
def errormsg(message):
global _environment
if (message != ""):
html = '<p><p style="border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html + message + "</p>"))
else:
print(message)
def success(message):
if (message != ""):
print(message)
return
def debug(message,error=False):
global _environment
if (_environment["jupyter"] == True):
spacer = "<br>" + " "
else:
spacer = "\n "
if (message != ""):
lines = message.split('\n')
msg = ""
indent = 0
for line in lines:
delta = line.count("(") - line.count(")")
if (msg == ""):
msg = line
indent = indent + delta
else:
if (delta < 0): indent = indent + delta
msg = msg + spacer * (indent*2) + line
if (delta > 0): indent = indent + delta
if (indent < 0): indent = 0
if (error == True):
html = '<p><pre style="font-family: monospace; border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
else:
html = '<p><pre style="font-family: monospace; border:2px; border-style:solid; border-color:#008000; background-color:#e6ffe6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html + msg + "</pre></p>"))
else:
print(msg)
return
###Output
_____no_output_____
###Markdown
Macro ProcessorA macro is used to generate SQL to be executed by overriding or creating a new keyword. For instance, the base `%sql` command does not understand the `LIST TABLES` command which is usually used in conjunction with the `CLP` processor. Rather than specifically code this in the base `db2.ipynb` file, we can create a macro that can execute this code for us.There are four routines that deal with macros. - checkMacro is used to find the macro calls in a string. Any macro that is found is sent to runMacro for processing.- runMacro will evaluate the macro and return the resulting string to the parser- subvars is used to track the variables used as part of a macro call.- setMacro is used to catalog a macro Set MacroThis code will catalog a macro call.
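As a sketch, a hypothetical macro named EMPLIST could be cataloged and then invoked as if it were a new %sql keyword; the SQL body and the department argument are illustrative only:

```
%%sql define EMPLIST
#
# Return the employees for the department passed as the first argument
#
SELECT EMPNO, LASTNAME FROM EMPLOYEE
WHERE WORKDEPT = '{^1}'
```

After the cell has been run, `%sql EMPLIST a00` would expand to the SELECT statement with the upper-cased department substituted for `{^1}`.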
###Code
def setMacro(inSQL,parms):
global _macros
names = parms.split()
if (len(names) < 2):
errormsg("No command name supplied.")
return None
macroName = names[1].upper()
_macros[macroName] = inSQL
return
###Output
_____no_output_____
###Markdown
Check MacroThis code will check to see if there is a macro command in the SQL. It will take the SQL that is supplied and strip out three values: the first and second keywords, and the remainder of the parameters.For instance, consider the following statement:```CREATE DATABASE GEORGE options....```The name of the macro that we want to run is called `CREATE`. We know that there is a SQL command called `CREATE` but this code will call the macro first to see if needs to run any special code. For instance, `CREATE DATABASE` is not part of the `db2.ipynb` syntax, but we can add it in by using a macro.The check macro logic will strip out the subcommand (`DATABASE`) and place the remainder of the string after `DATABASE` in options.
###Code
def checkMacro(in_sql):
global _macros
if (len(in_sql) == 0): return(in_sql) # Nothing to do
tokens = parseArgs(in_sql,None) # Take the string and reduce into tokens
macro_name = tokens[0].upper() # Uppercase the name of the token
if (macro_name not in _macros):
return(in_sql) # No macro by this name so just return the string
result = runMacro(_macros[macro_name],in_sql,tokens) # Execute the macro using the tokens we found
return(result) # Runmacro will either return the original SQL or the new one
###Output
_____no_output_____
###Markdown
Split AssignmentThis routine will return the name of a variable and its value when the format is x=y. If y is enclosed in quotes, the quotes are removed.
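Illustrative calls with hypothetical arguments:

```python
splitassign("name='George'")  # -> ('name', 'George')       surrounding quotes are removed
splitassign("count=10")       # -> ('count', '10')
splitassign("standalone")     # -> ('null', 'standalone')   no '=' present
```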
###Code
def splitassign(arg):
var_name = "null"
var_value = "null"
arg = arg.strip()
eq = arg.find("=")
if (eq != -1):
var_name = arg[:eq].strip()
temp_value = arg[eq+1:].strip()
if (temp_value != ""):
ch = temp_value[0]
if (ch in ["'",'"']):
if (temp_value[-1:] == ch):
var_value = temp_value[1:-1]
else:
var_value = temp_value
else:
var_value = temp_value
else:
var_value = arg
return var_name, var_value
###Output
_____no_output_____
###Markdown
Parse Args The commands that are used in the macros need to be parsed into their separate tokens. The tokens are separated by blanks and strings that are enclosed in quotes are kept together.
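An illustrative call with a hypothetical command string; the quoted token keeps its blanks and its surrounding quotes:

```python
parseArgs("LIST TABLES FOR SCHEMA 'MY SCHEMA'", None)
# -> ['LIST', 'TABLES', 'FOR', 'SCHEMA', "'MY SCHEMA'"]
```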
###Code
def parseArgs(argin,_vars):
quoteChar = ""
inQuote = False
inArg = True
args = []
arg = ''
for ch in argin.lstrip():
if (inQuote == True):
if (ch == quoteChar):
inQuote = False
arg = arg + ch #z
else:
arg = arg + ch
elif (ch == "\"" or ch == "\'"): # Do we have a quote
quoteChar = ch
arg = arg + ch #z
inQuote = True
elif (ch == " "):
if (arg != ""):
arg = subvars(arg,_vars)
args.append(arg)
else:
args.append("null")
arg = ""
else:
arg = arg + ch
if (arg != ""):
arg = subvars(arg,_vars)
args.append(arg)
return(args)
###Output
_____no_output_____
###Markdown
Run MacroThis code will execute the body of the macro and return the results for that macro call.
###Code
def runMacro(script,in_sql,tokens):
result = ""
runIT = True
code = script.split("\n")
level = 0
runlevel = [True,False,False,False,False,False,False,False,False,False]
ifcount = 0
_vars = {}
for i in range(0,len(tokens)):
vstr = str(i)
_vars[vstr] = tokens[i]
if (len(tokens) == 0):
_vars["argc"] = "0"
else:
_vars["argc"] = str(len(tokens)-1)
for line in code:
line = line.strip()
if (line == "" or line == "\n"): continue
if (line[0] == "#"): continue # A comment line starts with a # in the first position of the line
args = parseArgs(line,_vars) # Get all of the arguments
if (args[0] == "if"):
ifcount = ifcount + 1
if (runlevel[level] == False): # You can't execute this statement
continue
level = level + 1
if (len(args) < 4):
print("Macro: Incorrect number of arguments for the if clause.")
                return in_sql
arg1 = args[1]
arg2 = args[3]
if (len(arg2) > 2):
ch1 = arg2[0]
ch2 = arg2[-1:]
if (ch1 in ['"',"'"] and ch1 == ch2):
arg2 = arg2[1:-1].strip()
op = args[2]
if (op in ["=","=="]):
if (arg1 == arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<=","=<"]):
if (arg1 <= arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in [">=","=>"]):
if (arg1 >= arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<>","!="]):
if (arg1 != arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<"]):
if (arg1 < arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in [">"]):
if (arg1 > arg2):
runlevel[level] = True
else:
runlevel[level] = False
else:
print("Macro: Unknown comparison operator in the if statement:" + op)
continue
elif (args[0] in ["exit","echo"] and runlevel[level] == True):
msg = ""
for msgline in args[1:]:
if (msg == ""):
msg = subvars(msgline,_vars)
else:
msg = msg + " " + subvars(msgline,_vars)
if (msg != ""):
if (args[0] == "echo"):
debug(msg,error=False)
else:
debug(msg,error=True)
if (args[0] == "exit"): return ''
elif (args[0] == "pass" and runlevel[level] == True):
pass
elif (args[0] == "var" and runlevel[level] == True):
value = ""
for val in args[2:]:
if (value == ""):
value = subvars(val,_vars)
else:
value = value + " " + subvars(val,_vars)
            value = value.strip()
_vars[args[1]] = value
elif (args[0] == 'else'):
if (ifcount == level):
runlevel[level] = not runlevel[level]
elif (args[0] == 'return' and runlevel[level] == True):
return(result)
elif (args[0] == "endif"):
ifcount = ifcount - 1
if (ifcount < level):
level = level - 1
if (level < 0):
print("Macro: Unmatched if/endif pairs.")
return ''
else:
if (runlevel[level] == True):
if (result == ""):
result = subvars(line,_vars)
else:
result = result + "\n" + subvars(line,_vars)
return(result)
###Output
_____no_output_____
###Markdown
 Substitute VarsThis routine is used by the runMacro program to substitute the values of variables that are used within macros. These variables are kept separate from the rest of the code.
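The substitution forms are `{n}` for a single token, `{^n}` to uppercase it, and `{*n}` for the token plus everything after it; an unknown variable becomes the string null. A small sketch of the expected behaviour:
```
tokens = {"0": "list", "1": "tables", "2": "for", "3": "all", "argc": "3"}
print(subvars("Running {^0} with {argc} arguments", tokens))  # Running LIST with 3 arguments
print(subvars("{*1}", tokens))                                # tables for all
print(subvars("{missing}", tokens))                           # null
```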
###Code
def subvars(script,_vars):
if (_vars == None): return script
remainder = script
result = ""
done = False
while done == False:
bv = remainder.find("{")
if (bv == -1):
done = True
continue
ev = remainder.find("}")
if (ev == -1):
done = True
continue
result = result + remainder[:bv]
vvar = remainder[bv+1:ev]
remainder = remainder[ev+1:]
upper = False
allvars = False
if (vvar[0] == "^"):
upper = True
vvar = vvar[1:]
elif (vvar[0] == "*"):
vvar = vvar[1:]
allvars = True
else:
pass
if (vvar in _vars):
if (upper == True):
items = _vars[vvar].upper()
elif (allvars == True):
try:
iVar = int(vvar)
except:
return(script)
items = ""
sVar = str(iVar)
while sVar in _vars:
if (items == ""):
items = _vars[sVar]
else:
items = items + " " + _vars[sVar]
iVar = iVar + 1
sVar = str(iVar)
else:
items = _vars[vvar]
else:
if (allvars == True):
items = ""
else:
items = "null"
result = result + items
if (remainder != ""):
result = result + remainder
return(result)
###Output
_____no_output_____
###Markdown
 SQL TimerThe calling format of this routine is:```count = sqlTimer(hdbc, runtime, inSQL)```This code runs the SQL string multiple times for one second (by default). The accuracy of the clock is not that great when you are running just one statement, so instead this routine will run the code multiple times for a second to give you an execution count. If you need to run the code for more than one second, the runtime value needs to be set to the number of seconds you want the code to run.The return result is always the number of times that the code executed. Note that the program will skip reading the data if it is a SELECT statement, so the fetch time for the answer set is not included.
###Code
def sqlTimer(hdbc, runtime, inSQL):
count = 0
t_end = time.time() + runtime
while time.time() < t_end:
try:
stmt = ibm_db.exec_immediate(hdbc,inSQL)
if (stmt == False):
db2_error(flag(["-q","-quiet"]))
return(-1)
ibm_db.free_result(stmt)
except Exception as err:
db2_error(False)
return(-1)
count = count + 1
return(count)
###Output
_____no_output_____
###Markdown
 Split ArgsThis routine takes as an argument a string and then splits the arguments according to the following logic:* If the string starts with a `(` character, it will check the last character in the string and see if it is a `)` and then remove those characters* Every parameter is separated by a comma `,` and commas within quotes are ignored* Each parameter returned will have three values returned - one for the value itself, an indicator which will be either True if it was quoted, or False if not, and True or False if it is numeric.Example:``` "abcdef",abcdef,456,"856"```The following would be returned:```[abcdef,True,False],[abcdef,False,False],[456,False,True],[856,True,False]```Any quoted string will be False for numeric. The way that the parameters are handled is up to the calling program. However, in the case of Db2, the quoted strings must be in single quotes so any quoted parameter using the double quotes `"` must be wrapped with single quotes. There is always a possibility that a string contains single quotes (i.e. O'Connor) so any substituted text should use `''` so that Db2 can properly interpret the string. This routine does not adjust the strings with quotes, and depends on the variable substitution routine to do that.
###Code
def splitargs(arguments):
import types
    # Strip the string and remove the ( and ) characters if they are at the beginning and end of the string
results = []
step1 = arguments.strip()
if (len(step1) == 0): return(results) # Not much to do here - no args found
if (step1[0] == '('):
if (step1[-1:] == ')'):
step2 = step1[1:-1]
step2 = step2.strip()
else:
step2 = step1
else:
step2 = step1
# Now we have a string without brackets. Start scanning for commas
quoteCH = ""
pos = 0
arg = ""
args = []
while pos < len(step2):
ch = step2[pos]
if (quoteCH == ""): # Are we in a quote?
if (ch in ('"',"'")): # Check to see if we are starting a quote
quoteCH = ch
arg = arg + ch
pos += 1
elif (ch == ","): # Are we at the end of a parameter?
arg = arg.strip()
args.append(arg)
arg = ""
inarg = False
pos += 1
else: # Continue collecting the string
arg = arg + ch
pos += 1
else:
if (ch == quoteCH): # Are we at the end of a quote?
arg = arg + ch # Add the quote to the string
pos += 1 # Increment past the quote
quoteCH = "" # Stop quote checking (maybe!)
else:
pos += 1
arg = arg + ch
if (quoteCH != ""): # So we didn't end our string
arg = arg.strip()
args.append(arg)
elif (arg != ""): # Something left over as an argument
arg = arg.strip()
args.append(arg)
else:
pass
results = []
for arg in args:
result = []
if (len(arg) > 0):
if (arg[0] in ('"',"'")):
value = arg[1:-1]
isString = True
isNumber = False
else:
isString = False
isNumber = False
try:
value = eval(arg)
if (type(value) == int):
isNumber = True
elif (isinstance(value,float) == True):
isNumber = True
else:
value = arg
except:
value = arg
else:
value = ""
isString = False
isNumber = False
result = [value,isString,isNumber]
results.append(result)
return results
###Output
_____no_output_____
###Markdown
 SQL ParserThe calling format of this routine is:```sql_cmd, encoded_sql = sqlParser(sql_input, local_ns)```This code will look at the SQL string that has been passed to it and parse it into two values:- sql_cmd: First command in the list (so this may not be the actual SQL command)- encoded_sql: the SQL with any :var references replaced with the contents of the corresponding Python variables (quoted as required)
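For example, with a variable in the namespace that is passed in, a `:var` reference is replaced with the (quoted) contents of that variable. This sketch assumes the getContents and addquotes routines further below have also been defined; the variable name and value are made up for illustration:
```
ns = {"empno": "000010"}
cmd, sql = sqlParser("SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO = :empno", ns)
print(cmd)  # SELECT
print(sql)  # SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO = '000010'
```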
###Code
def sqlParser(sqlin,local_ns):
sql_cmd = ""
encoded_sql = sqlin
    firstCommand = r"(?:^\s*)([a-zA-Z]+)(?:\s+.*|$)"
findFirst = re.match(firstCommand,sqlin)
if (findFirst == None): # We did not find a match so we just return the empty string
return sql_cmd, encoded_sql
cmd = findFirst.group(1)
sql_cmd = cmd.upper()
#
# Scan the input string looking for variables in the format :var. If no : is found just return.
# Var must be alpha+number+_ to be valid
#
if (':' not in sqlin): # A quick check to see if parameters are in here, but not fool-proof!
return sql_cmd, encoded_sql
inVar = False
inQuote = ""
varName = ""
encoded_sql = ""
STRING = 0
NUMBER = 1
LIST = 2
RAW = 3
PANDAS = 5
for ch in sqlin:
if (inVar == True): # We are collecting the name of a variable
if (ch.upper() in "@_ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789[]"):
varName = varName + ch
continue
else:
if (varName == ""):
                    encoded_sql = encoded_sql + ":"
elif (varName[0] in ('[',']')):
encoded_sql = encoded_sql + ":" + varName
else:
if (ch == '.'): # If the variable name is stopped by a period, assume no quotes are used
flag_quotes = False
else:
flag_quotes = True
varValue, varType = getContents(varName,flag_quotes,local_ns)
if (varType != PANDAS and varValue == None):
encoded_sql = encoded_sql + ":" + varName
else:
if (varType == STRING):
encoded_sql = encoded_sql + varValue
elif (varType == NUMBER):
encoded_sql = encoded_sql + str(varValue)
elif (varType == RAW):
encoded_sql = encoded_sql + varValue
elif (varType == PANDAS):
insertsql = ""
coltypes = varValue.dtypes
rows, cols = varValue.shape
for row in range(0,rows):
insertrow = ""
for col in range(0, cols):
value = varValue.iloc[row][col]
if (coltypes[col] == "object"):
value = str(value)
value = addquotes(value,True)
if (insertrow == ""):
insertrow = f"{value}"
else:
insertrow = f"{insertrow},{value}"
if (insertsql == ""):
insertsql = f"({insertrow})"
else:
insertsql = f"{insertsql},({insertrow})"
                            encoded_sql = encoded_sql + insertsql
elif (varType == LIST):
start = True
for v in varValue:
if (start == False):
encoded_sql = encoded_sql + ","
if (isinstance(v,int) == True): # Integer value
encoded_sql = encoded_sql + str(v)
elif (isinstance(v,float) == True):
encoded_sql = encoded_sql + str(v)
else:
flag_quotes = True
try:
if (v.find('0x') == 0): # Just guessing this is a hex value at beginning
encoded_sql = encoded_sql + v
else:
encoded_sql = encoded_sql + addquotes(v,flag_quotes) # String
except:
encoded_sql = encoded_sql + addquotes(str(v),flag_quotes)
start = False
encoded_sql = encoded_sql + ch
varName = ""
inVar = False
elif (inQuote != ""):
encoded_sql = encoded_sql + ch
if (ch == inQuote): inQuote = ""
elif (ch in ("'",'"')):
encoded_sql = encoded_sql + ch
inQuote = ch
elif (ch == ":"): # This might be a variable
varName = ""
inVar = True
else:
encoded_sql = encoded_sql + ch
if (inVar == True):
varValue, varType = getContents(varName,True,local_ns) # We assume the end of a line is quoted
if (varType != PANDAS and varValue == None):
encoded_sql = encoded_sql + ":" + varName
else:
if (varType == STRING):
encoded_sql = encoded_sql + varValue
elif (varType == NUMBER):
encoded_sql = encoded_sql + str(varValue)
elif (varType == PANDAS):
insertsql = ""
coltypes = varValue.dtypes
rows, cols = varValue.shape
for row in range(0,rows):
insertrow = ""
for col in range(0, cols):
value = varValue.iloc[row][col]
if (coltypes[col] == "object"):
value = str(value)
value = addquotes(value,True)
if (insertrow == ""):
insertrow = f"{value}"
else:
insertrow = f"{insertrow},{value}"
if (insertsql == ""):
insertsql = f"({insertrow})"
else:
insertsql = f"{insertsql},({insertrow})"
encoded_sql = encoded_sql + insertsql
elif (varType == LIST):
flag_quotes = True
start = True
for v in varValue:
if (start == False):
encoded_sql = encoded_sql + ","
if (isinstance(v,int) == True): # Integer value
encoded_sql = encoded_sql + str(v)
elif (isinstance(v,float) == True):
encoded_sql = encoded_sql + str(v)
else:
try:
if (v.find('0x') == 0): # Just guessing this is a hex value
encoded_sql = encoded_sql + v
else:
encoded_sql = encoded_sql + addquotes(v,flag_quotes) # String
except:
encoded_sql = encoded_sql + addquotes(str(v),flag_quotes)
start = False
return sql_cmd, encoded_sql
###Output
_____no_output_____
###Markdown
 Variable Contents FunctionThe calling format of this routine is:```value, var_type = getContents(varName, flag_quotes, name_space)```This code will take the name of a variable as input and return the contents of that variable. If the variable is not found then the program will return None which is the equivalent to empty or null. Note that this function looks at the global variable pool for Python so it is possible that the wrong version of a variable is returned if it is used in different functions. For this reason, any variables used in SQL statements should use a unique naming convention if possible.The other thing that this function does is replace single quotes with two quotes. The reason for doing this is that Db2 will convert two single quotes into one quote when dealing with strings. This avoids problems when dealing with text that contains multiple quotes within the string. Note that this substitution is done only for single quote characters since the double quote character is used by Db2 for naming columns that are case sensitive or contain special characters.If the flag_quotes value is True, the field will have quotes around it. The name_space contains the variables that are currently registered in Python.
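A small sketch of the quoting and type detection (STRING=0, NUMBER=1), using a made-up namespace dictionary; the expected results are shown as comments:
```
ns = {"lastname": "O'Connor", "salary": 52000}
print(getContents("lastname", True, ns))  # ("'O''Connor'", 0) - quoted, embedded quote doubled
print(getContents("salary", True, ns))    # (52000, 1)         - numbers are never quoted
print(getContents("unknown", True, ns))   # (None, 0)          - variable not found
```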
###Code
def getContents(varName,flag_quotes,local_ns):
#
# Get the contents of the variable name that is passed to the routine. Only simple
# variables are checked, i.e. arrays and lists are not parsed
#
STRING = 0
NUMBER = 1
LIST = 2
RAW = 3
DICT = 4
PANDAS = 5
try:
value = eval(varName,None,local_ns) # globals()[varName] # eval(varName)
except:
return(None,STRING)
if (isinstance(value,dict) == True): # Check to see if this is JSON dictionary
return(addquotes(value,flag_quotes),STRING)
elif(isinstance(value,list) == True): # List - tricky
return(value,LIST)
elif (isinstance(value,pandas.DataFrame) == True): # Pandas dataframe
return(value,PANDAS)
elif (isinstance(value,int) == True): # Integer value
return(value,NUMBER)
elif (isinstance(value,float) == True): # Float value
return(value,NUMBER)
else:
try:
# The pattern needs to be in the first position (0 in Python terms)
if (value.find('0x') == 0): # Just guessing this is a hex value
return(value,RAW)
else:
return(addquotes(value,flag_quotes),STRING) # String
except:
return(addquotes(str(value),flag_quotes),RAW)
###Output
_____no_output_____
###Markdown
 Add QuotesQuotes are a challenge when dealing with dictionaries and Db2. Db2 wants strings delimited with single quotes, while dictionaries use double quotes. That wouldn't be a problem except that embedded single quotes within these dictionaries will cause things to fail. This routine doubles the single quotes within the dictionary (or string) and, when requested, wraps the result in single quotes.
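A few illustrative calls, with the expected output shown as comments:
```
print(addquotes("O'Connor", True))       # 'O''Connor'
print(addquotes({"name": "Tom"}, True))  # '{"name": "Tom"}'
print(addquotes("SYSIBM", False))        # SYSIBM - returned untouched when no quoting is requested
```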
###Code
def addquotes(inString,flag_quotes):
if (isinstance(inString,dict) == True): # Check to see if this is JSON dictionary
serialized = json.dumps(inString)
else:
serialized = inString
# Replace single quotes with '' (two quotes) and wrap everything in single quotes
if (flag_quotes == False):
return(serialized)
else:
return("'"+serialized.replace("'","''")+"'") # Convert single quotes to two single quotes
###Output
_____no_output_____
###Markdown
Create the SAMPLE Database TablesThe calling format of this routine is:```db2_create_sample(quiet)```There are a lot of examples that depend on the data within the SAMPLE database. If you are running these examples and the connection is not to the SAMPLE database, then this code will create the two (EMPLOYEE, DEPARTMENT) tables that are used by most examples. If the function finds that these tables already exist, then nothing is done. If the tables are missing then they will be created with the same data as in the SAMPLE database.The quiet flag tells the program not to print any messages when the creation of the tables is complete.
###Code
def db2_create_sample(quiet):
create_department = """
BEGIN
DECLARE FOUND INTEGER;
SET FOUND = (SELECT COUNT(*) FROM SYSIBM.SYSTABLES WHERE NAME='DEPARTMENT' AND CREATOR=CURRENT USER);
IF FOUND = 0 THEN
EXECUTE IMMEDIATE('CREATE TABLE DEPARTMENT(DEPTNO CHAR(3) NOT NULL, DEPTNAME VARCHAR(36) NOT NULL,
MGRNO CHAR(6),ADMRDEPT CHAR(3) NOT NULL)');
EXECUTE IMMEDIATE('INSERT INTO DEPARTMENT VALUES
(''A00'',''SPIFFY COMPUTER SERVICE DIV.'',''000010'',''A00''),
(''B01'',''PLANNING'',''000020'',''A00''),
(''C01'',''INFORMATION CENTER'',''000030'',''A00''),
(''D01'',''DEVELOPMENT CENTER'',NULL,''A00''),
(''D11'',''MANUFACTURING SYSTEMS'',''000060'',''D01''),
(''D21'',''ADMINISTRATION SYSTEMS'',''000070'',''D01''),
(''E01'',''SUPPORT SERVICES'',''000050'',''A00''),
(''E11'',''OPERATIONS'',''000090'',''E01''),
(''E21'',''SOFTWARE SUPPORT'',''000100'',''E01''),
(''F22'',''BRANCH OFFICE F2'',NULL,''E01''),
(''G22'',''BRANCH OFFICE G2'',NULL,''E01''),
(''H22'',''BRANCH OFFICE H2'',NULL,''E01''),
(''I22'',''BRANCH OFFICE I2'',NULL,''E01''),
(''J22'',''BRANCH OFFICE J2'',NULL,''E01'')');
END IF;
END"""
%sql -d -q {create_department}
create_employee = """
BEGIN
DECLARE FOUND INTEGER;
SET FOUND = (SELECT COUNT(*) FROM SYSIBM.SYSTABLES WHERE NAME='EMPLOYEE' AND CREATOR=CURRENT USER);
IF FOUND = 0 THEN
EXECUTE IMMEDIATE('CREATE TABLE EMPLOYEE(
EMPNO CHAR(6) NOT NULL,
FIRSTNME VARCHAR(12) NOT NULL,
MIDINIT CHAR(1),
LASTNAME VARCHAR(15) NOT NULL,
WORKDEPT CHAR(3),
PHONENO CHAR(4),
HIREDATE DATE,
JOB CHAR(8),
EDLEVEL SMALLINT NOT NULL,
SEX CHAR(1),
BIRTHDATE DATE,
SALARY DECIMAL(9,2),
BONUS DECIMAL(9,2),
COMM DECIMAL(9,2)
)');
EXECUTE IMMEDIATE('INSERT INTO EMPLOYEE VALUES
(''000010'',''CHRISTINE'',''I'',''HAAS'' ,''A00'',''3978'',''1995-01-01'',''PRES '',18,''F'',''1963-08-24'',152750.00,1000.00,4220.00),
(''000020'',''MICHAEL'' ,''L'',''THOMPSON'' ,''B01'',''3476'',''2003-10-10'',''MANAGER '',18,''M'',''1978-02-02'',94250.00,800.00,3300.00),
(''000030'',''SALLY'' ,''A'',''KWAN'' ,''C01'',''4738'',''2005-04-05'',''MANAGER '',20,''F'',''1971-05-11'',98250.00,800.00,3060.00),
(''000050'',''JOHN'' ,''B'',''GEYER'' ,''E01'',''6789'',''1979-08-17'',''MANAGER '',16,''M'',''1955-09-15'',80175.00,800.00,3214.00),
(''000060'',''IRVING'' ,''F'',''STERN'' ,''D11'',''6423'',''2003-09-14'',''MANAGER '',16,''M'',''1975-07-07'',72250.00,500.00,2580.00),
(''000070'',''EVA'' ,''D'',''PULASKI'' ,''D21'',''7831'',''2005-09-30'',''MANAGER '',16,''F'',''2003-05-26'',96170.00,700.00,2893.00),
(''000090'',''EILEEN'' ,''W'',''HENDERSON'' ,''E11'',''5498'',''2000-08-15'',''MANAGER '',16,''F'',''1971-05-15'',89750.00,600.00,2380.00),
(''000100'',''THEODORE'' ,''Q'',''SPENSER'' ,''E21'',''0972'',''2000-06-19'',''MANAGER '',14,''M'',''1980-12-18'',86150.00,500.00,2092.00),
(''000110'',''VINCENZO'' ,''G'',''LUCCHESSI'' ,''A00'',''3490'',''1988-05-16'',''SALESREP'',19,''M'',''1959-11-05'',66500.00,900.00,3720.00),
(''000120'',''SEAN'' ,'' '',''O`CONNELL'' ,''A00'',''2167'',''1993-12-05'',''CLERK '',14,''M'',''1972-10-18'',49250.00,600.00,2340.00),
(''000130'',''DELORES'' ,''M'',''QUINTANA'' ,''C01'',''4578'',''2001-07-28'',''ANALYST '',16,''F'',''1955-09-15'',73800.00,500.00,1904.00),
(''000140'',''HEATHER'' ,''A'',''NICHOLLS'' ,''C01'',''1793'',''2006-12-15'',''ANALYST '',18,''F'',''1976-01-19'',68420.00,600.00,2274.00),
(''000150'',''BRUCE'' ,'' '',''ADAMSON'' ,''D11'',''4510'',''2002-02-12'',''DESIGNER'',16,''M'',''1977-05-17'',55280.00,500.00,2022.00),
(''000160'',''ELIZABETH'',''R'',''PIANKA'' ,''D11'',''3782'',''2006-10-11'',''DESIGNER'',17,''F'',''1980-04-12'',62250.00,400.00,1780.00),
(''000170'',''MASATOSHI'',''J'',''YOSHIMURA'' ,''D11'',''2890'',''1999-09-15'',''DESIGNER'',16,''M'',''1981-01-05'',44680.00,500.00,1974.00),
(''000180'',''MARILYN'' ,''S'',''SCOUTTEN'' ,''D11'',''1682'',''2003-07-07'',''DESIGNER'',17,''F'',''1979-02-21'',51340.00,500.00,1707.00),
(''000190'',''JAMES'' ,''H'',''WALKER'' ,''D11'',''2986'',''2004-07-26'',''DESIGNER'',16,''M'',''1982-06-25'',50450.00,400.00,1636.00),
(''000200'',''DAVID'' ,'' '',''BROWN'' ,''D11'',''4501'',''2002-03-03'',''DESIGNER'',16,''M'',''1971-05-29'',57740.00,600.00,2217.00),
(''000210'',''WILLIAM'' ,''T'',''JONES'' ,''D11'',''0942'',''1998-04-11'',''DESIGNER'',17,''M'',''2003-02-23'',68270.00,400.00,1462.00),
(''000220'',''JENNIFER'' ,''K'',''LUTZ'' ,''D11'',''0672'',''1998-08-29'',''DESIGNER'',18,''F'',''1978-03-19'',49840.00,600.00,2387.00),
(''000230'',''JAMES'' ,''J'',''JEFFERSON'' ,''D21'',''2094'',''1996-11-21'',''CLERK '',14,''M'',''1980-05-30'',42180.00,400.00,1774.00),
(''000240'',''SALVATORE'',''M'',''MARINO'' ,''D21'',''3780'',''2004-12-05'',''CLERK '',17,''M'',''2002-03-31'',48760.00,600.00,2301.00),
(''000250'',''DANIEL'' ,''S'',''SMITH'' ,''D21'',''0961'',''1999-10-30'',''CLERK '',15,''M'',''1969-11-12'',49180.00,400.00,1534.00),
(''000260'',''SYBIL'' ,''P'',''JOHNSON'' ,''D21'',''8953'',''2005-09-11'',''CLERK '',16,''F'',''1976-10-05'',47250.00,300.00,1380.00),
(''000270'',''MARIA'' ,''L'',''PEREZ'' ,''D21'',''9001'',''2006-09-30'',''CLERK '',15,''F'',''2003-05-26'',37380.00,500.00,2190.00),
(''000280'',''ETHEL'' ,''R'',''SCHNEIDER'' ,''E11'',''8997'',''1997-03-24'',''OPERATOR'',17,''F'',''1976-03-28'',36250.00,500.00,2100.00),
(''000290'',''JOHN'' ,''R'',''PARKER'' ,''E11'',''4502'',''2006-05-30'',''OPERATOR'',12,''M'',''1985-07-09'',35340.00,300.00,1227.00),
(''000300'',''PHILIP'' ,''X'',''SMITH'' ,''E11'',''2095'',''2002-06-19'',''OPERATOR'',14,''M'',''1976-10-27'',37750.00,400.00,1420.00),
(''000310'',''MAUDE'' ,''F'',''SETRIGHT'' ,''E11'',''3332'',''1994-09-12'',''OPERATOR'',12,''F'',''1961-04-21'',35900.00,300.00,1272.00),
(''000320'',''RAMLAL'' ,''V'',''MEHTA'' ,''E21'',''9990'',''1995-07-07'',''FIELDREP'',16,''M'',''1962-08-11'',39950.00,400.00,1596.00),
(''000330'',''WING'' ,'' '',''LEE'' ,''E21'',''2103'',''2006-02-23'',''FIELDREP'',14,''M'',''1971-07-18'',45370.00,500.00,2030.00),
(''000340'',''JASON'' ,''R'',''GOUNOT'' ,''E21'',''5698'',''1977-05-05'',''FIELDREP'',16,''M'',''1956-05-17'',43840.00,500.00,1907.00),
(''200010'',''DIAN'' ,''J'',''HEMMINGER'' ,''A00'',''3978'',''1995-01-01'',''SALESREP'',18,''F'',''1973-08-14'',46500.00,1000.00,4220.00),
(''200120'',''GREG'' ,'' '',''ORLANDO'' ,''A00'',''2167'',''2002-05-05'',''CLERK '',14,''M'',''1972-10-18'',39250.00,600.00,2340.00),
(''200140'',''KIM'' ,''N'',''NATZ'' ,''C01'',''1793'',''2006-12-15'',''ANALYST '',18,''F'',''1976-01-19'',68420.00,600.00,2274.00),
(''200170'',''KIYOSHI'' ,'' '',''YAMAMOTO'' ,''D11'',''2890'',''2005-09-15'',''DESIGNER'',16,''M'',''1981-01-05'',64680.00,500.00,1974.00),
(''200220'',''REBA'' ,''K'',''JOHN'' ,''D11'',''0672'',''2005-08-29'',''DESIGNER'',18,''F'',''1978-03-19'',69840.00,600.00,2387.00),
(''200240'',''ROBERT'' ,''M'',''MONTEVERDE'',''D21'',''3780'',''2004-12-05'',''CLERK '',17,''M'',''1984-03-31'',37760.00,600.00,2301.00),
(''200280'',''EILEEN'' ,''R'',''SCHWARTZ'' ,''E11'',''8997'',''1997-03-24'',''OPERATOR'',17,''F'',''1966-03-28'',46250.00,500.00,2100.00),
(''200310'',''MICHELLE'' ,''F'',''SPRINGER'' ,''E11'',''3332'',''1994-09-12'',''OPERATOR'',12,''F'',''1961-04-21'',35900.00,300.00,1272.00),
(''200330'',''HELENA'' ,'' '',''WONG'' ,''E21'',''2103'',''2006-02-23'',''FIELDREP'',14,''F'',''1971-07-18'',35370.00,500.00,2030.00),
(''200340'',''ROY'' ,''R'',''ALONZO'' ,''E21'',''5698'',''1997-07-05'',''FIELDREP'',16,''M'',''1956-05-17'',31840.00,500.00,1907.00)');
END IF;
END"""
%sql -d -q {create_employee}
if (quiet == False): success("Sample tables [EMPLOYEE, DEPARTMENT] created.")
###Output
_____no_output_____
###Markdown
 Check optionThis function will return the original string with the option removed, and a flag of True or False indicating whether the option was found.```args, flag = checkOption(option_string, option, false_value, true_value)```Options are specified with a -x where x is the character that we are searching for. It may actually be more than one character long like -pb/-pi/etc... The false and true values are optional. By default these are the boolean values of T/F but for some options it could be a character string like ';' versus '@' for delimiters.
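For illustration, the following calls should behave roughly as the comments indicate:
```
args, quiet = checkOption("-q SELECT * FROM EMPLOYEE", "-q")
print(args)   # SELECT * FROM EMPLOYEE
print(quiet)  # True
args, delim = checkOption(args, "-d", ";", "@")
print(delim)  # ; - the false value, since -d was not present
```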
###Code
def checkOption(args_in, option, vFalse=False, vTrue=True):
args_out = args_in.strip()
found = vFalse
if (args_out != ""):
if (args_out.find(option) >= 0):
args_out = args_out.replace(option," ")
args_out = args_out.strip()
found = vTrue
return args_out, found
###Output
_____no_output_____
###Markdown
 Plot DataThis function will plot the data that is returned from the answer set. The display flags (-pb/-bar, -pp/-pie, -pl/-line) determine whether the data is shown as a bar, pie, or line chart.```plotData(hdbi, sql)```The hdbi is the ibm_db_sa handle that is used by pandas dataframes to run the SQL.
###Code
def plotData(hdbi, sql):
try:
df = pandas.read_sql(sql,hdbi)
except Exception as err:
db2_error(False)
return
if df.empty:
errormsg("No results returned")
return
col_count = len(df.columns)
if flag(["-pb","-bar"]): # Plot 1 = bar chart
if (col_count in (1,2,3)):
if (col_count == 1):
df.index = df.index + 1
_ = df.plot(kind='bar');
_ = plt.plot();
elif (col_count == 2):
xlabel = df.columns.values[0]
ylabel = df.columns.values[1]
df.plot(kind='bar',x=xlabel,y=ylabel);
_ = plt.plot();
else:
values = df.columns.values[2]
columns = df.columns.values[0]
index = df.columns.values[1]
pivoted = pandas.pivot_table(df, values=values, columns=columns, index=index)
_ = pivoted.plot.bar();
else:
errormsg("Can't determine what columns to plot")
return
elif flag(["-pp","-pie"]): # Plot 2 = pie chart
if (col_count in (1,2)):
if (col_count == 1):
df.index = df.index + 1
yname = df.columns.values[0]
_ = df.plot(kind='pie',y=yname);
else:
xlabel = df.columns.values[0]
xname = df[xlabel].tolist()
yname = df.columns.values[1]
_ = df.plot(kind='pie',y=yname,labels=xname);
plt.show();
else:
errormsg("Can't determine what columns to plot")
return
elif flag(["-pl","-line"]): # Plot 3 = line chart
if (col_count in (1,2,3)):
if (col_count == 1):
df.index = df.index + 1
_ = df.plot(kind='line');
elif (col_count == 2):
xlabel = df.columns.values[0]
ylabel = df.columns.values[1]
_ = df.plot(kind='line',x=xlabel,y=ylabel) ;
else:
values = df.columns.values[2]
columns = df.columns.values[0]
index = df.columns.values[1]
pivoted = pandas.pivot_table(df, values=values, columns=columns, index=index)
_ = pivoted.plot();
plt.show();
else:
errormsg("Can't determine what columns to plot")
return
else:
return
###Output
_____no_output_____
###Markdown
Find a ProcedureThis routine will check to see if a procedure exists with the SCHEMA/NAME (or just NAME if no schema is supplied) and returns the number of answer sets returned. Possible values are 0, 1 (or greater) or None. If None is returned then we can't find the procedure anywhere.
###Code
def findProc(procname):
global _hdbc, _hdbi, _connected, _runtime
# Split the procedure name into schema.procname if appropriate
upper_procname = procname.upper()
schema, proc = split_string(upper_procname,".") # Expect schema.procname
if (proc == None):
proc = schema
# Call ibm_db.procedures to see if the procedure does exist
schema = "%"
try:
stmt = ibm_db.procedures(_hdbc, None, schema, proc)
if (stmt == False): # Error executing the code
errormsg("Procedure " + procname + " not found in the system catalog.")
return None
result = ibm_db.fetch_tuple(stmt)
resultsets = result[5]
if (resultsets >= 1): resultsets = 1
return resultsets
except Exception as err:
errormsg("Procedure " + procname + " not found in the system catalog.")
return None
###Output
_____no_output_____
###Markdown
Parse Call ArgumentsThis code will parse a SQL call name(parm1,...) and return the name and the parameters in the call.
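For example (EMPLOYEE_DETAILS is a made-up procedure name; note that the surrounding quotes are stripped from quoted parameters):
```
name, parms = parseCallArgs("EMPLOYEE_DETAILS('000010',:dept,null)")
print(name)   # EMPLOYEE_DETAILS
print(parms)  # ['000010', ':dept', 'null']
```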
###Code
def parseCallArgs(macro):
quoteChar = ""
inQuote = False
inParm = False
ignore = False
name = ""
parms = []
parm = ''
sqlin = macro.replace("\n","")
    sqlin = sqlin.lstrip()
for ch in sqlin:
if (inParm == False):
# We hit a blank in the name, so ignore everything after the procedure name until a ( is found
if (ch == " "):
                ignore = True
elif (ch == "("): # Now we have parameters to send to the stored procedure
inParm = True
else:
if (ignore == False): name = name + ch # The name of the procedure (and no blanks)
else:
if (inQuote == True):
if (ch == quoteChar):
inQuote = False
else:
parm = parm + ch
elif (ch in ("\"","\'","[")): # Do we have a quote
if (ch == "["):
quoteChar = "]"
else:
quoteChar = ch
inQuote = True
elif (ch == ")"):
if (parm != ""):
parms.append(parm)
parm = ""
break
elif (ch == ","):
if (parm != ""):
parms.append(parm)
else:
parms.append("null")
parm = ""
else:
parm = parm + ch
if (inParm == True):
if (parm != ""):
            parms.append(parm)
return(name,parms)
###Output
_____no_output_____
###Markdown
Get ColumnsGiven a statement handle, determine what the column names are or the data types.
###Code
def getColumns(stmt):
columns = []
types = []
colcount = 0
try:
colname = ibm_db.field_name(stmt,colcount)
coltype = ibm_db.field_type(stmt,colcount)
while (colname != False):
columns.append(colname)
types.append(coltype)
colcount += 1
colname = ibm_db.field_name(stmt,colcount)
coltype = ibm_db.field_type(stmt,colcount)
return columns,types
except Exception as err:
db2_error(False)
return None
###Output
_____no_output_____
###Markdown
Call a ProcedureThe CALL statement is used for execution of a stored procedure. The format of the CALL statement is:```CALL PROC_NAME(x,y,z,...)```Procedures allow for the return of answer sets (cursors) as well as changing the contents of the parameters being passed to the procedure. In this implementation, the CALL function is limited to returning one answer set (or nothing). If you want to use more complex stored procedures then you will have to use the native python libraries.
###Code
def parseCall(hdbc, inSQL, local_ns):
global _hdbc, _hdbi, _connected, _runtime, _environment
# Check to see if we are connected first
if (_connected == False): # Check if you are connected
db2_doConnect()
if _connected == False: return None
remainder = inSQL.strip()
procName, procArgs = parseCallArgs(remainder[5:]) # Assume that CALL ... is the format
resultsets = findProc(procName)
if (resultsets == None): return None
argvalues = []
if (len(procArgs) > 0): # We have arguments to consider
for arg in procArgs:
varname = arg
if (len(varname) > 0):
if (varname[0] == ":"):
checkvar = varname[1:]
                    varvalue, vartype = getContents(checkvar,True,local_ns)
if (varvalue == None):
errormsg("Variable " + checkvar + " is not defined.")
return None
argvalues.append(varvalue)
else:
if (varname.upper() == "NULL"):
argvalues.append(None)
else:
argvalues.append(varname)
else:
argvalues.append(None)
try:
if (len(procArgs) > 0):
argtuple = tuple(argvalues)
result = ibm_db.callproc(_hdbc,procName,argtuple)
stmt = result[0]
else:
result = ibm_db.callproc(_hdbc,procName)
stmt = result
if (resultsets != 0 and stmt != None):
columns, types = getColumns(stmt)
if (columns == None): return None
rows = []
rowlist = ibm_db.fetch_tuple(stmt)
while ( rowlist ) :
row = []
colcount = 0
for col in rowlist:
try:
if (types[colcount] in ["int","bigint"]):
row.append(int(col))
elif (types[colcount] in ["decimal","real"]):
row.append(float(col))
elif (types[colcount] in ["date","time","timestamp"]):
row.append(str(col))
else:
row.append(col)
except:
row.append(col)
colcount += 1
rows.append(row)
rowlist = ibm_db.fetch_tuple(stmt)
if flag(["-r","-array"]):
rows.insert(0,columns)
if len(procArgs) > 0:
allresults = []
allresults.append(rows)
for x in result[1:]:
allresults.append(x)
return allresults # rows,returned_results
else:
return rows
else:
df = pandas.DataFrame.from_records(rows,columns=columns)
if flag("-grid") or _settings['display'] == 'GRID':
if (_environment['qgrid'] == False):
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
pdisplay(df)
else:
try:
pdisplay(qgrid.show_grid(df))
except:
errormsg("Grid cannot be used to display data with duplicate column names. Use option -a or %sql OPTION DISPLAY PANDAS instead.")
return
else:
if flag(["-a","-all"]) or _settings["maxrows"] == -1 : # All of the rows
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
pdisplay(df)
else:
return df
else:
if len(procArgs) > 0:
allresults = []
for x in result[1:]:
allresults.append(x)
return allresults # rows,returned_results
else:
return None
except Exception as err:
db2_error(False)
return None
###Output
_____no_output_____
###Markdown
 Parse Prepare/ExecuteThe PREPARE statement is used for repeated execution of a SQL statement. The PREPARE statement has the format:```stmt = PREPARE SELECT EMPNO FROM EMPLOYEE WHERE WORKDEPT=? AND SALARY<?```The SQL statement that you want executed is placed after the PREPARE statement with the location of variables marked with ? (parameter) markers. The variable stmt contains the prepared statement that needs to be passed to the EXECUTE statement. The EXECUTE statement has the format:```EXECUTE :x USING z, y, s ```The first variable (:x) is the name of the variable that you assigned the results of the prepare statement to. The values after the USING clause are substituted into the prepare statement where the ? markers are found. If the values in the USING clause are variable names (z, y, s), a **link** is created to these variables as part of the execute statement. If you use the variable substitution form of variable name (:z, :y, :s), the **contents** of the variable are placed into the USING clause. Normally this would not make much of a difference except when you are dealing with binary strings or JSON strings where the quote characters may cause some problems when substituted into the statement.
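Putting the two statements together, the usage pattern looks roughly like this (a sketch assuming an active connection and the EMPLOYEE sample table):
```
stmt = %sql PREPARE SELECT EMPNO, LASTNAME FROM EMPLOYEE WHERE WORKDEPT=? AND SALARY<?
%sql EXECUTE :stmt USING 'D11',60000
%sql EXECUTE :stmt USING 'E21',50000
```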
###Code
def parsePExec(hdbc, inSQL):
import ibm_db
global _stmt, _stmtID, _stmtSQL, sqlcode
cParms = inSQL.split()
parmCount = len(cParms)
if (parmCount == 0): return(None) # Nothing to do but this shouldn't happen
keyword = cParms[0].upper() # Upper case the keyword
if (keyword == "PREPARE"): # Prepare the following SQL
uSQL = inSQL.upper()
found = uSQL.find("PREPARE")
sql = inSQL[found+7:].strip()
try:
            pattern = r"\?\*[0-9]+"
findparm = re.search(pattern,sql)
while findparm != None:
found = findparm.group(0)
count = int(found[2:])
markers = ('?,' * count)[:-1]
sql = sql.replace(found,markers)
findparm = re.search(pattern,sql)
stmt = ibm_db.prepare(hdbc,sql) # Check error code here
if (stmt == False):
db2_error(False)
return(False)
stmttext = str(stmt).strip()
stmtID = stmttext[33:48].strip()
if (stmtID in _stmtID) == False:
_stmt.append(stmt) # Prepare and return STMT to caller
_stmtID.append(stmtID)
else:
stmtIX = _stmtID.index(stmtID)
                _stmt[stmtIX] = stmt
return(stmtID)
except Exception as err:
print(err)
db2_error(False)
return(False)
if (keyword == "EXECUTE"): # Execute the prepare statement
if (parmCount < 2): return(False) # No stmtID available
stmtID = cParms[1].strip()
if (stmtID in _stmtID) == False:
errormsg("Prepared statement not found or invalid.")
return(False)
stmtIX = _stmtID.index(stmtID)
stmt = _stmt[stmtIX]
try:
if (parmCount == 2): # Only the statement handle available
result = ibm_db.execute(stmt) # Run it
elif (parmCount == 3): # Not quite enough arguments
errormsg("Missing or invalid USING clause on EXECUTE statement.")
sqlcode = -99999
return(False)
else:
using = cParms[2].upper()
if (using != "USING"): # Bad syntax again
errormsg("Missing USING clause on EXECUTE statement.")
sqlcode = -99999
return(False)
uSQL = inSQL.upper()
found = uSQL.find("USING")
parmString = inSQL[found+5:].strip()
parmset = splitargs(parmString)
if (len(parmset) == 0):
errormsg("Missing parameters after the USING clause.")
sqlcode = -99999
return(False)
parms = []
parm_count = 0
CONSTANT = 0
VARIABLE = 1
const = [0]
const_cnt = 0
for v in parmset:
parm_count = parm_count + 1
if (v[1] == True or v[2] == True): # v[1] true if string, v[2] true if num
parm_type = CONSTANT
const_cnt = const_cnt + 1
if (v[2] == True):
if (isinstance(v[0],int) == True): # Integer value
sql_type = ibm_db.SQL_INTEGER
elif (isinstance(v[0],float) == True): # Float value
sql_type = ibm_db.SQL_DOUBLE
else:
sql_type = ibm_db.SQL_INTEGER
else:
sql_type = ibm_db.SQL_CHAR
const.append(v[0])
else:
parm_type = VARIABLE
# See if the variable has a type associated with it varname@type
varset = v[0].split("@")
parm_name = varset[0]
parm_datatype = "char"
# Does the variable exist?
if (parm_name not in globals()):
errormsg("SQL Execute parameter " + parm_name + " not found")
sqlcode = -99999
                            return(False)
if (len(varset) > 1): # Type provided
parm_datatype = varset[1]
if (parm_datatype == "dec" or parm_datatype == "decimal"):
sql_type = ibm_db.SQL_DOUBLE
elif (parm_datatype == "bin" or parm_datatype == "binary"):
sql_type = ibm_db.SQL_BINARY
elif (parm_datatype == "int" or parm_datatype == "integer"):
sql_type = ibm_db.SQL_INTEGER
else:
sql_type = ibm_db.SQL_CHAR
try:
if (parm_type == VARIABLE):
result = ibm_db.bind_param(stmt, parm_count, globals()[parm_name], ibm_db.SQL_PARAM_INPUT, sql_type)
else:
result = ibm_db.bind_param(stmt, parm_count, const[const_cnt], ibm_db.SQL_PARAM_INPUT, sql_type)
except:
result = False
if (result == False):
errormsg("SQL Bind on variable " + parm_name + " failed.")
sqlcode = -99999
                        return(False)
result = ibm_db.execute(stmt) # ,tuple(parms))
if (result == False):
errormsg("SQL Execute failed.")
return(False)
if (ibm_db.num_fields(stmt) == 0): return(True) # Command successfully completed
return(fetchResults(stmt))
except Exception as err:
db2_error(False)
return(False)
return(False)
return(False)
###Output
_____no_output_____
###Markdown
Fetch Result SetThis code will take the stmt handle and then produce a result set of rows as either an array (`-r`,`-array`) or as an array of json records (`-json`).
###Code
def fetchResults(stmt):
global sqlcode
rows = []
columns, types = getColumns(stmt)
# By default we assume that the data will be an array
is_array = True
# Check what type of data we want returned - array or json
if (flag(["-r","-array"]) == False):
# See if we want it in JSON format, if not it remains as an array
if (flag("-json") == True):
is_array = False
# Set column names to lowercase for JSON records
if (is_array == False):
        columns = [col.lower() for col in columns] # Convert to lowercase for ease of access
# First row of an array has the column names in it
if (is_array == True):
rows.append(columns)
result = ibm_db.fetch_tuple(stmt)
rowcount = 0
while (result):
rowcount += 1
if (is_array == True):
row = []
else:
row = {}
colcount = 0
for col in result:
try:
if (types[colcount] in ["int","bigint"]):
if (is_array == True):
row.append(int(col))
else:
row[columns[colcount]] = int(col)
elif (types[colcount] in ["decimal","real"]):
if (is_array == True):
row.append(float(col))
else:
row[columns[colcount]] = float(col)
elif (types[colcount] in ["date","time","timestamp"]):
if (is_array == True):
row.append(str(col))
else:
row[columns[colcount]] = str(col)
else:
if (is_array == True):
row.append(col)
else:
row[columns[colcount]] = col
except:
if (is_array == True):
row.append(col)
else:
row[columns[colcount]] = col
colcount += 1
rows.append(row)
result = ibm_db.fetch_tuple(stmt)
if (rowcount == 0):
sqlcode = 100
else:
sqlcode = 0
return rows
###Output
_____no_output_____
###Markdown
 Parse CommitThere are three possible COMMIT verbs that can be used:- COMMIT [WORK] - Commit the work in progress - The WORK keyword is not checked for- ROLLBACK - Roll back the unit of work- AUTOCOMMIT ON/OFF - Are statements committed on or off?The statement is passed to this routine and then checked.
###Code
def parseCommit(sql):
global _hdbc, _hdbi, _connected, _runtime, _stmt, _stmtID, _stmtSQL
if (_connected == False): return # Nothing to do if we are not connected
cParms = sql.split()
if (len(cParms) == 0): return # Nothing to do but this shouldn't happen
keyword = cParms[0].upper() # Upper case the keyword
if (keyword == "COMMIT"): # Commit the work that was done
try:
result = ibm_db.commit (_hdbc) # Commit the connection
if (len(cParms) > 1):
keyword = cParms[1].upper()
if (keyword == "HOLD"):
return
del _stmt[:]
del _stmtID[:]
except Exception as err:
db2_error(False)
return
if (keyword == "ROLLBACK"): # Rollback the work that was done
try:
result = ibm_db.rollback(_hdbc) # Rollback the connection
del _stmt[:]
del _stmtID[:]
except Exception as err:
db2_error(False)
return
if (keyword == "AUTOCOMMIT"): # Is autocommit on or off
if (len(cParms) > 1):
op = cParms[1].upper() # Need ON or OFF value
else:
return
try:
if (op == "OFF"):
ibm_db.autocommit(_hdbc, False)
elif (op == "ON"):
ibm_db.autocommit (_hdbc, True)
return
except Exception as err:
db2_error(False)
return
return
###Output
_____no_output_____
###Markdown
Set FlagsThis code will take the input SQL block and update the global flag list. The global flag list is just a list of options that are set at the beginning of a code block. The absence of a flag means it is false. If it exists it is true.
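For example, the flags are stripped off and recorded while the rest of the statement is returned:
```
remaining = setFlags("-q -d SELECT 1 FROM SYSIBM.SYSDUMMY1")
print(remaining)  # SELECT 1 FROM SYSIBM.SYSDUMMY1
print(_flags)     # ['-q', '-d']
```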
###Code
def setFlags(inSQL):
global _flags
_flags = [] # Delete all of the current flag settings
pos = 0
end = len(inSQL)-1
inFlag = False
ignore = False
outSQL = ""
flag = ""
while (pos <= end):
ch = inSQL[pos]
if (ignore == True):
outSQL = outSQL + ch
else:
if (inFlag == True):
if (ch != " "):
flag = flag + ch
else:
_flags.append(flag)
inFlag = False
else:
if (ch == "-"):
flag = "-"
inFlag = True
elif (ch == ' '):
outSQL = outSQL + ch
else:
outSQL = outSQL + ch
ignore = True
pos += 1
if (inFlag == True):
_flags.append(flag)
return outSQL
###Output
_____no_output_____
###Markdown
Check to see if flag ExistsThis function determines whether or not a flag exists in the global flag array. Absence of a value means it is false. The parameter can be a single value, or an array of values.
###Code
def flag(inflag):
global _flags
if isinstance(inflag,list):
for x in inflag:
if (x in _flags):
return True
return False
else:
if (inflag in _flags):
return True
else:
return False
###Output
_____no_output_____
###Markdown
Generate a list of SQL lines based on a delimiterNote that this function will make sure that quotes are properly maintained so that delimiters inside of quoted strings do not cause errors.
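For example, a delimiter inside a quoted string does not split the statement:
```
print(splitSQL("SELECT 'a;b' FROM SYSIBM.SYSDUMMY1; VALUES 2", ";"))
# ["SELECT 'a;b' FROM SYSIBM.SYSDUMMY1", ' VALUES 2']
```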
###Code
def splitSQL(inputString, delimiter):
pos = 0
arg = ""
results = []
quoteCH = ""
inSQL = inputString.strip()
if (len(inSQL) == 0): return(results) # Not much to do here - no args found
while pos < len(inSQL):
ch = inSQL[pos]
pos += 1
if (ch in ('"',"'")): # Is this a quote characters?
arg = arg + ch # Keep appending the characters to the current arg
if (ch == quoteCH): # Is this quote character we are in
quoteCH = ""
elif (quoteCH == ""): # Create the quote
quoteCH = ch
else:
None
elif (quoteCH != ""): # Still in a quote
arg = arg + ch
elif (ch == delimiter): # Is there a delimiter?
results.append(arg)
arg = ""
else:
arg = arg + ch
if (arg != ""):
results.append(arg)
return(results)
###Output
_____no_output_____
###Markdown
Main %sql Magic DefinitionThe main %sql Magic logic is found in this section of code. This code will register the Magic command and allow Jupyter notebooks to interact with Db2 by using this extension.
###Code
@magics_class
class DB2(Magics):
@needs_local_scope
@line_cell_magic
def sql(self, line, cell=None, local_ns=None):
        # Before we even get started, check to see if you have connected yet. Without a connection we
# can't do anything. You may have a connection request in the code, so if that is true, we run those,
# otherwise we connect immediately
# If your statement is not a connect, and you haven't connected, we need to do it for you
global _settings, _environment
global _hdbc, _hdbi, _connected, _runtime, sqlstate, sqlerror, sqlcode, sqlelapsed
# If you use %sql (line) we just run the SQL. If you use %%SQL the entire cell is run.
flag_cell = False
flag_output = False
sqlstate = "0"
sqlerror = ""
sqlcode = 0
sqlelapsed = 0
start_time = time.time()
end_time = time.time()
# Macros gets expanded before anything is done
SQL1 = setFlags(line.strip())
SQL1 = checkMacro(SQL1) # Update the SQL if any macros are in there
SQL2 = cell
if flag("-sampledata"): # Check if you only want sample data loaded
if (_connected == False):
if (db2_doConnect() == False):
errormsg('A CONNECT statement must be issued before issuing SQL statements.')
return
db2_create_sample(flag(["-q","-quiet"]))
return
if SQL1 == "?" or flag(["-h","-help"]): # Are you asking for help
sqlhelp()
return
if len(SQL1) == 0 and SQL2 == None: return # Nothing to do here
# Check for help
if SQL1.upper() == "? CONNECT": # Are you asking for help on CONNECT
connected_help()
return
sqlType,remainder = sqlParser(SQL1,local_ns) # What type of command do you have?
if (sqlType == "CONNECT"): # A connect request
parseConnect(SQL1,local_ns)
return
elif (sqlType == "DEFINE"): # Create a macro from the body
result = setMacro(SQL2,remainder)
return
elif (sqlType == "OPTION"):
setOptions(SQL1)
return
elif (sqlType == 'COMMIT' or sqlType == 'ROLLBACK' or sqlType == 'AUTOCOMMIT'):
parseCommit(remainder)
return
elif (sqlType == "PREPARE"):
pstmt = parsePExec(_hdbc, remainder)
return(pstmt)
elif (sqlType == "EXECUTE"):
result = parsePExec(_hdbc, remainder)
return(result)
elif (sqlType == "CALL"):
result = parseCall(_hdbc, remainder, local_ns)
return(result)
else:
pass
sql = SQL1
if (sql == ""): sql = SQL2
if (sql == ""): return # Nothing to do here
if (_connected == False):
if (db2_doConnect() == False):
errormsg('A CONNECT statement must be issued before issuing SQL statements.')
return
if _settings["maxrows"] == -1: # Set the return result size
pandas.reset_option('display.max_rows')
else:
pandas.options.display.max_rows = _settings["maxrows"]
runSQL = re.sub('.*?--.*$',"",sql,flags=re.M)
remainder = runSQL.replace("\n"," ")
if flag(["-d","-delim"]):
sqlLines = splitSQL(remainder,"@")
else:
sqlLines = splitSQL(remainder,";")
flag_cell = True
# For each line figure out if you run it as a command (db2) or select (sql)
for sqlin in sqlLines: # Run each command
sqlin = checkMacro(sqlin) # Update based on any macros
sqlType, sql = sqlParser(sqlin,local_ns) # Parse the SQL
if (sql.strip() == ""): continue
if flag(["-e","-echo"]): debug(sql,False)
if flag("-t"):
cnt = sqlTimer(_hdbc, _settings["runtime"], sql) # Given the sql and parameters, clock the time
if (cnt >= 0): print("Total iterations in %s second(s): %s" % (_settings["runtime"],cnt))
return(cnt)
elif flag(["-pb","-bar","-pp","-pie","-pl","-line"]): # We are plotting some results
plotData(_hdbi, sql) # Plot the data and return
return
else:
try: # See if we have an answer set
stmt = ibm_db.prepare(_hdbc,sql)
if (ibm_db.num_fields(stmt) == 0): # No, so we just execute the code
result = ibm_db.execute(stmt) # Run it
if (result == False): # Error executing the code
db2_error(flag(["-q","-quiet"]))
continue
rowcount = ibm_db.num_rows(stmt)
if (rowcount == 0 and flag(["-q","-quiet"]) == False):
errormsg("No rows found.")
continue # Continue running
elif flag(["-r","-array","-j","-json"]): # raw, json, format json
row_count = 0
resultSet = []
try:
result = ibm_db.execute(stmt) # Run it
if (result == False): # Error executing the code
db2_error(flag(["-q","-quiet"]))
return
if flag("-j"): # JSON single output
row_count = 0
json_results = []
while( ibm_db.fetch_row(stmt) ):
row_count = row_count + 1
jsonVal = ibm_db.result(stmt,0)
jsonDict = json.loads(jsonVal)
json_results.append(jsonDict)
flag_output = True
if (row_count == 0): sqlcode = 100
return(json_results)
else:
return(fetchResults(stmt))
except Exception as err:
db2_error(flag(["-q","-quiet"]))
return
else:
try:
df = pandas.read_sql(sql,_hdbi)
except Exception as err:
db2_error(False)
return
if (len(df) == 0):
sqlcode = 100
if (flag(["-q","-quiet"]) == False):
errormsg("No rows found")
continue
flag_output = True
if flag("-grid") or _settings['display'] == 'GRID': # Check to see if we can display the results
if (_environment['qgrid'] == False):
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
print(df.to_string())
else:
try:
pdisplay(qgrid.show_grid(df))
except:
errormsg("Grid cannot be used to display data with duplicate column names. Use option -a or %sql OPTION DISPLAY PANDAS instead.")
return
else:
if flag(["-a","-all"]) or _settings["maxrows"] == -1 : # All of the rows
pandas.options.display.max_rows = None
pandas.options.display.max_columns = None
return df # print(df.to_string())
else:
pandas.options.display.max_rows = _settings["maxrows"]
pandas.options.display.max_columns = None
return df # pdisplay(df) # print(df.to_string())
except:
db2_error(flag(["-q","-quiet"]))
continue # return
end_time = time.time()
sqlelapsed = end_time - start_time
if (flag_output == False and flag(["-q","-quiet"]) == False): print("Command completed.")
# Register the Magic extension in Jupyter
ip = get_ipython()
ip.register_magics(DB2)
load_settings()
success("Db2 Extensions Loaded.")
###Output
_____no_output_____
###Markdown
Pre-defined MacrosThese macros are used to simulate the LIST TABLES and DESCRIBE commands that are available from within the Db2 command line.
###Code
%%sql define LIST
#
# The LIST macro is used to list all of the tables in the current schema or for all schemas
#
var syntax Syntax: LIST TABLES [FOR ALL | FOR SCHEMA name]
#
# Only LIST TABLES is supported by this macro
#
if {^1} <> 'TABLES'
exit {syntax}
endif
#
# This SQL is a temporary table that contains the description of the different table types
#
WITH TYPES(TYPE,DESCRIPTION) AS (
VALUES
('A','Alias'),
('G','Created temporary table'),
('H','Hierarchy table'),
('L','Detached table'),
('N','Nickname'),
('S','Materialized query table'),
('T','Table'),
('U','Typed table'),
('V','View'),
('W','Typed view')
)
SELECT TABNAME, TABSCHEMA, T.DESCRIPTION FROM SYSCAT.TABLES S, TYPES T
WHERE T.TYPE = S.TYPE
#
# Case 1: No arguments - LIST TABLES
#
if {argc} == 1
AND OWNER = CURRENT USER
ORDER BY TABNAME, TABSCHEMA
return
endif
#
# Case 2: Need 3 arguments - LIST TABLES FOR ALL
#
if {argc} == 3
if {^2}&{^3} == 'FOR&ALL'
ORDER BY TABNAME, TABSCHEMA
return
endif
exit {syntax}
endif
#
# Case 3: Need FOR SCHEMA something here
#
if {argc} == 4
if {^2}&{^3} == 'FOR&SCHEMA'
AND TABSCHEMA = '{^4}'
ORDER BY TABNAME, TABSCHEMA
return
else
exit {syntax}
endif
endif
#
# Nothing matched - Error
#
exit {syntax}
%%sql define describe
#
# The DESCRIBE command can either use the syntax DESCRIBE TABLE <name> or DESCRIBE TABLE SELECT ...
#
var syntax Syntax: DESCRIBE [TABLE name | SELECT statement]
#
# Check to see what count of variables is... Must be at least 2 items DESCRIBE TABLE x or SELECT x
#
if {argc} < 2
exit {syntax}
endif
CALL ADMIN_CMD('{*0}');
###Output
_____no_output_____
###Markdown
Set the table formatting to left align a table in a cell. By default, tables are centered in a cell. Remove this cell if you don't want to change Jupyter notebook formatting for tables. In addition, we skip this code if you are running in a shell environment rather than a Jupyter notebook
###Code
#%%html
#<style>
# table {margin-left: 0 !important; text-align: left;}
#</style>
###Output
_____no_output_____ |
benchmarks/Linear regression.ipynb | ###Markdown
Linear regression This notebook compares various linear regression implementations. The dataset used is the [Toulouse bike sharing dataset](https://creme-ml.github.io/generated/creme.datasets.fetch_bikes.htmlcreme.datasets.fetch_bikes).
###Code
%load_ext watermark
%watermark --python --machine --packages creme,keras,sklearn,tensorflow,torch --datename
from creme import compose
from creme import datasets
from creme import feature_extraction
from creme import linear_model
from creme import metrics
from creme import optim
from creme import preprocessing
from creme import stats
from keras import layers
from keras import models
from keras import optimizers
from sklearn import linear_model as sk_linear_model
import torch
%run utils.py
%run wrappers.py
n_features = 6
lr = 0.005
class PyTorchNet(torch.nn.Module):
def __init__(self, n_features):
super().__init__()
self.linear = torch.nn.Linear(n_features, 1)
torch.nn.init.constant_(self.linear.weight, 0)
torch.nn.init.constant_(self.linear.bias, 0)
def forward(self, x):
return self.linear(x)
torch_model = PyTorchNet(n_features=n_features)
# Keras
inputs = layers.Input(shape=(n_features,))
predictions = layers.Dense(1, kernel_initializer='zeros', bias_initializer='zeros')(inputs)
keras_model = models.Model(inputs=inputs, outputs=predictions)
keras_model.compile(optimizer=optimizers.SGD(lr=lr), loss='mean_squared_error')
def add_hour(x):
x['hour'] = x['moment'].hour
return x
results = benchmark(
get_X_y=datasets.fetch_bikes,
n=182470,
get_pp=lambda: (
compose.Whitelister('clouds', 'humidity', 'pressure', 'temperature', 'wind') +
(
add_hour |
feature_extraction.TargetAgg(by=['station', 'hour'], how=stats.Mean())
) |
preprocessing.StandardScaler()
),
models=[
('creme', 'LinearRegression', linear_model.LinearRegression(
optimizer=optim.SGD(lr),
l2=0.,
intercept_lr=lr
)),
('scikit-learn', 'SGDRegressor', ScikitLearnRegressor(
model=sk_linear_model.SGDRegressor(
learning_rate='constant',
eta0=lr,
penalty='none'
),
)),
('PyTorch (CPU)', 'Linear', PyTorchRegressor(
network=torch_model,
loss_fn=torch.nn.MSELoss(),
optimizer=torch.optim.SGD(torch_model.parameters(), lr=lr)
)),
('Keras on Tensorflow (CPU)', 'Dense', KerasRegressor(
model=keras_model
)),
],
get_metric=metrics.MSE
)
results
###Output
_____no_output_____
###Markdown
Linear regression This notebook compares various linear regression implementations. The dataset used is the [Toulouse bike sharing dataset](https://creme-ml.github.io/generated/creme.datasets.fetch_bikes.htmlcreme.datasets.fetch_bikes).
###Code
%load_ext watermark
%watermark --python --machine --packages creme,keras,sklearn,tensorflow,torch,vowpalwabbit --datename
from creme import compose
from creme import datasets
from creme import feature_extraction
from creme import linear_model
from creme import metrics
from creme import optim
from creme import preprocessing
from creme import stats
from keras import layers
from keras import models
from keras import optimizers
from sklearn import linear_model as sk_linear_model
import torch
%run utils.py
%run wrappers.py
n_features = 6
lr = 0.005
class PyTorchNet(torch.nn.Module):
def __init__(self, n_features):
super().__init__()
self.linear = torch.nn.Linear(n_features, 1)
torch.nn.init.constant_(self.linear.weight, 0)
torch.nn.init.constant_(self.linear.bias, 0)
def forward(self, x):
return self.linear(x)
torch_model = PyTorchNet(n_features=n_features)
# Keras
inputs = layers.Input(shape=(n_features,))
predictions = layers.Dense(1, kernel_initializer='zeros', bias_initializer='zeros')(inputs)
keras_model = models.Model(inputs=inputs, outputs=predictions)
keras_model.compile(optimizer=optimizers.SGD(lr=lr), loss='mean_squared_error')
def add_hour(x):
x['hour'] = x['moment'].hour
return x
results = benchmark(
get_X_y=datasets.fetch_bikes,
n=182470,
get_pp=lambda: (
compose.Select('clouds', 'humidity', 'pressure', 'temperature', 'wind') +
(
add_hour |
feature_extraction.TargetAgg(by=['station', 'hour'], how=stats.Mean())
) |
preprocessing.StandardScaler()
),
models=[
('creme', 'LinearRegression', linear_model.LinearRegression(
optimizer=optim.SGD(lr),
l2=0.,
intercept_lr=lr
)),
('scikit-learn', 'SGDRegressor', ScikitLearnRegressor(
model=sk_linear_model.SGDRegressor(
learning_rate='constant',
eta0=lr,
penalty='none'
),
)),
('PyTorch (CPU)', 'Linear', PyTorchRegressor(
network=torch_model,
loss_fn=torch.nn.MSELoss(),
optimizer=torch.optim.SGD(torch_model.parameters(), lr=lr)
)),
('Keras on Tensorflow (CPU)', 'Dense', KerasRegressor(
model=keras_model
)),
('Vowpal Wabbit', '', VowpalWabbitRegressor(
loss_function='squared',
sgd=True,
learning_rate=lr,
adaptive=False,
normalized=False,
invariant=False,
            initial_weight=0,
l2=0.,
l1=0.,
power_t=0
))
],
get_metric=metrics.MSE
)
results
results
###Output
_____no_output_____
###Markdown
Linear regression This notebook compares various linear regression implementations. The dataset used is the [Toulouse bike sharing dataset](https://online-ml.github.io/generated/river.datasets.fetch_bikes.htmlriver.datasets.fetch_bikes).
###Code
%load_ext watermark
%watermark --python --machine --packages river,keras,sklearn,tensorflow,torch,vowpalwabbit --datename
from river import compose
from river import datasets
from river import feature_extraction
from river import linear_model
from river import metrics
from river import optim
from river import preprocessing
from river import stats
from keras import layers
from keras import models
from keras import optimizers
from sklearn import linear_model as sk_linear_model
import torch
%run utils.py
%run wrappers.py
n_features = 6
lr = 0.005
class PyTorchNet(torch.nn.Module):
def __init__(self, n_features):
super().__init__()
self.linear = torch.nn.Linear(n_features, 1)
torch.nn.init.constant_(self.linear.weight, 0)
torch.nn.init.constant_(self.linear.bias, 0)
def forward(self, x):
return self.linear(x)
torch_model = PyTorchNet(n_features=n_features)
# Keras
inputs = layers.Input(shape=(n_features,))
predictions = layers.Dense(1, kernel_initializer='zeros', bias_initializer='zeros')(inputs)
keras_model = models.Model(inputs=inputs, outputs=predictions)
keras_model.compile(optimizer=optimizers.SGD(lr=lr), loss='mean_squared_error')
def add_hour(x):
x['hour'] = x['moment'].hour
return x
results = benchmark(
get_X_y=datasets.fetch_bikes,
n=182470,
get_pp=lambda: (
compose.Select('clouds', 'humidity', 'pressure', 'temperature', 'wind') +
(
add_hour |
feature_extraction.TargetAgg(by=['station', 'hour'], how=stats.Mean())
) |
preprocessing.StandardScaler()
),
models=[
('river', 'LinearRegression', linear_model.LinearRegression(
optimizer=optim.SGD(lr),
l2=0.,
intercept_lr=lr
)),
('scikit-learn', 'SGDRegressor', ScikitLearnRegressor(
model=sk_linear_model.SGDRegressor(
learning_rate='constant',
eta0=lr,
penalty='none'
),
)),
('PyTorch (CPU)', 'Linear', PyTorchRegressor(
network=torch_model,
loss_fn=torch.nn.MSELoss(),
optimizer=torch.optim.SGD(torch_model.parameters(), lr=lr)
)),
('Keras on Tensorflow (CPU)', 'Dense', KerasRegressor(
model=keras_model
)),
('Vowpal Wabbit', '', VowpalWabbitRegressor(
loss_function='squared',
sgd=True,
learning_rate=lr,
adaptive=False,
normalized=False,
invariant=False,
            initial_weight=0,
l2=0.,
l1=0.,
power_t=0
))
],
get_metric=metrics.MSE
)
results
###Output
_____no_output_____
###Markdown
Linear regression This notebook compares various linear regression implementations. The dataset used is the [Toulouse bike sharing dataset](https://online-ml.github.io/generated/river.datasets.fetch_bikes.html#river.datasets.fetch_bikes).
###Code
%load_ext watermark
%watermark --python --machine --packages river,sklearn,torch,vowpalwabbit --datename
###Output
Tue Nov 24 2020
CPython 3.8.5
IPython 7.19.0
river 0.1.0
sklearn 0.23.2
torch 1.7.0
vowpalwabbit unknown
compiler : Clang 10.0.0
system : Darwin
release : 20.1.0
machine : x86_64
processor : i386
CPU cores : 8
interpreter: 64bit
###Markdown
Common parameters.
###Code
N_FEATURES = 6
LEARNING_RATE = 0.005
###Output
_____no_output_____
###Markdown
River model.
###Code
from river import linear_model
from river import optim
river_model = linear_model.LinearRegression(
optimizer=optim.SGD(LEARNING_RATE),
l2=0.,
intercept_lr=LEARNING_RATE
)
###Output
_____no_output_____
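###Markdown
A minimal sketch of how this river model consumes the stream, one observation at a time: it predicts before seeing the label and then performs a single SGD update. The feature names and values below are made up for illustration; the real inputs come out of the preprocessing pipeline defined further down.
###Code
# Hypothetical, already-preprocessed observation (names and values are invented).
x = {'clouds': 0.5, 'humidity': -0.3, 'pressure': 1.1,
     'temperature': -0.7, 'wind': 0.2, 'target_mean_by_station_and_hour': 0.9}
y = 3  # invented target
y_pred = river_model.predict_one(x)  # predict before seeing the label...
river_model.learn_one(x, y)          # ...then perform one SGD update
###Output
_____no_output_____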
###Markdown
scikit-learn model.
###Code
from river import compat
from sklearn import linear_model
sklearn_model = compat.SKL2RiverRegressor(
linear_model.SGDRegressor(
learning_rate='constant',
eta0=LEARNING_RATE,
penalty='none'
)
)
###Output
_____no_output_____
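###Markdown
For comparison, here is a rough sketch of what the `SKL2RiverRegressor` wrapper presumably boils down to for each sample: turning the feature dict into a 1 x n_features array and calling `partial_fit` on the underlying `SGDRegressor`. This is an assumption about the wrapper's behaviour, not its actual code, and the numbers are invented.
###Code
import numpy as np
from sklearn import linear_model as sk_lm
raw_sgd = sk_lm.SGDRegressor(learning_rate='constant', eta0=LEARNING_RATE, penalty='none')
x_i = np.array([[0.3, -1.2, 0.5, 0.8, -0.1, 1.4]])  # one sample, six preprocessed features (invented)
y_i = np.array([3.0])                               # invented target
raw_sgd.partial_fit(x_i, y_i)                       # one incremental SGD step
raw_sgd.predict(x_i)
###Output
_____no_output_____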
###Markdown
PyTorch model.
###Code
import torch
class PyTorchNet(torch.nn.Module):
def __init__(self, n_features):
super().__init__()
self.linear = torch.nn.Linear(N_FEATURES, 1)
torch.nn.init.constant_(self.linear.weight, 0)
torch.nn.init.constant_(self.linear.bias, 0)
def forward(self, x):
return self.linear(x)
torch_model = PyTorchNet(n_features=N_FEATURES)
torch_model = compat.PyTorch2RiverRegressor(
net=torch_model,
loss_fn=torch.nn.MSELoss(),
optimizer=torch.optim.SGD(torch_model.parameters(), lr=LEARNING_RATE)
)
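# The compat wrapper above is assumed to perform, for every incoming sample,
# roughly the following PyTorch steps (a sketch of its presumed behaviour, not
# the wrapper's actual code; `net` is the wrapped PyTorchNet and `optimizer`
# the SGD optimiser passed above):
#
#     x_t = torch.Tensor([list(x.values())])  # feature dict -> 1 x n_features tensor
#     y_t = torch.Tensor([[y]])
#     loss = torch.nn.MSELoss()(net(x_t), y_t)  # forward pass + mean squared error
#     loss.backward()                           # backpropagate
#     optimizer.step()                          # one SGD update
#     optimizer.zero_grad()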
from river import compose
from river import datasets
from river import feature_extraction
from river import metrics
from river import preprocessing
from river import stats
%run utils.py
def add_hour(x):
x['hour'] = x['moment'].hour
return x
results = benchmark(
get_X_y=datasets.Bikes,
n=182_470,
get_pp=lambda: (
compose.Select('clouds', 'humidity', 'pressure', 'temperature', 'wind') +
(
add_hour |
feature_extraction.TargetAgg(by=['station', 'hour'], how=stats.Mean())
) |
preprocessing.StandardScaler()
),
models=[
('river', 'LinearRegression', river_model),
('scikit-learn', 'SGDRegressor', sklearn_model),
('PyTorch (CPU)', 'Linear', torch_model),
],
get_metric=metrics.MSE
)
results
###Output
_____no_output_____
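###Markdown
The `benchmark` helper comes from `utils.py`, which is not included in this notebook. As a hedged sketch, the kind of progressive-validation loop it presumably runs (predict on each observation first, then learn from it, leaving the timing side out) might look like the code below; this is an assumption about its behaviour, not its actual code.
###Code
from river import compose, datasets, feature_extraction, metrics, preprocessing, stats

def add_hour(x):  # same helper as above, repeated so the sketch is self-contained
    x['hour'] = x['moment'].hour
    return x

def progressive_validation(model, n=182_470):
    """Run a river-compatible model over the bike-sharing stream and return its online MSE."""
    metric = metrics.MSE()
    pipeline = (
        compose.Select('clouds', 'humidity', 'pressure', 'temperature', 'wind') +
        (
            add_hour |
            feature_extraction.TargetAgg(by=['station', 'hour'], how=stats.Mean())
        ) |
        preprocessing.StandardScaler() |
        model
    )
    for i, (x, y) in enumerate(datasets.Bikes()):
        y_pred = pipeline.predict_one(x)  # predict before the label is revealed...
        metric.update(y, y_pred)
        pipeline.learn_one(x, y)          # ...then learn from the observation
        if i + 1 == n:
            break
    return metric

# e.g. progressive_validation(river_model, n=10_000)
###Output
_____no_output_____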
###Markdown
Linear regression This notebook compares various linear regression implementations. The dataset used is the [Toulouse bike sharing dataset](https://MaxHalford.github.io/generated/creme.datasets.fetch_bikes.html#creme.datasets.fetch_bikes).
###Code
%load_ext watermark
%watermark --python --machine --packages creme,keras,sklearn,tensorflow,torch,vowpalwabbit --datename
from creme import compose
from creme import datasets
from creme import feature_extraction
from creme import linear_model
from creme import metrics
from creme import optim
from creme import preprocessing
from creme import stats
from keras import layers
from keras import models
from keras import optimizers
from sklearn import linear_model as sk_linear_model
import torch
%run utils.py
%run wrappers.py
n_features = 6
lr = 0.005
class PyTorchNet(torch.nn.Module):
def __init__(self, n_features):
super().__init__()
self.linear = torch.nn.Linear(n_features, 1)
torch.nn.init.constant_(self.linear.weight, 0)
torch.nn.init.constant_(self.linear.bias, 0)
def forward(self, x):
return self.linear(x)
torch_model = PyTorchNet(n_features=n_features)
# Keras
inputs = layers.Input(shape=(n_features,))
predictions = layers.Dense(1, kernel_initializer='zeros', bias_initializer='zeros')(inputs)
keras_model = models.Model(inputs=inputs, outputs=predictions)
keras_model.compile(optimizer=optimizers.SGD(lr=lr), loss='mean_squared_error')
def add_hour(x):
x['hour'] = x['moment'].hour
return x
results = benchmark(
get_X_y=datasets.fetch_bikes,
n=182470,
get_pp=lambda: (
compose.Select('clouds', 'humidity', 'pressure', 'temperature', 'wind') +
(
add_hour |
feature_extraction.TargetAgg(by=['station', 'hour'], how=stats.Mean())
) |
preprocessing.StandardScaler()
),
models=[
('creme', 'LinearRegression', linear_model.LinearRegression(
optimizer=optim.SGD(lr),
l2=0.,
intercept_lr=lr
)),
('scikit-learn', 'SGDRegressor', ScikitLearnRegressor(
model=sk_linear_model.SGDRegressor(
learning_rate='constant',
eta0=lr,
penalty='none'
),
)),
('PyTorch (CPU)', 'Linear', PyTorchRegressor(
network=torch_model,
loss_fn=torch.nn.MSELoss(),
optimizer=torch.optim.SGD(torch_model.parameters(), lr=lr)
)),
('Keras on Tensorflow (CPU)', 'Dense', KerasRegressor(
model=keras_model
)),
('Vowpal Wabbit', '', VowpalWabbitRegressor(
loss_function='squared',
sgd=True,
learning_rate=lr,
adaptive=False,
normalized=False,
invariant=False,
            initial_weight=0,
l2=0.,
l1=0.,
power_t=0
))
],
get_metric=metrics.MSE
)
results
###Output
_____no_output_____